Hostinger KVM 8 · Ubuntu 24.04 LTS · Docker · CPU mode

self-hosted-ai-lab
7 AI Tools · 1 Server

// Ollama · Open WebUI · Claude Code · OpenRouter · OpenClaw · Sim.ai · n8n · CPU inference

Plan
KVM 8
Hostinger top tier
vCPU
8 cores
AMD EPYC
RAM
32 GB
DDR4
Storage
400 GB NVMe
vs 100 GB on Kamatera
Inference
CPU mode
Ollama on AMD EPYC
OS
Ubuntu 24.04
Noble Numbat LTS
Price
~$20/mo
vs $286 on Kamatera
Container Architecture · ai-stack Docker network
🦙
Ollama
:11434
💬
Open WebUI
:8080
✦C
Claude Code
CLI only
OpenRouter
API (cloud)
🦞
OpenClaw
:18789
n8n
n8n
:5678
sim
Sim.ai
:3000
🐘
PostgreSQL
:5432
🌐
Hostinger Server Setup
// Create VPS · Install Docker · Configure network & firewall
Resource | Hostinger KVM 8 | Why
vCPU | 8 cores AMD EPYC | Parallel container workloads
RAM | 32 GB | Sim.ai alone needs 12 GB
Storage | 400 GB NVMe | LLM models + DB growth
OS | Ubuntu 24.04 LTS (Noble Numbat) | cgroup v2 · best Docker support
1
Create Your Hostinger VPS
🔗Go to: hostinger.com/vps-hosting → Choose Plan → KVM 8

During setup select Ubuntu 24.04 LTS as the OS. After provisioning, note your server's public IP from the hPanel dashboard.

💡Hostinger provisions in under 60 seconds. You'll have hPanel access immediately with a built-in Docker Manager GUI included for free.
2
SSH Into Your Server from Mac
bash · your Mac terminal
ssh root@YOUR_SERVER_IP

# Verify Ubuntu version
lsb_release -a
# Expected: Ubuntu 24.04.x LTS

# Confirm cgroup v2 (Ubuntu 24.04 default)
stat -fc %T /sys/fs/cgroup
# Expected output: cgroup2fs  ✓
✏️Replace before running:
Placeholder | What to put | Where to find it
YOUR_SERVER_IP | Your Hostinger server's public IP address | hPanel → VPS section → click your server → IP shown at top of dashboard (e.g. 82.115.x.x)
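Optional convenience: an SSH host alias on your Mac saves retyping the IP in every later step. A minimal sketch — "ailab" is an arbitrary alias of my choosing, and YOUR_SERVER_IP is the same placeholder as above:

```shell
# Append a host alias to your Mac's SSH config (creates the file if missing).
mkdir -p ~/.ssh
cat >> ~/.ssh/config << 'EOF'
Host ailab
    HostName YOUR_SERVER_IP
    User root
EOF
```

After this, `ssh ailab` works anywhere this guide says `ssh root@YOUR_SERVER_IP`.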
3
Update System & Install Essentials
bash
apt update && apt upgrade -y
apt install -y curl wget git nano ufw fail2ban htop
4
Install Docker & Docker Compose
bash
# Official Docker install — works perfectly on Ubuntu 24.04
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
apt install -y docker-compose-plugin

# Verify
docker --version
docker compose version
Ubuntu 24.04 ships cgroup v2 by default — Docker detects it automatically. No manual configuration needed.
5
Create Docker Network & Folder Structure

One shared Docker bridge network lets all containers talk to each other by name — completely internal to the server, no extra Hostinger networking needed. Run each command separately:

bash · command 1 of 3
docker network create ai-stack
bash · command 2 of 3
mkdir -p ~/ai-stack/{ollama,claude-code,openclaw,sim}
bash · command 3 of 3
echo "✅ Network and folders ready"
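The brace expansion in command 2 creates all four tool folders in one shot. Since mkdir -p is idempotent, you can safely re-run it to verify the layout:

```shell
# Recreate (no-op if already present) and list the resulting layout.
mkdir -p ~/ai-stack/{ollama,claude-code,openclaw,sim}
ls ~/ai-stack
```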
6
Configure Firewall (UFW)

Run each command separately:

bash · command 1 of 5
ufw allow OpenSSH
bash · command 2 of 5
ufw allow 8080
bash · command 3 of 5
ufw allow 3000
bash · command 4 of 5
ufw allow 11434
bash · command 5 of 5 — enable & verify
ufw --force enable && ufw status
⚠️Restrict Ollama to your IP only in production: ufw allow from YOUR_IP to any port 11434
🦙
Ollama
// Local LLM runner · CPU mode · Port 11434 · ollama/ollama
💡Running in CPU mode on your Hostinger KVM 8. Inference is slower than GPU but fully functional — small models (3B–8B) respond in 5–20 seconds, which is fine for personal use and curriculum development.
1
Run Ollama Container (CPU mode)

This is a multi-line command. Create a script file, paste the content, make it executable, then run it:

bash · step 1 — create the script file
touch ~/ai-stack/ollama/run-ollama.sh
bash · step 2 — open in nano editor
nano ~/ai-stack/ollama/run-ollama.sh
bash · step 3 — paste this into nano, then press Ctrl+O to save, Ctrl+X to exit
#!/bin/bash
docker run -d \
  --name ollama \
  --network ai-stack \
  --restart unless-stopped \
  -v ollama-data:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama
bash · step 4 — make executable and run
chmod +x ~/ai-stack/ollama/run-ollama.sh
bash · step 5 — execute the script
bash ~/ai-stack/ollama/run-ollama.sh
2
Pull Models & Test
bash · command 1 — pull a model
docker exec -it ollama ollama pull llama3.2
bash · command 2 — test the API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello World!"
}'
bash · command 3 — list downloaded models
docker exec -it ollama ollama list
💡Other containers reach Ollama at http://ollama:11434 via the ai-stack network.
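The /api/generate call above streams its reply as one JSON object per token, which is noisy for quick checks. A small helper that requests a single non-streamed response can be handy — a sketch (the script name and location are my own choice; "stream": false is part of Ollama's API):

```shell
# Write a tiny helper: ask.sh MODEL "PROMPT" — returns one non-streaming response.
mkdir -p ~/ai-stack/ollama
cat > ~/ai-stack/ollama/ask.sh << 'EOF'
#!/bin/bash
# Usage: ./ask.sh llama3.2 "Why is the sky blue?"
model="$1"; shift
curl -s http://localhost:11434/api/generate \
  -d "{\"model\": \"$model\", \"prompt\": \"$*\", \"stream\": false}"
EOF
chmod +x ~/ai-stack/ollama/ask.sh
```

Running `~/ai-stack/ollama/ask.sh llama3.2 "Hello"` returns a single JSON object whose `response` field holds the full answer.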
3
Recommended Models by RAM
Model | RAM | Best for
llama3.2:3b | ~2 GB | Fast responses, low resource
llama3.1:8b | ~5 GB | Good quality general use
mistral:7b | ~5 GB | Coding + reasoning
deepseek-r1:8b | ~6 GB | Strong reasoning tasks
gemma2:9b | ~7 GB | Curriculum & education content
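In CPU mode these models load into system RAM rather than VRAM, so check headroom on the server before pulling a bigger one:

```shell
# Total vs. available memory — the model's table value should fit in "available" with headroom.
free -h | head -2
```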
💬
Open WebUI
// ChatGPT-style frontend for Ollama · ghcr.io/open-webui/open-webui · port 8080
Open WebUI Logo
Open WebUI
Extensible, self-hosted AI interface that runs entirely offline · supports Ollama and any OpenAI-compatible API · 90,000+ ⭐ on GitHub
🌐 openwebui.com · 📦 github.com/open-webui · 📚 docs
💡Why install this? Ollama alone is just an API — visiting http://localhost:11434 shows "Ollama is running" on a blank page. Open WebUI is the most popular Ollama frontend (90,000+ GitHub stars in 2026), giving you a polished ChatGPT-like web interface for your local models. Features include conversation history, model switching, document Q&A (RAG), file uploads, multi-user accounts, and an OpenAI-compatible API.
1
Create the Run Script
bash · step 1 — create folder and script
mkdir -p ~/ai-stack/open-webui && nano ~/ai-stack/open-webui/run-open-webui.sh
bash · step 2 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -d \
  --name open-webui \
  --network ai-stack \
  --restart unless-stopped \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  -e WEBUI_AUTH=true \
  -v open-webui-data:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
💡Notice no -p port mapping. Open WebUI listens on 8080 inside the container, but we don't expose it directly — Caddy reaches it via the internal ai-stack network. Until you're ready for HTTPS (Tab 15), you'll access it via SSH tunnel in Step 4. This is cleaner than exposing port 8080 to the public IP.

Make the script executable:

bash · step 3 — make executable
chmod +x ~/ai-stack/open-webui/run-open-webui.sh
2
Launch Open WebUI Container
bash · launch
bash ~/ai-stack/open-webui/run-open-webui.sh

First boot pulls the image (~600MB) — takes 1-2 minutes. Subsequent launches are instant.

bash · watch boot logs (Ctrl+C to exit)
docker logs open-webui -f

Wait for Uvicorn running on http://0.0.0.0:8080 — that's the ready signal.

3
Verify It's Running
bash · check container is up
docker ps --filter "name=open-webui" --format "{{.Status}}\t{{.Names}}"

Should show Up X seconds.

Verify it can reach Ollama (from inside the container):

bash · test ollama connection from open-webui
docker exec open-webui curl -s http://ollama:11434/api/tags | head -5

Should return JSON with your installed models — confirms network bridge works.

4
Access via SSH Tunnel (Initial Setup)

Until you set up HTTPS in Tab 15, you'll access Open WebUI via SSH tunnel from your Mac. The container has no published port, and Docker container names only resolve inside the Docker network — not on the host where sshd runs — so tunnel to the container's IP on the ai-stack bridge. First, on the server:

bash · on server — get the container's ai-stack IP
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' open-webui

Then, on your Mac terminal:

bash · SSH tunnel from your Mac
ssh -N -L 8080:CONTAINER_IP:8080 root@YOUR_SERVER_IP
💡Replace CONTAINER_IP with the IP from the inspect command (e.g. 172.18.0.3) and YOUR_SERVER_IP with your Hostinger server IP. The server's sshd can reach Docker bridge IPs directly. Note the container IP can change after a restart — re-run the inspect command if the tunnel stops connecting.

Keep that terminal open. Then open in your Mac browser:

browser URL
http://127.0.0.1:8080

Open WebUI sign-up page should load.

🔒After Tab 15 (HTTPS) is complete, you'll access Open WebUI at https://chat.pocketcode.in — no SSH tunnel needed.
5
Create Admin Account (First User)
⚠️The first user to sign up becomes admin. Do this immediately after launching — don't leave the instance accessible without an admin, or someone scanning the internet could claim the role.
  1. Click Sign up (not Sign in — there are no existing accounts)
  2. Enter your name, email, and a strong password
  3. Click Sign up → you're now admin
💡Future signups will land in a pending approval queue that only you (admin) can approve. This means even if Open WebUI is publicly accessible, random people can't just create accounts and use your models.
6
Pull a Model & Start Chatting

If you haven't pulled any models yet, do it from inside the Ollama container:

bash · on server — pull a starter model
docker exec ollama ollama pull llama3.2:3b

Or any model from Tab 3. Open WebUI will auto-detect it.

Back in the browser:

  1. Click the model dropdown at the top center of the chat window
  2. Select your model
  3. Type a message and hit Enter — responses stream in real-time
7
Useful Features to Explore
Feature | How to use
Document Q&A (RAG) | Click the 📎 paperclip in chat input → upload PDF/DOCX/MD → model answers questions about it
Knowledge collections | Profile → Workspace → Knowledge → New collection → upload multiple docs → reference in chat with #collection-name
Custom system prompts | Click ⚙️ on a chat → "System prompt" → pin instructions for the conversation
Model parameters | Same ⚙️ menu → adjust temperature, top_p, context length per chat
OpenAI-compatible API | Settings → Connections → enable API → point other tools at https://chat.pocketcode.in/api/ as if it were OpenAI
Multi-user | Admin panel → Users → invite teammates → approve their signups
💡Embedding model for RAG: For document Q&A to work well, pull a small embedding model: docker exec ollama ollama pull nomic-embed-text. Then in Open WebUI → Settings → Documents → set "Embedding Model" to nomic-embed-text.
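The OpenAI-compatible API row above can be exercised with curl. A hedged sketch — run on your Mac while the Step 4 tunnel is open; the key comes from your Open WebUI account settings, and the endpoint path follows Open WebUI's OpenAI-style convention:

```shell
# Write a quick API smoke test on your Mac: openwebui-api-test.sh YOUR_API_KEY
cat > ~/openwebui-api-test.sh << 'EOF'
#!/bin/bash
# Usage: ./openwebui-api-test.sh sk-...   (key from Open WebUI → Settings → Account)
curl -s http://127.0.0.1:8080/api/chat/completions \
  -H "Authorization: Bearer $1" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2:3b", "messages": [{"role": "user", "content": "ping"}]}'
EOF
chmod +x ~/openwebui-api-test.sh
```

A JSON chat-completion response confirms any OpenAI-client tool can use your stack the same way.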
🤖
Claude Code
// Anthropic CLI coding agent · @anthropic-ai/claude-code · node:20-slim
🔗Devcontainer docs: code.claude.com/docs/en/devcontainer · Docker sandbox: docs.docker.com
⚠️Requires Anthropic API key from console.anthropic.com
1
Create Dockerfile
bash · step 1 — create the file
touch ~/ai-stack/claude-code/Dockerfile
bash · step 2 — open in nano
nano ~/ai-stack/claude-code/Dockerfile
dockerfile · step 3 — paste this, then Ctrl+O save, Ctrl+X exit
FROM node:20-slim
RUN useradd -m -u 1001 claude
RUN npm install -g @anthropic-ai/claude-code
WORKDIR /workspace
USER claude
ENTRYPOINT ["claude"]
2
Build Image
bash · command 1 — go to claude-code folder
cd ~/ai-stack/claude-code
bash · command 2 — build the image
docker build -t claude-code .
3
Run on a Project

Create a run script for interactive use:

bash · step 1 — create script
touch ~/ai-stack/claude-code/run-claude.sh
bash · step 2 — open in nano
nano ~/ai-stack/claude-code/run-claude.sh
bash · step 3 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -it --rm \
  --name claude-code \
  --network ai-stack \
  -v "$(pwd)":/workspace \
  -v ai-shared-data:/shared \
  -e ANTHROPIC_API_KEY="sk-ant-xxxx" \
  claude-code
✏️Replace before saving:
Placeholder | What to put | Where to get it
sk-ant-xxxx | Your Anthropic API key | console.anthropic.com → API Keys
bash · step 4 — make executable
chmod +x ~/ai-stack/claude-code/run-claude.sh
bash · step 5 — run from any project folder
bash ~/ai-stack/claude-code/run-claude.sh
💡Full community setup with MCP + persistent config: github.com/VishalJ99/claude-docker
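For one-shot, non-interactive runs (e.g. from other scripts), Claude Code's print mode (-p) can be wrapped the same way — a sketch, with the script name my own choice and sk-ant-xxxx the same placeholder as above:

```shell
# Write a one-shot wrapper: run-claude-once.sh "your prompt"
mkdir -p ~/ai-stack/claude-code
cat > ~/ai-stack/claude-code/run-claude-once.sh << 'EOF'
#!/bin/bash
# Usage: run-claude-once.sh "Summarize this repo" — mounts the current folder.
# -p prints a single response and exits instead of opening the interactive UI.
docker run -i --rm \
  --network ai-stack \
  -v "$(pwd)":/workspace \
  -e ANTHROPIC_API_KEY="sk-ant-xxxx" \
  claude-code -p "$*"
EOF
chmod +x ~/ai-stack/claude-code/run-claude-once.sh
```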
🔀
OpenRouter
// Cloud API — not self-hosted · Provides 200+ models via one API key
⚠️OpenRouter is a cloud service — no Docker image. You use it by pointing tools to https://openrouter.ai/api/v1 with your API key.
1
Get Your API Key

Generate a key and save it. You'll paste it into OpenClaw and Sim.ai settings to give them access to 200+ models including Claude, GPT-4o, Gemini, and more.

2
Optional: LiteLLM Proxy Container

One internal endpoint for all containers instead of hardcoding OpenRouter URLs everywhere:

bash · step 1 — create script
touch ~/ai-stack/run-openrouter-proxy.sh
bash · step 2 — open in nano
nano ~/ai-stack/run-openrouter-proxy.sh
bash · step 3 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -d \
  --name openrouter-proxy \
  --network ai-stack \
  --restart unless-stopped \
  -e OPENROUTER_API_KEY="sk-or-v1-xxxx" \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-latest \
  --model openrouter/anthropic/claude-3.5-sonnet \
  --port 4000
✏️Replace before saving:
Placeholder | What to put | Where to get it
sk-or-v1-xxxx | Your OpenRouter API key | openrouter.ai/keys
bash · step 4 — make executable and run
chmod +x ~/ai-stack/run-openrouter-proxy.sh && bash ~/ai-stack/run-openrouter-proxy.sh
💡Other containers call this proxy at http://openrouter-proxy:4000 via the ai-stack network.
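To confirm the proxy is answering, hit its OpenAI-style chat endpoint — a sketch (LiteLLM serves the standard /v1/chat/completions route; the model name matches the one the proxy was started with):

```shell
# Write a proxy smoke test (run it on the server after the proxy is up).
cat > ~/ai-stack/test-openrouter-proxy.sh << 'EOF'
#!/bin/bash
curl -s http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "openrouter/anthropic/claude-3.5-sonnet",
       "messages": [{"role": "user", "content": "ping"}]}'
EOF
chmod +x ~/ai-stack/test-openrouter-proxy.sh
```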
🦞
OpenClaw
// Personal AI assistant · CLI chat via ClickClack · OpenRouter + Ollama
🔗GitHub: github.com/openclaw/openclaw · Docs: docs.openclaw.ai/install/docker · Pre-built: github.com/phioranex/openclaw-docker
1
Pull the Docker Image
bash
docker pull ghcr.io/phioranex/openclaw-docker:latest
2
Fix Permissions (Required — avoids EACCES error)

The container runs as the node user (UID 1000) but the host directories are owned by root. Pre-create them with open permissions to avoid a permission denied error during onboarding:

bash · command 1 — create required directories
mkdir -p ~/.openclaw/agents/main/agent
bash · command 2 — create workspace directory
mkdir -p ~/.openclaw/workspace
bash · command 3 — open permissions so container can write
chmod -R 777 ~/.openclaw
⚠️Skipping this step causes: Error: EACCES: permission denied, mkdir '/home/node/.openclaw/agents/main/agent'
3
Create & Run the Onboarding Script
bash · step 1 — create script
touch ~/ai-stack/openclaw/onboard-openclaw.sh
bash · step 2 — open in nano
nano ~/ai-stack/openclaw/onboard-openclaw.sh
bash · step 3 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -it --rm \
  -v ~/.openclaw:/home/node/.openclaw \
  -v ~/.openclaw/workspace:/home/node/.openclaw/workspace \
  ghcr.io/phioranex/openclaw-docker:latest onboard
bash · step 4 — make executable and run
chmod +x ~/ai-stack/openclaw/onboard-openclaw.sh && bash ~/ai-stack/openclaw/onboard-openclaw.sh
4
Onboarding Wizard — Answer Guide

The wizard asks several questions. Here are the recommended answers for your setup:

AI Provider

💡Select OpenRouter and enter your sk-or-v1-xxxx key from openrouter.ai/keys. OpenRouter gives you 200+ models on one key and has free tier models — more cost-friendly than direct Anthropic API.

Model

💡Select Auto — OpenRouter picks the best model per request. For free usage, you can also set: meta-llama/llama-3.1-8b-instruct:free

Channel (QuickStart)

💡Select ClickClack — OpenClaw's built-in web chat. No external accounts or bot setup needed. Others (Discord, Telegram, Slack etc.) require extra API tokens.

Search Provider

💡Select DuckDuckGo Search — free, no API key required, works immediately. Others cost money or need extra setup.

Install Missing Skill Dependencies

💡Select Skip for now — install plugins later only when you need them (e.g. github, nano-pdf, obsidian). None are required to get started.

API Keys for Plugins (goplaces, notion, openai-whisper etc.)

💡Select No for all of them — you skipped the plugins so none of these keys are needed.

Enable Hooks

💡Select 📝 command-logger and 💾 session-memory using Spacebar, then Enter. Skip the rest. session-memory lets OpenClaw remember previous conversations.

How to Hatch

💡Select Hatch in Terminal — launches OpenClaw immediately so you can test it right away.
5
Verify OpenClaw is Running

After hatching you should see this status bar at the bottom:

expected output
local ready | idle
agent main | session main | openrouter/openrouter/auto | tokens ?/200k
What it means: local ready = running · idle = waiting for input · openrouter/auto = connected to OpenRouter · 200k = context window. Now just type and press Enter to chat.
💡The ? on tokens just means no messages sent yet — it shows a number once you start chatting.
6
How to Start OpenClaw for Daily Use

After onboarding is done, do not run the onboard script again. Instead create a dedicated start script that launches straight into chat:

bash · command 1 — create start script
touch ~/ai-stack/openclaw/start-openclaw.sh
bash · command 2 — open in nano
nano ~/ai-stack/openclaw/start-openclaw.sh
bash · command 3 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -it --rm \
  -v ~/.openclaw:/home/node/.openclaw \
  -v ~/.openclaw/workspace:/home/node/.openclaw/workspace \
  ghcr.io/phioranex/openclaw-docker:latest chat
⚠️The chat command at the end is required. Without it, Docker just prints the help menu and exits — it does not open the chat interface.
bash · command 4 — make executable
chmod +x ~/ai-stack/openclaw/start-openclaw.sh
bash · command 5 — launch OpenClaw
bash ~/ai-stack/openclaw/start-openclaw.sh
💡Use this script every time you want to chat with OpenClaw. It skips onboarding and goes straight to the terminal UI.
7
How to Exit OpenClaw

There are two ways to exit — use the in-app command first, Ctrl+C only as a last resort:

preferred — type inside OpenClaw chat then press Enter
/exit
alternative
/quit
⚠️If the terminal freezes and Ctrl+C doesn't work, open a new SSH session and run: pkill -f "openclaw" — then close the frozen terminal window.
8
Start OpenClaw as a Background Container (Gateway Mode)

To run OpenClaw as a persistent background gateway service, create a startup script. The gateway run command at the end is required — without it the container just prints the help menu and crashes in a restart loop.

bash · step 1 — create script using heredoc (avoids copy-paste issues)
cat > ~/ai-stack/openclaw/run-openclaw.sh << 'SCRIPT'
#!/bin/bash
docker run -d \
  --name openclaw \
  --network ai-stack \
  --restart unless-stopped \
  -v ~/.openclaw:/home/node/.openclaw \
  -v ~/.openclaw/workspace:/home/node/.openclaw/workspace \
  -p 18789:18789 \
  -e ANTHROPIC_API_KEY="sk-ant-xxxx" \
  -e OPENROUTER_API_KEY="sk-or-v1-xxxx" \
  -e OLLAMA_BASE_URL="http://ollama:11434" \
  -e DATABASE_URL="postgresql://postgres:postgres@sim-db-1:5432/openclaw" \
  -e OPENCLAW_GATEWAY_BIND=lan \
  ghcr.io/phioranex/openclaw-docker:latest gateway run
SCRIPT
⚠️Port 18789 is the only port you need. It serves both the Control UI / Dashboard and the gateway WebSocket. You'll access it via SSH tunnel in Step 9.
📋About port 18791 (browser plugin): Earlier versions of this guide exposed port 18791 for a browser extension. We've removed it — see the "Known Limitation" callout at the bottom of this tab for details.
✏️Edit the keys before running:
Placeholder | What to put | Where to get it
sk-ant-xxxx | Your Anthropic API key (optional) | console.anthropic.com → API Keys
sk-or-v1-xxxx | Your OpenRouter API key | openrouter.ai/keys
Open with nano ~/ai-stack/openclaw/run-openclaw.sh to edit. Use heredoc (above) instead of typing the script line by line — avoids invisible character / backslash continuation errors.
bash · step 2 — make executable and run
chmod +x ~/ai-stack/openclaw/run-openclaw.sh && bash ~/ai-stack/openclaw/run-openclaw.sh
bash · step 3 — verify it is running (not Restarting)
docker ps | grep openclaw
⚠️If it shows Restarting (0) — check logs with docker logs openclaw --tail 30. Most common cause: missing gateway run at the end of the docker image line.
9
Verify Installation — Access Dashboard via SSH Tunnel

This is your final verification — confirms OpenClaw is fully working with browser access. Complete it to make sure the install succeeded end-to-end.

⚠️Why direct browser access doesn't work: OpenClaw's Control UI binds to 127.0.0.1 inside the container (known bug — issue #30990). Docker port mapping cannot forward external traffic to a localhost-only service. The solution is an SSH tunnel + auth token.

Step A — Set bind mode to lan (one-time fix)

The config file overrides env vars, so even with OPENCLAW_GATEWAY_BIND=lan in the script, the saved config defaults to loopback. Override it explicitly:

bash · command 1 — on server SSH session
docker exec openclaw node /app/dist/index.js config set gateway.bind lan
bash · command 2 — restart to apply
docker restart openclaw

Step B — Verify port 18789 is mapped

bash · command 3 — check port mappings
docker port openclaw

Expected output — should show port 18789:

expected output
18789/tcp -> 0.0.0.0:18789
18789/tcp -> [::]:18789
⚠️If port 18789 is missing — your run script doesn't have -p 18789:18789. Re-do Step 8 with the corrected script and redeploy.

Step C — Get your auth token

The dashboard requires a token to authenticate your browser session. The token is generated during onboarding and stored in the config file:

bash · command 4 — print the config file
docker exec openclaw cat /home/node/.openclaw/openclaw.json

Find the gateway.auth.token value — it looks like this:

json · find this in the output
"auth": {
  "mode": "token",
  "token": "5a72ec1c666424130b638942c6fbb55c17132c686391d25e"
}
💡Copy the token value (the long hex string in quotes). You'll paste it into the dashboard URL in Step E.
⚠️Do not use docker exec openclaw node /app/dist/index.js config get gateway.auth.token — it returns __OPENCLAW_REDACTED__ for security. Read the JSON file directly.
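If you'd rather not eyeball the JSON, the token can be pulled out with grep/sed (no jq needed — it wasn't installed in Tab 1 Step 3). A sketch, demonstrated on a sample string shaped like the real config; the token value here is the hypothetical one from the example above:

```shell
# Sample mimicking the auth block in openclaw.json.
sample='"auth": { "mode": "token", "token": "5a72ec1c666424130b638942c6fbb55c17132c686391d25e" }'
# Extract just the hex string: grep isolates the key/value pair, sed strips the quoting.
token=$(printf '%s' "$sample" | grep -o '"token": *"[a-f0-9]*"' | sed 's/.*"\([a-f0-9]*\)"$/\1/')
echo "$token"
# → 5a72ec1c666424130b638942c6fbb55c17132c686391d25e
```

Against the real file, the same pipeline looks like:
`docker exec openclaw cat /home/node/.openclaw/openclaw.json | grep -o '"token": *"[a-f0-9]*"' | sed 's/.*"\([a-f0-9]*\)"$/\1/'`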

Step D — Open SSH tunnel from your Mac

Open a brand new terminal window on your Mac (not your server SSH session) and run:

bash · command 5 — run on your MAC (new terminal)
ssh -N -L 18789:127.0.0.1:18789 root@YOUR_SERVER_IP
✏️Replace: YOUR_SERVER_IP → your Hostinger server IP (from hPanel)
💡Enter your server password. The terminal will appear to hang silently — that's correct. The -N flag means "no command, just forward". Leave this terminal open and untouched. Closing it kills the tunnel.

Step E — Open OpenClaw dashboard in browser with token

Build your URL by replacing YOUR_TOKEN_HERE with the token from Step C:

url — paste into Mac browser address bar
http://127.0.0.1:18789/#token=YOUR_TOKEN_HERE
✏️Replace: YOUR_TOKEN_HERE → the token value from Step C (just the hex string, no quotes)
The OpenClaw dashboard should load. You can now chat with your agent, manage skills, configure channels, and view sessions — all from your Mac browser. Verification complete.

Daily usage

To access OpenClaw later, just repeat Step D (SSH tunnel) and Step E (browser URL with token). The token doesn't expire unless you regenerate it. Save the full URL as a browser bookmark for one-click access.

Troubleshooting

Problem | Fix
"This site can't be reached" | SSH tunnel terminal is closed. Re-run command 5 on your Mac.
"This page isn't working" / ERR_EMPTY_RESPONSE | Tunnel forwarding to wrong port inside container. Make sure it's 18789:127.0.0.1:18789.
"docker port openclaw" shows nothing | Script missing -p 18789:18789. Stop/rm container, fix script (Step 8), redeploy.
Dashboard loads but shows "Unauthorized" | Token wrong or missing in URL. Re-copy from Step C, paste after #token=
"address already in use" on Mac tunnel | Existing tunnel running. Kill it: pkill -f "18789:127.0.0.1"
Container restarts in a loop | Check docker logs openclaw --tail 30 — usually missing gateway run at end of script
🛑Known Limitation — Browser Extension does NOT work with self-hosted OpenClaw.

If you discover the OpenClaw browser extension (Chrome Web Store) and try to point it at this self-hosted instance, it will fail with errors like:
  • "Empty reply from server" on port 18791
  • "Wrong port: this is likely the gateway, not the relay. Use gateway port + 3"
  • "Relay not reachable/authenticated at http://127.0.0.1:18792/"
The extension expects a "relay" service on port 18792 that exists only in OpenClaw's hosted/cloud edition. It is NOT shipped in self-hosted OpenClaw 2026.5.12 (verified by inspecting the binary — there are zero relay.* config keys).

Recommended path: The official OpenClaw docs recommend "Browser control via node host" instead — a different mechanism where you pair a separate device/node that runs the browser and OpenClaw controls it via CDP. See docs.openclaw.ai → Browser control via node host if you ever need this. For most use cases, the Dashboard + agents + channels work fine without the extension.
🔀
n8n — Workflow Automation
// Self-hosted workflow automation · Port 5678 · Connects all services together
🔗Docs: docs.n8n.io/hosting/docker · Image: hub.docker.com/r/n8nio/n8n
💡What n8n does: Visual workflow builder that connects your AI services together. Trigger workflows on schedules, webhooks, or events — chain Ollama → OpenRouter → PostgreSQL → email/Slack with no code. Think of it as the "glue" for your AI stack.
📋Prerequisites — complete these first:
  • Tab 12 (Databases) Step 2 — Create the n8n database in sim-db-1
  • Tab 11 (Shared Storage) Step 1 — Create the ai-shared-data Docker volume and /root/ai-stack/uploads folder (run script below mounts both)
  • Tab 1 (Server Setup) — The ai-stack Docker network must exist
1
Generate Encryption Key

n8n encrypts all credentials (API keys, passwords) with a master encryption key. Generate one and save it — losing it means losing access to all stored credentials.

bash · command — generate encryption key (copy the output)
openssl rand -hex 32
⚠️Copy this output to a safe place (1Password, secure note, etc.). You'll need it for the run script below.
2
Create Run Script
bash · step 1 — create folder and script
mkdir -p ~/ai-stack/n8n && touch ~/ai-stack/n8n/run-n8n.sh
bash · step 2 — open in nano
nano ~/ai-stack/n8n/run-n8n.sh
bash · step 3 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -d \
  --name n8n \
  --network ai-stack \
  --restart unless-stopped \
  -p 5678:5678 \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=sim-db-1 \
  -e DB_POSTGRESDB_PORT=5432 \
  -e DB_POSTGRESDB_DATABASE=n8n \
  -e DB_POSTGRESDB_USER=postgres \
  -e DB_POSTGRESDB_PASSWORD=postgres \
  -e N8N_ENCRYPTION_KEY="YOUR_ENCRYPTION_KEY_HERE" \
  -e N8N_HOST=YOUR_SERVER_IP \
  -e N8N_PORT=5678 \
  -e N8N_PROTOCOL=http \
  -e WEBHOOK_URL=http://YOUR_SERVER_IP:5678/ \
  -e GENERIC_TIMEZONE=Asia/Kolkata \
  -e N8N_RUNNERS_ENABLED=true \
  -v n8n-data:/home/node/.n8n \
  -v ai-shared-data:/shared \
  -v /root/ai-stack/uploads:/uploads \
  n8nio/n8n
✏️Replace before saving:
Placeholder | What to put | Where to get it
YOUR_ENCRYPTION_KEY_HERE | The hex string from Step 1 | Output of openssl rand -hex 32
YOUR_SERVER_IP | Your Hostinger server IP (appears 2 times) | hPanel → VPS dashboard
Asia/Kolkata | Your timezone (optional) | Change if you're outside India — e.g. America/New_York, Europe/London
Leave database, network, and volume settings exactly as shown.
3
Run n8n
🔒No firewall rule needed. We're accessing n8n via SSH tunnel (next step) so port 5678 stays closed to the public internet — more secure. The container still binds to 0.0.0.0:5678 internally so other Docker containers on ai-stack can reach it at http://n8n:5678.
bash · command 1 — make executable and run
chmod +x ~/ai-stack/n8n/run-n8n.sh && bash ~/ai-stack/n8n/run-n8n.sh
bash · command 2 — verify running
docker ps | grep n8n

Expected — must show Up X seconds with port mapping:

expected output
...   n8nio/n8n   ...   Up 8 seconds   0.0.0.0:5678->5678/tcp   n8n
⚠️If Restarting (1) — check logs: docker logs n8n --tail 30. Common causes: wrong encryption key format, can't reach sim-db-1, n8n database doesn't exist in PostgreSQL.
💡If you ever want external webhook access (for Slack, Discord, GitHub triggers), open port 5678 later with ufw allow 5678 AND add -e N8N_SECURE_COOKIE=false to the run script.
4
Access Web UI via SSH Tunnel & Create Owner Account
🔒Why SSH tunnel? n8n enforces secure cookies by default — browsers refuse to set the auth cookie over plain HTTP unless the host is localhost/127.0.0.1 (treated as a secure origin). Direct access via http://YOUR_SERVER_IP:5678 would show: "Your n8n server is configured to use a secure cookie..." error and block login. SSH tunnel solves this AND keeps port 5678 closed to the public internet.

On your Mac terminal (NOT the server SSH session), open a tunnel:

bash · run on Mac terminal — keep this running
ssh -N -L 5678:127.0.0.1:5678 root@YOUR_SERVER_IP
✏️Replace: YOUR_SERVER_IP → your Hostinger server IP. Leave the terminal running (no prompt will appear — that's correct, the tunnel is active).

Then open in Chrome / Safari on your Mac:

url — open in Mac browser
http://127.0.0.1:5678

On first launch, n8n shows an account creation page. Fill in:

Field | What to enter
Email | Your email (real or test — used for password reset)
First / Last name | Your name
Password | Strong password — 8+ chars, mixed case, number
After signup you land on the n8n workflow canvas. The self-hosted Community Edition is free forever with unlimited workflows and executions.
💡Daily use: open the SSH tunnel each time you want to use n8n. To make it easier, add a Mac terminal alias: echo 'alias n8n-tunnel="ssh -N -L 5678:127.0.0.1:5678 root@YOUR_SERVER_IP"' >> ~/.zshrc — then just run n8n-tunnel from any Mac terminal.
5
Set Up Credentials for Your AI Stack

In n8n, click Personal in the left sidebar → switch to the Credentials tab → Create Credential (top right). Add credentials for the services you'll use in workflows:

💡Alternative path: when building a workflow, click any service node (Ollama, Postgres, etc.) and use the "Credential to connect with" dropdown → + Create new credential. Same result, created inline while you build.

A. Ollama credential

Search for "Ollama" in the credential type picker, then:

Field | Value
Base URL | http://ollama:11434

B. OpenRouter credential (OpenAI-compatible)

Search for "OpenAI" credential type (OpenRouter uses the OpenAI API spec):

Field | Value
API Key | Your sk-or-v1-xxxx key
Base URL | https://openrouter.ai/api/v1

C. PostgreSQL credential (shared database)

Search for "Postgres" credential type:

Field | Value
Host | sim-db-1
Database | ailab (or whichever you need)
User | postgres
Password | postgres
Port | 5432
SSL | Disable

D. Anthropic credential (optional)

Field | Value
API Key | Your sk-ant-xxxx key
💡Click "Test" before saving each credential — n8n will verify it can actually connect. If Postgres test fails, check that sim-db-1 is on the ai-stack network (Tab 12 Step 1).
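The Postgres fields above assemble into a standard libpq-style connection string, which is handy when debugging from the command line. A sketch using the defaults from the table (all values are the Tab 12 defaults, not anything n8n stores):

```shell
# Assemble the DSN from the credential fields shown in the table.
PGHOST=sim-db-1; PGPORT=5432; PGDATABASE=ailab; PGUSER=postgres; PGPASSWORD=postgres
DSN="postgresql://${PGUSER}:${PGPASSWORD}@${PGHOST}:${PGPORT}/${PGDATABASE}?sslmode=disable"
echo "$DSN"
# → postgresql://postgres:postgres@sim-db-1:5432/ailab?sslmode=disable
# Connectivity check from the server (assumes the postgres image is available to pull):
#   docker run --rm --network ai-stack postgres:16-alpine pg_isready -h sim-db-1 -p 5432
```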
6
Build Your First Workflow — Test the Stack

A quick test workflow to confirm everything works together. Click + Add workflow in n8n.

  1. Add a Manual Trigger node (click + → search "manual")
  2. Add an Ollama node → connect after Manual Trigger
    • Credential: Pick the Ollama credential from Step 5A
    • Operation: Generate Text
    • Model: llama3.2 (or whatever you've pulled)
    • Prompt: Say hello to the world
  3. Add a Postgres node → connect after Ollama
    • Credential: Pick the Postgres credential from Step 5C
    • Operation: Execute Query
    • Query: INSERT INTO ai_outputs (source, data) VALUES ('n8n', '{{ JSON.stringify($json) }}')
  4. Click Execute Workflow
If both nodes show green checkmarks, your full stack works: n8n → Ollama → PostgreSQL. You can now build any automation chaining all your services.
⚠️If the Postgres step fails with "relation ai_outputs does not exist", you haven't created the table yet — go to Tab 12 Step 5 first.
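If you want to pre-create the table without leaving this tab, here's a hedged sketch of a schema that satisfies the INSERT above — the real definition lives in Tab 12 Step 5, so treat any column beyond source and data as my assumption:

```shell
# Write the DDL to a file; source/data match the INSERT, id/created_at are assumed extras.
cat > /tmp/ai_outputs.sql << 'SQL'
CREATE TABLE IF NOT EXISTS ai_outputs (
  id         SERIAL PRIMARY KEY,
  source     TEXT NOT NULL,
  data       JSONB,
  created_at TIMESTAMPTZ DEFAULT now()
);
SQL
# Apply it inside the database container (sim-db-1 / ailab from Tab 12):
#   docker exec -i sim-db-1 psql -U postgres -d ailab < /tmp/ai_outputs.sql
```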
7
Useful n8n Workflow Patterns for Your Stack
Pattern | Nodes to chain | Use case
RAG pipeline | Read File → Ollama Embed → Postgres (pgvector) Insert | Embed docs from /uploads for semantic search
Scheduled summary | Cron → Postgres Query → OpenRouter (Claude) → Email | Daily AI-generated reports
Webhook → AI | Webhook → OpenRouter → Slack/Discord | External apps trigger AI responses
File processing | Watch /uploads → Ollama → Write to /shared | Auto-process uploaded files
OpenClaw notifier | Postgres trigger → HTTP Request to OpenClaw | OpenClaw notifies you on DB events
💡n8n has 400+ pre-built nodes — Gmail, Slack, GitHub, Notion, etc. Check the node library in the workflow canvas.
🧪
Sim.ai (SimStudio)
// Visual AI agent workflow builder · Port 3000 · Redis + PostgreSQL required
🔗GitHub: github.com/simstudioai/sim · Docs: docs.sim.ai/self-hosting/docker
⚠️Sim.ai requires minimum 2 vCPU · 12 GB RAM · 20 GB storage. Hostinger KVM 8 covers this with plenty of headroom.
1
Clone the Repository
bash · command 1 — go to sim folder
cd ~/ai-stack/sim
bash · command 2 — clone the repo
git clone https://github.com/simstudioai/sim.git .
bash · command 3 — confirm docker-compose.prod.yml is present
ls
2
Add Redis to docker-compose.prod.yml (Required)

Sim.ai's realtime container requires Redis. The default compose file does not include it — you must add it manually, otherwise sim-realtime-1 will fail with getaddrinfo ESERVFAIL.

bash — open compose file
nano ~/ai-stack/sim/docker-compose.prod.yml

Edit 1 — Find the realtime service's depends_on block and add redis to it:

yaml · find this (realtime depends_on)
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ['CMD', 'curl', '-fsS', 'http://127.0.0.1:3002/health']
yaml · replace with this
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ['CMD', 'curl', '-fsS', 'http://127.0.0.1:3002/health']

Edit 2 — Find the volumes: section at the very bottom and add the Redis service above it. The indentation must be exactly 2 spaces:

yaml · find this (bottom of file)
volumes:
  postgres_data:
yaml · replace with this (2 spaces before redis:)
  redis:
    image: redis:alpine
    restart: unless-stopped
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
⚠️Indentation is critical in YAML. The redis: line must have exactly 2 spaces before it — same as db: and realtime:. Zero spaces causes a validation error.

Save with Ctrl+O then Ctrl+X.
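Before moving on, you can ask Compose itself to validate the edit: docker compose config --quiet parses the file and prints nothing on success, so a bad indent is caught before any container starts. A sketch wrapped in a function; the DOCKER variable is there only so the command can be dry-run:

```shell
#!/bin/bash
# Sketch: validate the edited compose file without starting anything.
validate_compose() {
  local docker_cmd="${DOCKER:-docker}"   # overridable for dry runs
  local file="${1:-$HOME/ai-stack/sim/docker-compose.prod.yml}"
  if "$docker_cmd" compose -f "$file" config --quiet; then
    echo "compose file OK"
  else
    echo "compose file INVALID: fix indentation before docker compose up"
    return 1
  fi
}

# Usage: validate_compose
```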

3
Create .env File

Generate your 3 secret keys first — run each command separately and copy each output:

bash · command 1 — generate BETTER_AUTH_SECRET
openssl rand -hex 32
bash · command 2 — generate ENCRYPTION_KEY
openssl rand -hex 32
bash · command 3 — generate INTERNAL_API_SECRET
openssl rand -hex 32
bash · command 4 — create and open .env
touch ~/ai-stack/sim/.env && nano ~/ai-stack/sim/.env
config · command 5 — paste this entire block into nano, then Ctrl+O save, Ctrl+X exit
DATABASE_URL=postgresql://postgres:postgres@db:5432/simstudio
BETTER_AUTH_SECRET=PASTE_KEY_1_HERE
ENCRYPTION_KEY=PASTE_KEY_2_HERE
INTERNAL_API_SECRET=PASTE_KEY_3_HERE
NEXT_PUBLIC_APP_URL=http://YOUR_SERVER_IP:3000
BETTER_AUTH_URL=http://YOUR_SERVER_IP:3000
ANTHROPIC_API_KEY=sk-ant-xxxx
OPENROUTER_API_KEY=sk-or-v1-xxxx
OLLAMA_URL=http://ollama:11434
REDIS_URL=redis://redis:6379
✏️Replace every placeholder before saving:
PlaceholderWhat to putWhere to get it
PASTE_KEY_1_HEREOutput of command 1Copy from terminal output above
PASTE_KEY_2_HEREOutput of command 2Copy from terminal output above
PASTE_KEY_3_HEREOutput of command 3Copy from terminal output above
YOUR_SERVER_IPYour Hostinger server IPhPanel → VPS dashboard → IP shown at top
sk-ant-xxxxYour Anthropic API keyconsole.anthropic.com → API Keys
sk-or-v1-xxxxYour OpenRouter API keyopenrouter.ai/keys
Leave DATABASE_URL, OLLAMA_URL and REDIS_URL exactly as shown.
⚠️Use OLLAMA_URL not OLLAMA_BASE_URL — the compose file reads OLLAMA_URL. Using the wrong variable name causes Sim.ai to fall back to localhost:11434 which doesn't work inside Docker.
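If you'd rather script this step, the three openssl commands and the nano session can be collapsed into one pass. A sketch, assuming openssl is installed (it is on Ubuntu 24.04); it writes the same template with fresh keys, and you still replace YOUR_SERVER_IP and the two API keys by hand afterwards:

```shell
#!/bin/bash
# Sketch: generate the three secrets and write a starter .env in one pass.
set -eu

ENV_FILE="${1:-$HOME/ai-stack/sim/.env}"
mkdir -p "$(dirname "$ENV_FILE")"

gen() { openssl rand -hex 32; }   # 64 hex chars, same as the manual commands

cat > "$ENV_FILE" <<EOF
DATABASE_URL=postgresql://postgres:postgres@db:5432/simstudio
BETTER_AUTH_SECRET=$(gen)
ENCRYPTION_KEY=$(gen)
INTERNAL_API_SECRET=$(gen)
NEXT_PUBLIC_APP_URL=http://YOUR_SERVER_IP:3000
BETTER_AUTH_URL=http://YOUR_SERVER_IP:3000
ANTHROPIC_API_KEY=sk-ant-xxxx
OPENROUTER_API_KEY=sk-or-v1-xxxx
OLLAMA_URL=http://ollama:11434
REDIS_URL=redis://redis:6379
EOF

echo "Wrote $ENV_FILE. Now replace YOUR_SERVER_IP and the two API keys."
```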
4
Connect Ollama to Sim.ai's Network

Sim.ai runs on the sim_default Docker network but Ollama runs on ai-stack. They need to be on the same network. Connect Ollama to Sim.ai's network:

bash
docker network connect sim_default ollama
💡This adds Ollama to both networks simultaneously — it stays on ai-stack for OpenClaw and is also reachable from sim_default for Sim.ai. Run this after docker compose up creates the sim_default network.
⚠️This command must be re-run after every docker compose down because that destroys the sim_default network and recreates it fresh on the next up.
5
Start All Services
bash · command 1 — start Sim.ai stack
docker compose -f docker-compose.prod.yml up -d

You should see all five containers start successfully:

expected output
✔ sim-redis-1      Healthy
✔ sim-db-1         Healthy
✔ sim-migrations-1 Exited   ← correct, runs once then exits
✔ sim-realtime-1   Healthy
✔ sim-simstudio-1  Started
⚠️If sim-realtime-1 shows Error — check Redis is defined in the compose file with correct 2-space indentation (Step 2) and REDIS_URL=redis://redis:6379 is in your .env (Step 3).
bash · command 2 — connect Ollama to Sim.ai network
docker network connect sim_default ollama
bash · command 3 — restart simstudio to pick up Ollama connection
docker compose -f docker-compose.prod.yml restart simstudio
bash · command 4 — watch logs to confirm ready
docker compose -f docker-compose.prod.yml logs -f
💡Press Ctrl+C to stop watching logs — it does not stop the containers.
6
Create Your Account & Log In

Open Sim.ai in your browser:

url — open in your Mac browser
http://YOUR_SERVER_IP:3000
⚠️Click Sign Up — not Sign In. No account exists yet. Signing in first causes a "User not found" error in the logs.
💡GitHub and Google login show warnings in logs — this is harmless. They're not configured. Use email + password signup instead.
7
Warnings to Ignore

These warnings appear in logs on every startup — all are harmless:

WarningMeaningAction
COPILOT_API_KEY variable is not setGitHub Copilot integration — optionalIgnore
SIM_AGENT_API_URL variable is not setOptional agent URLIgnore
Social provider github is missing clientIdGitHub OAuth not configuredIgnore — use email login
Social provider google is missing clientIdGoogle OAuth not configuredIgnore — use email login
Redis does not require authenticationRedis has no passwordFine — Redis is not exposed to internet
Memory overcommit must be enabledRedis performance warningIgnore for personal use
8
Stop & Restart Sim.ai
bash · stop all containers
docker compose -f docker-compose.prod.yml down
bash · start again
docker compose -f docker-compose.prod.yml up -d
bash · re-connect Ollama after every restart
docker network connect sim_default ollama
⚠️docker compose down destroys the sim_default network. You must run docker network connect sim_default ollama again after every restart — otherwise Ollama models won't be visible in Sim.ai.
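Because the reconnect step is so easy to forget, the three commands can live in one wrapper. A sketch (restart-sim.sh is a name we made up; the DOCKER variable is overridable only so the function can be dry-run):

```shell
#!/bin/bash
# Sketch: restart-sim.sh bundles down, up, and the Ollama network reconnect.
SIM_COMPOSE="${SIM_COMPOSE:-$HOME/ai-stack/sim/docker-compose.prod.yml}"

restart_sim() {
  local docker_cmd="${DOCKER:-docker}"
  "$docker_cmd" compose -f "$SIM_COMPOSE" down
  "$docker_cmd" compose -f "$SIM_COMPOSE" up -d
  # down destroyed sim_default, so the reconnect must follow every up
  "$docker_cmd" network connect sim_default ollama 2>/dev/null \
    || echo "Already connected, OK"
}

# Usage: restart_sim   (or run: bash restart-sim.sh --run)
if [ "${1:-}" = "--run" ]; then restart_sim; fi
```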
Launch & Verify All Services
// Start every container · Check each UI in browser · Confirm working before moving on
💡Stop here and complete this entire tab before moving on to the advanced tabs (Inter-Service, Shared Storage, Databases). This is your sanity check that every service installed in Tabs 3-8 is actually running and accessible.
1
Start All Background Services on Server

Run these in your server SSH session, in order. Each service must start successfully before moving to the next.

A. Start Ollama

bash · command 1 — launch Ollama container
bash ~/ai-stack/ollama/run-ollama.sh
bash · command 2 — verify Ollama running
docker ps | grep ollama

B. Start OpenClaw (Gateway Mode)

bash · command 3 — launch OpenClaw
bash ~/ai-stack/openclaw/run-openclaw.sh
bash · command 4 — verify OpenClaw running (must show Up + both ports)
docker ps | grep openclaw

C. Start n8n

bash · command 5 — launch n8n
bash ~/ai-stack/n8n/run-n8n.sh
bash · command 6 — verify n8n running
docker ps | grep n8n

D. Start Sim.ai (with Redis, PostgreSQL, Realtime)

bash · command 7 — start Sim.ai full stack
cd ~/ai-stack/sim && docker compose -f docker-compose.prod.yml up -d
bash · command 8 — connect Ollama to Sim.ai's network
docker network connect sim_default ollama 2>/dev/null || echo "Already connected — OK"
💡If you see endpoint with name ollama already exists — that's fine, Ollama is already connected. The || echo suppresses the error. Move on.
bash · command 9 — restart simstudio so it picks up Ollama
docker compose -f ~/ai-stack/sim/docker-compose.prod.yml restart simstudio

E. Final check — all containers running

bash · command 10 — list all containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

Expected output — all should be Up:

expected output
NAMES              STATUS         PORTS
ollama             Up X minutes   0.0.0.0:11434->11434/tcp
openclaw           Up X minutes   0.0.0.0:18789->18789/tcp
n8n                Up X minutes   0.0.0.0:5678->5678/tcp
sim-redis-1        Up X minutes   6379/tcp
sim-db-1           Up X minutes   0.0.0.0:5432->5432/tcp
sim-realtime-1     Up X minutes   0.0.0.0:3002->3002/tcp
sim-simstudio-1    Up X minutes   0.0.0.0:3000->3000/tcp
⚠️If any container is missing or Restarting, fix that one first using its dedicated tab (Tabs 3-8) before continuing to Step 2.
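The eyeball check above can be scripted. A sketch of a quick health check (check-stack.sh is a made-up name; DOCKER is overridable for dry runs) that compares docker ps against the seven expected container names:

```shell
#!/bin/bash
# Sketch: report any expected container that is not currently running.
EXPECTED="ollama openclaw n8n sim-redis-1 sim-db-1 sim-realtime-1 sim-simstudio-1"

check_stack() {
  local docker_cmd="${DOCKER:-docker}"
  local running missing=0
  running="$("$docker_cmd" ps --format '{{.Names}}')"
  for name in $EXPECTED; do
    if ! printf '%s\n' "$running" | grep -qx "$name"; then
      echo "MISSING: $name"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "All containers up"
  return "$missing"
}

# Usage: check_stack
```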
2
Verify Ollama — Test the API

From server SSH, test the Ollama API. It should respond with a generation:

bash · test Ollama API on server
curl http://localhost:11434/api/generate -d '{"model":"llama3.2","prompt":"Hello!","stream":false}'
If you get a JSON response with generated text — Ollama works.
⚠️If error: model not found — pull it first: docker exec -it ollama ollama pull llama3.2

Optionally, open this in your Mac browser; it should display "Ollama is running":

url — Mac browser
http://YOUR_SERVER_IP:11434
✏️Replace: YOUR_SERVER_IP → your Hostinger server IP
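The model-not-found case can be handled automatically. A sketch of an ensure-model helper, assuming ollama list prints one model per row with the name first (its current output format); DOCKER is overridable so the function can be tested without a container:

```shell
#!/bin/bash
# Sketch: pull a model only if Ollama does not already have it.
ensure_model() {
  local docker_cmd="${DOCKER:-docker}"
  local model="${1:-llama3.2}"
  if "$docker_cmd" exec ollama ollama list | grep -q "^${model}"; then
    echo "model ${model} already present"
  else
    "$docker_cmd" exec ollama ollama pull "${model}"
  fi
}

# Usage: ensure_model llama3.2
```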
3
Verify OpenClaw — Dashboard via SSH Tunnel

OpenClaw requires SSH tunnel + auth token (see Tab 6 Step 9 for the full explanation of the bug).

A. Get auth token (on server)

bash · server — read token from config file
docker exec openclaw cat /home/node/.openclaw/openclaw.json | grep -A3 '"auth"'

Look for the gateway.auth block (first match — the second match is for OpenRouter profiles, ignore that one). It will look like:

expected output — copy the hex string in the "token" line
    "auth": {
      "mode": "token",
      "token": "5a72ec1c666424130b638942c6fbb55c17132c686391d25e"
    }
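Instead of eyeballing the JSON, the token can be extracted with grep. A sketch; it assumes the gateway token is the first hex run of 16+ characters inside the first "auth" block, and DOCKER is overridable so the function can be tested without a container:

```shell
#!/bin/bash
# Sketch: pull the gateway auth token out of openclaw.json.
get_openclaw_token() {
  local docker_cmd="${DOCKER:-docker}"
  "$docker_cmd" exec openclaw cat /home/node/.openclaw/openclaw.json \
    | grep -A3 '"auth"' \
    | grep -o '[0-9a-f]\{16,\}' \
    | head -1
}

# Usage: get_openclaw_token   # then paste into http://127.0.0.1:18789/#token=...
```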

B. Open SSH tunnel from Mac (new terminal)

bash · run on your MAC
ssh -N -L 18789:127.0.0.1:18789 root@YOUR_SERVER_IP
✏️Replace: YOUR_SERVER_IP → your Hostinger server IP. Enter your password, then leave the terminal open; silence means the tunnel is active.

C. Open Dashboard in Mac browser

url — paste in Mac browser
http://127.0.0.1:18789/#token=YOUR_TOKEN_HERE
✏️Replace: YOUR_TOKEN_HERE → the hex string from Step A
The OpenClaw Dashboard loads and lets you chat with the agent and manage skills and channels. Save this URL as a browser bookmark for daily access.
4
Verify n8n — SSH Tunnel + 127.0.0.1

n8n enforces secure cookies — accessing via http://YOUR_SERVER_IP:5678 blocks login with: "Your n8n server is configured to use a secure cookie...". Use an SSH tunnel — browsers treat 127.0.0.1 as a secure origin.

On your Mac terminal (separate from server SSH):

bash · run on Mac terminal — keep this running
ssh -N -L 5678:127.0.0.1:5678 root@YOUR_SERVER_IP
✏️Replace: YOUR_SERVER_IP → your Hostinger server IP. No prompt = tunnel is active.

Then open in your Mac browser:

url — open in Mac browser
http://127.0.0.1:5678
The first time, you'll see a setup wizard asking you to create an owner account. After signup, you land on the n8n canvas. The Community edition is free with unlimited workflows.
5
Verify Sim.ai — Direct Browser Access

Sim.ai binds to 0.0.0.0:3000 — no SSH tunnel needed, direct browser access works.

url — open in Mac browser
http://YOUR_SERVER_IP:3000
✏️Replace: YOUR_SERVER_IP → your Hostinger server IP
⚠️Click Sign Up (not Sign In) for the first time — no account exists yet. Signing in shows "User not found" error.
After signup, you land on the Sim.ai workflow canvas. Try creating a new workflow and dragging an AI block in to confirm models work.
6
Verify Claude Code — Test in Project

Claude Code is a CLI tool — verify by running it on a test project:

bash · command 1 — create test folder on server
mkdir -p ~/test-project && cd ~/test-project && echo "console.log('hello')" > app.js
bash · command 2 — run Claude Code
bash ~/ai-stack/claude-code/run-claude.sh
Claude Code launches interactively in your terminal. Type a question like "What does app.js do?" — if it responds correctly, Claude Code is working.
7
Verify OpenRouter — Test API Key

OpenRouter is a cloud API — no container to check. Verify your key works:

bash · test OpenRouter API on server
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer sk-or-v1-xxxx" \
  -H "Content-Type: application/json" \
  -d '{"model":"openrouter/auto","messages":[{"role":"user","content":"Hello!"}]}'
✏️Replace: sk-or-v1-xxxx → your real OpenRouter API key
If you get a JSON response with an assistant message — your OpenRouter key is valid and being used by OpenClaw + Sim.ai correctly.
8
Final Checklist Before Moving On
ServiceHow to verify✓ Working?
Ollamacurl http://YOUR_IP:11434 → "Ollama is running"
OpenClaw DashboardSSH tunnel + browser http://127.0.0.1:18789/#token=...
n8nSSH tunnel + browser http://127.0.0.1:5678
Sim.aiBrowser http://YOUR_IP:3000 → signup works
Claude CodeTerminal launches, responds to prompt
OpenRoutercurl returns JSON with assistant message
All six checked? Continue to Tab 10 — Inter-Service Comms to wire services together properly.
⚠️Any failing? Go back to the specific service's tab (3-8) and fix it. Do not skip ahead — the advanced tabs assume everything here is working.
9
Daily Startup — Quick Reference

After server reboots or maintenance, restart everything in this order:

bash · command 1 — check what came back automatically (containers started with --restart return on their own)
docker ps -a
bash · command 2 — if Sim.ai stack is down, restart it
cd ~/ai-stack/sim && docker compose -f docker-compose.prod.yml up -d
bash · command 3 — reconnect Ollama to Sim.ai network (safe if already connected)
docker network connect sim_default ollama 2>/dev/null || echo "Already connected — OK"
bash · command 4 — verify everything is up
docker ps --format "table {{.Names}}\t{{.Status}}"
🔗
Inter-Service Communication
// How containers talk to each other · Docker network · Environment variables
💡All containers are on the ai-stack Docker network. They reach each other using the container name as hostname — no IP addresses needed. Docker handles DNS resolution automatically.
1
Container Name → URL Reference Map

Use these URLs inside any container to reach another service on the ai-stack network:

internal urls · use inside containers (container-to-container)
Ollama           → http://ollama:11434
n8n              → http://n8n:5678
PostgreSQL       → postgresql://postgres:postgres@sim-db-1:5432/DB_NAME
OpenRouter proxy → http://openrouter-proxy:4000
Redis            → redis://redis:6379

# Sim.ai's own containers use db:5432 (sim_default network only)
# Other ai-stack containers use sim-db-1:5432
# Replace DB_NAME with: ailab | simstudio | openclaw | ollama_results | n8n

To reach each service from your Mac browser, use the access method listed per service:

external urls · open in Mac browser
Sim.ai           → http://YOUR_SERVER_IP:3000           (direct - binds to 0.0.0.0)
n8n              → http://127.0.0.1:5678                 (SSH tunnel required ⚠️ - secure cookie)
pgAdmin          → http://YOUR_SERVER_IP:5050           (direct - binds to 0.0.0.0)
Ollama API       → http://YOUR_SERVER_IP:11434          (direct - binds to 0.0.0.0)
OpenClaw UI      → http://127.0.0.1:18789/#token=TOKEN  (SSH tunnel + token required ⚠️)
⚠️OpenClaw Dashboard binds to 127.0.0.1 inside the container on port 18789 (known bug). It cannot be reached directly via http://YOUR_SERVER_IP:18789. You must use an SSH tunnel + auth token — covered in Step 3 and Step 7.
2
Verify Two Containers Can Talk

Test that containers on the ai-stack network can reach each other:

bash · command 1 — ping Ollama from a throwaway test container
docker run -it --rm --network ai-stack alpine ping -c 3 ollama
If you see ping replies, containers can communicate. If it fails, check that the ollama container is running with docker ps.
3
Connect OpenClaw → Ollama

OpenClaw connects to Ollama via the OLLAMA_BASE_URL environment variable in its run script. If the script doesn't exist yet, create it. If it does exist, just open it to verify.

bash · command 1 — create file if it doesn't exist
touch ~/ai-stack/openclaw/run-openclaw.sh
bash · command 2 — open in nano
nano ~/ai-stack/openclaw/run-openclaw.sh

Paste this entire script — confirm OLLAMA_BASE_URL points to http://ollama:11434:

bash · command 3 — paste this complete script, Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -d \
  --name openclaw \
  --network ai-stack \
  --restart unless-stopped \
  -v ~/.openclaw:/home/node/.openclaw \
  -v ~/.openclaw/workspace:/home/node/.openclaw/workspace \
  -p 18789:18789 \
  -e ANTHROPIC_API_KEY="sk-ant-xxxx" \
  -e OPENROUTER_API_KEY="sk-or-v1-xxxx" \
  -e OLLAMA_BASE_URL="http://ollama:11434" \
  -e DATABASE_URL="postgresql://postgres:postgres@sim-db-1:5432/openclaw" \
  -e OPENCLAW_GATEWAY_BIND=lan \
  ghcr.io/phioranex/openclaw-docker:latest gateway run
✏️Replace before saving:
PlaceholderWhat to putWhere to get it
sk-ant-xxxxYour Anthropic API keyconsole.anthropic.com → API Keys
sk-or-v1-xxxxYour OpenRouter API keyopenrouter.ai/keys
Leave OLLAMA_BASE_URL, DATABASE_URL exactly as shown.
bash · command 4 — make executable
chmod +x ~/ai-stack/openclaw/run-openclaw.sh

If OpenClaw is already running, restart it to apply changes; otherwise start it fresh:

bash · command 5 — if already running, restart it
docker stop openclaw && docker rm openclaw && bash ~/ai-stack/openclaw/run-openclaw.sh
bash · command 6 — verify it is running (not Restarting)
docker ps | grep openclaw
💡Both containers must be on the ai-stack network for this to work. Verify with: docker network inspect ai-stack
⚠️The gateway run arguments after the image name are required. Without them, the container prints the help menu and crashes in a restart loop.

Access OpenClaw Dashboard from your Mac browser

OpenClaw's Control UI binds to 127.0.0.1 inside the container (known bug — issue #30990) so direct browser access won't work. Use SSH tunnel + auth token.

bash · command 7 — set bind to lan (overrides config default)
docker exec openclaw node /app/dist/index.js config set gateway.bind lan && docker restart openclaw
bash · command 8 — read config to find your auth token
docker exec openclaw cat /home/node/.openclaw/openclaw.json

Find "token": "..." inside the gateway.auth section and copy that value.

⚠️Do not use config get gateway.auth.token — it returns __OPENCLAW_REDACTED__ for security. Always read the JSON file directly.
bash · command 9 — open a NEW terminal on your Mac (not the server)
ssh -N -L 18789:127.0.0.1:18789 root@YOUR_SERVER_IP
✏️Replace: YOUR_SERVER_IP → your Hostinger server IP. Enter password when prompted, then leave the terminal silently open.

Open this URL in your Mac browser (replace token):

url — paste into Mac browser address bar
http://127.0.0.1:18789/#token=YOUR_TOKEN_HERE
✏️Replace: YOUR_TOKEN_HERE → the token value from command 8 output (just the hex string, no quotes)
💡Ctrl+C in the SSH tunnel terminal closes it. The tunnel is encrypted — no firewall changes needed, no public exposure of OpenClaw. Save the full URL as a browser bookmark for one-click access.
4
Connect Sim.ai → Ollama + PostgreSQL

Sim.ai connects to Ollama and PostgreSQL via its .env file. If the file doesn't exist yet, create it; if it does, open it to verify.

bash · command 1 — create file if it doesn't exist
touch ~/ai-stack/sim/.env
bash · command 2 — open in nano
nano ~/ai-stack/sim/.env

Paste this complete .env file — the inter-service URLs are highlighted:

config · command 3 — paste this complete .env, Ctrl+O save, Ctrl+X exit
DATABASE_URL=postgresql://postgres:postgres@db:5432/simstudio
BETTER_AUTH_SECRET=PASTE_KEY_1_HERE
ENCRYPTION_KEY=PASTE_KEY_2_HERE
INTERNAL_API_SECRET=PASTE_KEY_3_HERE
NEXT_PUBLIC_APP_URL=http://YOUR_SERVER_IP:3000
BETTER_AUTH_URL=http://YOUR_SERVER_IP:3000
ANTHROPIC_API_KEY=sk-ant-xxxx
OPENROUTER_API_KEY=sk-or-v1-xxxx
OLLAMA_URL=http://ollama:11434
REDIS_URL=redis://redis:6379
✏️Replace every placeholder before saving:
PlaceholderWhat to putWhere to get it
PASTE_KEY_1_HEREGenerated keyRun openssl rand -hex 32 on server
PASTE_KEY_2_HEREGenerated keyRun openssl rand -hex 32 on server
PASTE_KEY_3_HEREGenerated keyRun openssl rand -hex 32 on server
YOUR_SERVER_IPYour Hostinger server IPhPanel → VPS dashboard
sk-ant-xxxxYour Anthropic API keyconsole.anthropic.com
sk-or-v1-xxxxYour OpenRouter API keyopenrouter.ai/keys
Leave DATABASE_URL, OLLAMA_URL, REDIS_URL exactly as shown.
⚠️Use OLLAMA_URL not OLLAMA_BASE_URL — the Sim.ai compose file reads OLLAMA_URL. The wrong name causes it to fall back to localhost:11434 which doesn't work inside Docker.

After saving, connect Ollama to Sim.ai's network and restart simstudio:

bash · command 4 — connect Ollama to sim_default network
docker network connect sim_default ollama
bash · command 5 — restart simstudio to apply env changes
docker compose -f ~/ai-stack/sim/docker-compose.prod.yml restart simstudio
bash · command 6 — verify simstudio is running
docker ps | grep simstudio
💡Sim.ai runs on sim_default network, Ollama runs on ai-stack. Connecting Ollama to sim_default puts it on both networks so Sim.ai can reach it. This must be re-run after every docker compose down.
5
Add a New Container to ai-stack Network

When you want to add a new tool to your AI stack (Qdrant, Grafana, and so on), use this pattern. Every new container must include --network ai-stack to join the shared network.

📍Where do YOUR_CONTAINER_NAME and YOUR_IMAGE come from? They come from the documentation of the tool you're adding — typically the tool's GitHub README or Docker Hub page tells you the recommended container name and the image to use.
PlaceholderWhat to putWhere to find it
YOUR_CONTAINER_NAMEFriendly name you pick (no spaces, lowercase)You choose — e.g. qdrant, grafana
YOUR_IMAGEDocker image identifierTool's Docker Hub page or GitHub README — e.g. qdrant/qdrant, grafana/grafana
YOURCONTAINER (in script name)Same as your container nameJust makes the script file easy to identify later

Concrete example — adding Qdrant (vector database)

Suppose you want to add Qdrant. From its Docker Hub page: the image is qdrant/qdrant and the default port is 6333. We'll name the container qdrant. Here's the full flow:

bash · step 1 — create script
touch ~/ai-stack/run-qdrant.sh
bash · step 2 — open in nano
nano ~/ai-stack/run-qdrant.sh
bash · step 3 — paste this, Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -d \
  --name qdrant \
  --network ai-stack \
  --restart unless-stopped \
  -p 6333:6333 \
  -v qdrant-data:/qdrant/storage \
  qdrant/qdrant
bash · step 4 — make executable and run
chmod +x ~/ai-stack/run-qdrant.sh && bash ~/ai-stack/run-qdrant.sh

Other containers can now reach Qdrant at http://qdrant:6333 on the ai-stack network.

Generic template (for any new tool)

Replace the placeholders with values from your tool's documentation:

bash · generic template
#!/bin/bash
docker run -d \
  --name YOUR_CONTAINER_NAME \
  --network ai-stack \
  --restart unless-stopped \
  -p HOST_PORT:CONTAINER_PORT \
  -v YOUR_VOLUME:/path/inside/container \
  YOUR_IMAGE
⚠️If you forget --network ai-stack, the container lands on Docker's default bridge network and cannot reach any other service by name.

For an already-running container

If a container is already running and you forgot to add it to the network, connect it without restarting:

bash · connect existing container to ai-stack
docker network connect ai-stack YOUR_CONTAINER_NAME
6
Inspect the Network & Connected Containers
bash · command 1 — list all containers on ai-stack
docker network inspect ai-stack
bash · command 2 — see all running containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
7
External Browser Access — SSH Tunnel from Mac
⚠️Two reasons a service needs an SSH tunnel:
  1. Binds to 127.0.0.1 inside container — Docker port mapping cannot forward to localhost-only services. Affects OpenClaw's Control UI (issue #30990).
  2. Enforces secure cookies — browsers refuse to set auth cookies over plain HTTP unless the host is localhost/127.0.0.1. Affects n8n (default behavior; can be disabled but tunnel is more secure).
The reliable solution for both: an SSH tunnel from your Mac to the server.

For OpenClaw Dashboard (port 18789):

bash · run on your MAC terminal
ssh -N -L 18789:127.0.0.1:18789 root@YOUR_SERVER_IP

Then open in your Mac browser (replace token from ~/.openclaw/openclaw.json on server):

url — Mac browser
http://127.0.0.1:18789/#token=YOUR_TOKEN_HERE

Universal pattern for any container service:

bash · template — run on your Mac
ssh -N -L LOCAL_PORT:127.0.0.1:CONTAINER_PORT root@YOUR_SERVER_IP
ServiceSSH tunnel command (run on Mac)Then open in browser
OpenClaw Dashboardssh -N -L 18789:127.0.0.1:18789 root@IPhttp://127.0.0.1:18789/#token=TOKEN
n8n (secure cookie)ssh -N -L 5678:127.0.0.1:5678 root@IPhttp://127.0.0.1:5678
Ollama API (private)ssh -N -L 11434:127.0.0.1:11434 root@IPhttp://127.0.0.1:11434
💡Why this is safer: No ports exposed to the public internet, no firewall changes needed, and traffic is encrypted via SSH. The -N flag means "no command, just forward" so it doesn't open a shell — leave the terminal open while you use the service. Ctrl+C closes the tunnel.
💡For services that bind to 0.0.0.0 (Sim.ai on 3000, pgAdmin on 5050, OpenClaw gateway WebSocket on 8080) you don't need a tunnel — direct browser access works: http://YOUR_SERVER_IP:PORT
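Instead of retyping the tunnel flags, the forwards can live in ~/.ssh/config on your Mac so one short command opens both tunnels at once. A sketch; the alias ai-lab is a name we made up, and SessionType none (the config equivalent of -N) needs OpenSSH 8.7 or newer:

```text
# Append to ~/.ssh/config on your Mac, replacing YOUR_SERVER_IP
Host ai-lab
    HostName YOUR_SERVER_IP
    User root
    LocalForward 18789 127.0.0.1:18789
    LocalForward 5678 127.0.0.1:5678
    SessionType none
```

Then ssh ai-lab opens both tunnels; Ctrl+C closes them. On older OpenSSH, drop the SessionType line and run ssh -N ai-lab instead.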
📁
Shared File Storage
// Docker volumes · Shared bind mounts · File access across containers
💡There are two ways to share data between containers: Named volumes (Docker manages the path) and Bind mounts (you choose the path on the server). Both are mounted at container start with -v.
1
Create a Shared Volume for All Services

Create one named volume that any container can mount to read and write shared files:

bash · command 1 — create shared volume
docker volume create ai-shared-data
bash · command 2 — verify it exists
docker volume ls
2
Mount Shared Volume Into Each Running Container

Now you'll add the shared volume to your existing containers so they can all read/write the same files. Below are explicit edit instructions for each service in your stack.

📌The flag you're adding everywhere is: -v ai-shared-data:/shared \ — place it before the image name line in each run script. The trailing \ is required for bash line continuation.

2A. Add to Ollama

bash · command 1 — open run script
nano ~/ai-stack/ollama/run-ollama.sh

Find this block and add the highlighted line right before ollama/ollama:

bash · updated run-ollama.sh (line to add highlighted)
#!/bin/bash
docker run -d \
  --name ollama \
  --network ai-stack \
  --restart unless-stopped \
  -v ollama-data:/root/.ollama \
  -p 11434:11434 \
  -v ai-shared-data:/shared \
  ollama/ollama

Save with Ctrl+O then Ctrl+X. Then redeploy:

bash · command 2 — restart container with new mount
docker stop ollama && docker rm ollama && bash ~/ai-stack/ollama/run-ollama.sh

2B. Add to OpenClaw

bash · command 1 — open run script
nano ~/ai-stack/openclaw/run-openclaw.sh

Add -v ai-shared-data:/shared \ after the other -v lines, before the image name:

bash · updated run-openclaw.sh (excerpt — line to add highlighted)
  -v ~/.openclaw:/home/node/.openclaw \
  -v ~/.openclaw/workspace:/home/node/.openclaw/workspace \
  -v ai-shared-data:/shared \
  -p 18789:18789 \
  ...
  ghcr.io/phioranex/openclaw-docker:latest gateway run

Save and redeploy:

bash · command 2 — restart container
docker stop openclaw && docker rm openclaw && bash ~/ai-stack/openclaw/run-openclaw.sh

2C. Add to OpenRouter Proxy

bash · command 1 — open run script
nano ~/ai-stack/run-openrouter-proxy.sh
bash · updated run-openrouter-proxy.sh (line to add highlighted)
#!/bin/bash
docker run -d \
  --name openrouter-proxy \
  --network ai-stack \
  --restart unless-stopped \
  -e OPENROUTER_API_KEY="sk-or-v1-xxxx" \
  -p 4000:4000 \
  -v ai-shared-data:/shared \
  ghcr.io/berriai/litellm:main-latest \
  --model openrouter/anthropic/claude-3.5-sonnet \
  --port 4000
bash · command 2 — restart container
docker stop openrouter-proxy && docker rm openrouter-proxy && bash ~/ai-stack/run-openrouter-proxy.sh

2D. Add to Qdrant

bash · command 1 — open run script
nano ~/ai-stack/run-qdrant.sh
bash · updated run-qdrant.sh (line to add highlighted)
#!/bin/bash
docker run -d \
  --name qdrant \
  --network ai-stack \
  --restart unless-stopped \
  -p 6333:6333 \
  -v qdrant-data:/qdrant/storage \
  -v ai-shared-data:/shared \
  qdrant/qdrant
bash · command 2 — restart container
docker stop qdrant && docker rm qdrant && bash ~/ai-stack/run-qdrant.sh

2E. n8n (already included in run script)

n8n's run script in Tab 7 Step 2 already includes -v ai-shared-data:/shared and -v /root/ai-stack/uploads:/uploads — no edit needed. To verify:

bash · verify n8n mounts
docker inspect n8n --format '{{range .Mounts}}{{.Source}} → {{.Destination}}{{"\n"}}{{end}}'

Expected output should include both /shared and /uploads:

expected output
/var/lib/docker/volumes/n8n-data/_data → /home/node/.n8n
/var/lib/docker/volumes/ai-shared-data/_data → /shared
/root/ai-stack/uploads → /uploads
💡If you set up n8n before creating the ai-shared-data volume or the /root/ai-stack/uploads folder, restart n8n now: docker stop n8n && docker rm n8n && bash ~/ai-stack/n8n/run-n8n.sh

2F. Add to Sim.ai stack (docker-compose)

Sim.ai runs via docker-compose, so the edit goes in the YAML file (not a run script). You'll add a volumes: mapping under each Sim.ai service.

bash · command 1 — open compose file
nano ~/ai-stack/sim/docker-compose.prod.yml

Inside the simstudio: service block, add this anywhere (a good place is right after ports:):

yaml · add inside simstudio service
    volumes:
      - ai-shared-data:/shared

Repeat the same edit inside the realtime: service block.

Then at the very bottom of the file, declare the external volume so docker-compose recognizes the name:

yaml · find this at bottom of file
volumes:
  postgres_data:
yaml · change it to
volumes:
  postgres_data:
  ai-shared-data:
    external: true
⚠️external: true tells compose to use the volume you already created with docker volume create ai-shared-data in Step 1 — not to create a new one with a prefixed name.

Save with Ctrl+O then Ctrl+X. Then restart the Sim.ai stack:

bash · command 2 — restart Sim.ai with new mounts
cd ~/ai-stack/sim && docker compose -f docker-compose.prod.yml down && docker compose -f docker-compose.prod.yml up -d
bash · command 3 — re-connect Ollama to sim_default network
docker network connect sim_default ollama 2>/dev/null || echo "Already connected — OK"

2G. Verify shared volume works across containers

Write a file from Ollama's container, then read it from OpenClaw to confirm they share the same volume:

bash · command 1 — write from Ollama container
docker exec ollama sh -c "echo 'Hello from Ollama' > /shared/test.txt"
bash · command 2 — read from OpenClaw container
docker exec openclaw cat /shared/test.txt
If you see Hello from Ollama printed — the shared volume is working across all containers that mount it.

Skipped services (no shared volume needed)

ServiceWhy skipped
sim-redis-1Infrastructure cache — doesn't process user files
sim-db-1Database — uses its own postgres_data volume
3
Add a Bind Mount Folder (Alternative — for SCP'd Files)

Step 2 used a named Docker volume (managed by Docker, hidden under /var/lib/docker). This step adds a bind mount at a real path on the server — useful when you want to SCP files directly from your Mac into the shared folder.

📌This is mounted at a different path (/uploads) so it lives alongside the Step 2 named volume (/shared), not replacing it. Each container can have both — use /shared for container-to-container data and /uploads for files you put there via SCP.

3A. Create the server folder

bash · command 1 — create folder
mkdir -p /root/ai-stack/uploads
bash · command 2 — open permissions for containers
chmod 777 /root/ai-stack/uploads

3B. Add the bind mount to Ollama

bash · open run-ollama.sh
nano ~/ai-stack/ollama/run-ollama.sh

Add the highlighted bind mount line before the image name:

bash · updated run-ollama.sh
#!/bin/bash
docker run -d \
  --name ollama \
  --network ai-stack \
  --restart unless-stopped \
  -v ollama-data:/root/.ollama \
  -p 11434:11434 \
  -v ai-shared-data:/shared \
  -v /root/ai-stack/uploads:/uploads \
  ollama/ollama
bash · restart container
docker stop ollama && docker rm ollama && bash ~/ai-stack/ollama/run-ollama.sh

3C. Add the bind mount to OpenClaw

bash · open run-openclaw.sh
nano ~/ai-stack/openclaw/run-openclaw.sh

Add -v /root/ai-stack/uploads:/uploads \ after the other -v lines:

bash · excerpt — line to add highlighted
  -v ~/.openclaw:/home/node/.openclaw \
  -v ~/.openclaw/workspace:/home/node/.openclaw/workspace \
  -v ai-shared-data:/shared \
  -v /root/ai-stack/uploads:/uploads \
  -p 18789:18789 \
  ...
bash · restart container
docker stop openclaw && docker rm openclaw && bash ~/ai-stack/openclaw/run-openclaw.sh

3D. Add the bind mount to Sim.ai stack (docker-compose)

bash · open compose file
nano ~/ai-stack/sim/docker-compose.prod.yml

Under simstudio: and again under realtime:, find the existing volumes: block (added in Step 2E) and add the new bind mount:

yaml · updated volumes inside simstudio and realtime services
    volumes:
      - ai-shared-data:/shared
      - /root/ai-stack/uploads:/uploads
bash · restart Sim.ai stack
cd ~/ai-stack/sim && docker compose -f docker-compose.prod.yml down && docker compose -f docker-compose.prod.yml up -d
bash · reconnect Ollama network
docker network connect sim_default ollama 2>/dev/null || echo "Already connected — OK"

3E. Test from your Mac — SCP a file

From your Mac terminal, copy any file into the uploads folder:

bash · run on your MAC (replace path to your file)
scp ~/Desktop/test.pdf root@YOUR_SERVER_IP:/root/ai-stack/uploads/

On the server, confirm it's visible in a container:

bash · server — verify file visible inside Ollama container
docker exec ollama ls /uploads
You should see test.pdf listed. Now any container with the bind mount can read files you SCP into /root/ai-stack/uploads/ from your Mac.
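The bind mount works in both directions: anything a container writes to /uploads lands at /root/ai-stack/uploads on the server, so you can also pull results back down. A sketch (result.txt is just an example name):

```shell
# On the server: have a container write a file into the bind mount
docker exec ollama sh -c "echo 'result data' > /uploads/result.txt"

# On your Mac: copy it back down
scp root@YOUR_SERVER_IP:/root/ai-stack/uploads/result.txt ~/Desktop/
```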
4
Give Other Containers Read-Only Access to Ollama Models

Ollama stores downloaded models in the ollama-data volume. Other containers can mount it read-only to inspect models without using the Ollama API (useful for debugging, custom inference, or backup tools).

📌This is only useful for tools that need to directly read model files. None of your existing services (Sim.ai, OpenClaw, OpenRouter, Qdrant) need this — they call Ollama via API instead. Use this when adding custom tools later.

4A. Quick inspection (no script needed)

Spin up a temporary Alpine container to list all downloaded models:

bash · one-liner — list Ollama's models from another container
docker run -it --rm -v ollama-data:/root/.ollama:ro alpine ls /root/.ollama/models

Or check total disk size used by models:

bash · check disk usage of Ollama models
docker run -it --rm -v ollama-data:/root/.ollama:ro alpine du -sh /root/.ollama/models

4B. Add to a future custom inference tool (example)

If you build a custom tool that reads Ollama's model files directly, here's the run script pattern:

bash · create your custom tool's run script
touch ~/ai-stack/run-mytool.sh && nano ~/ai-stack/run-mytool.sh
bash · paste this template, Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -d \
  --name mytool \
  --network ai-stack \
  --restart unless-stopped \
  -v ollama-data:/root/.ollama:ro \
  -v ai-shared-data:/shared \
  -p 8090:8090 \
  YOUR_IMAGE
✏️Replace:
Placeholder    What to put
─────────────────────────────────────────────────────────────────
mytool         Your tool's container name
YOUR_IMAGE     Your tool's Docker image (e.g. ghcr.io/yourorg/yourtool:latest)
8090:8090      Your tool's port mapping (or remove if no UI)
⚠️The :ro suffix is critical — it means read-only. Without it, your custom tool could accidentally corrupt or delete Ollama's downloaded models.
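You can verify the read-only protection with a throwaway container. The write attempt should be rejected:

```shell
# A write through a :ro mount fails with "Read-only file system"
docker run --rm -v ollama-data:/root/.ollama:ro alpine \
  sh -c "touch /root/.ollama/should-fail 2>&1 || echo 'Write blocked - read-only confirmed'"
```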

Skipped services

Service             Why not added
──────────────────────────────────────────────────────────────────────────────
OpenClaw, Sim.ai    Already use Ollama via API at http://ollama:11434 — no direct file access needed
Qdrant              Stores vectors only, doesn't read embedding models
OpenRouter proxy    Forwards to cloud APIs, doesn't use local Ollama models
5
List & Inspect All Volumes
bash · command 1 — list all volumes
docker volume ls
bash · command 2 — inspect a volume (see actual path)
docker volume inspect ai-shared-data
bash · command 3 — browse bind-mount uploads folder (from Step 3)
ls -la /root/ai-stack/uploads/
🗄️
Databases — SQL · NoSQL · Vector
// Use Sim.ai's existing sim-db-1 as the shared PostgreSQL · pgvector built-in · JSONB · pgAdmin
💡Architecture decision: Sim.ai's compose file already runs pgvector/pgvector:pg17 as the sim-db-1 container — which is the exact same PostgreSQL + pgvector image we need. Running a second postgres container conflicts on port 5432 and wastes RAM. We'll use sim-db-1 as the shared database for everything.
⚠️Network note: sim-db-1 lives on the sim_default network. Containers on ai-stack (OpenClaw, Qdrant, etc.) cannot reach it by default — we'll connect sim-db-1 to ai-stack too so it's accessible from both networks.
1
Connect sim-db-1 to ai-stack Network

Add sim-db-1 to the ai-stack network so OpenClaw, Qdrant, and any future containers can reach it by name:

bash · command 1 — connect sim-db-1 to ai-stack
docker network connect ai-stack sim-db-1 2>/dev/null || echo "Already connected — OK"
bash · command 2 — verify it is on both networks
docker inspect sim-db-1 --format '{{range $k,$v := .NetworkSettings.Networks}}{{$k}} {{end}}'

Expected output:

expected output
ai-stack sim_default
💡Now sim-db-1 is reachable from both networks:
  • From ai-stack → hostname is sim-db-1
  • From sim_default → hostname is db (compose alias)
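A quick way to confirm the bridge works, sketched with a throwaway container on the ai-stack network (pg_isready ships in the standard postgres image):

```shell
# From the ai-stack side, sim-db-1 should now answer on port 5432
docker run --rm --network ai-stack postgres:17 pg_isready -h sim-db-1 -p 5432
# A healthy reply looks like: sim-db-1:5432 - accepting connections
```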
2
Create Additional Databases for Each Service

sim-db-1 already has the simstudio database (used by Sim.ai). Create the additional databases for OpenClaw, Ollama output storage, and general use:

bash · command 1 — connect to postgres as superuser
docker exec -it sim-db-1 psql -U postgres
sql · command 2 — create 4 new databases (run inside psql)
CREATE DATABASE ailab;
CREATE DATABASE openclaw;
CREATE DATABASE ollama_results;
CREATE DATABASE n8n;
\l

You should see all 5 databases listed: ailab, n8n, openclaw, ollama_results, simstudio (plus the built-in postgres and template databases).

sql · command 3 — enable pgvector extension in each database
\c ailab
CREATE EXTENSION IF NOT EXISTS vector;
\c simstudio
CREATE EXTENSION IF NOT EXISTS vector;
\c openclaw
CREATE EXTENSION IF NOT EXISTS vector;
\c ollama_results
CREATE EXTENSION IF NOT EXISTS vector;
\c n8n
CREATE EXTENSION IF NOT EXISTS vector;
\q
💡\c dbname switches to that database · \l lists all databases · \q exits psql. The vector extension is part of the pgvector/pgvector:pg17 image — no separate install needed.
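To sanity-check that pgvector is live, evaluate a tiny vector expression from the server shell (the <=> operator is cosine distance; the values here are arbitrary):

```shell
# Any numeric result (instead of an error) means the vector extension is working
docker exec -it sim-db-1 psql -U postgres -d ailab \
  -c "SELECT '[1,2,3]'::vector <=> '[1,2,4]'::vector AS cosine_distance;"
```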
3
Connection Strings for Each Service

Hostname depends on which Docker network the calling container is on:

config · connection strings per service
# Sim.ai (.env file — runs on sim_default network)
DATABASE_URL=postgresql://postgres:postgres@db:5432/simstudio

# OpenClaw (-e flag in run-openclaw.sh — runs on ai-stack network)
DATABASE_URL=postgresql://postgres:postgres@sim-db-1:5432/openclaw

# n8n (env vars in run-n8n.sh — uses split format, NOT a single URL)
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=sim-db-1
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=postgres
DB_POSTGRESDB_PASSWORD=postgres

# n8n Postgres credential (inside n8n UI — fill these fields)
Host: sim-db-1     Database: ailab (or any)     User: postgres
Pass: postgres     Port: 5432                   SSL: Disable

# Qdrant or any ai-stack container needing the ailab db
DATABASE_URL=postgresql://postgres:postgres@sim-db-1:5432/ailab

# For Ollama output storage / your own apps (on ai-stack)
DATABASE_URL=postgresql://postgres:postgres@sim-db-1:5432/ollama_results

# Custom Python/Node apps (running anywhere — pick host by network)
# Python:  psycopg2.connect("postgresql://postgres:postgres@sim-db-1:5432/ailab")
# Node:    new Pool({ host: 'sim-db-1', port: 5432, user: 'postgres', ... })

# Redis (Sim.ai realtime queue — sim_default network only)
REDIS_URL=redis://redis:6379

# From your Mac (external — for pgAdmin, DBeaver, TablePlus, etc.)
# Works for ANY database — swap the name at the end:
DATABASE_URL=postgresql://postgres:postgres@YOUR_SERVER_IP:5432/ailab
DATABASE_URL=postgresql://postgres:postgres@YOUR_SERVER_IP:5432/simstudio
DATABASE_URL=postgresql://postgres:postgres@YOUR_SERVER_IP:5432/openclaw
DATABASE_URL=postgresql://postgres:postgres@YOUR_SERVER_IP:5432/n8n
DATABASE_URL=postgresql://postgres:postgres@YOUR_SERVER_IP:5432/ollama_results
✏️Replace in the external (Mac) connection strings only:
Placeholder       What to put                          Where to find it
──────────────────────────────────────────────────────────────────────────────
YOUR_SERVER_IP    Your Hostinger server's public IP    hPanel → VPS dashboard → IP shown at top
⚠️Hostname rules:
  • db — works only inside sim_default network (Sim.ai's own containers)
  • sim-db-1 — works inside ai-stack network (OpenClaw, Qdrant, custom tools)
  • Default username + password are both postgres (from Sim.ai compose config)
4
SQL — Standard Relational Tables

Works exactly like standard PostgreSQL. Example using the ailab database:

bash · connect to ailab db
docker exec -it sim-db-1 psql -U postgres -d ailab
sql · create and query a table
CREATE TABLE curriculum_modules (
  id         SERIAL PRIMARY KEY,
  title      TEXT NOT NULL,
  category   TEXT,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

INSERT INTO curriculum_modules (title, category)
VALUES ('Docker Basics', 'Cloud Computing');

SELECT * FROM curriculum_modules;
5
NoSQL — Document Storage with JSONB

JSONB stores flexible JSON documents — query them with SQL operators. No schema required per document:

sql · create and query a JSONB table
CREATE TABLE ai_outputs (
  id         SERIAL PRIMARY KEY,
  source     TEXT,
  data       JSONB,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX ON ai_outputs USING GIN (data);

INSERT INTO ai_outputs (source, data) VALUES (
  'ollama',
  '{"model":"llama3.2","prompt":"Hello","response":"Hi there!","tokens":42}'
);

SELECT data->>'model'    AS model,
       data->>'response' AS response
FROM   ai_outputs
WHERE  source = 'ollama';
💡-> returns JSON · ->> returns text · @> checks if JSON contains a value · GIN index makes JSONB queries fast.
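As a quick illustration of the @> containment operator against the ai_outputs table above (runnable from the server shell once the sample row is inserted):

```shell
# Find rows whose JSONB document contains {"model": "llama3.2"}
docker exec -it sim-db-1 psql -U postgres -d ailab \
  -c "SELECT data->>'response' AS response FROM ai_outputs WHERE data @> '{\"model\": \"llama3.2\"}';"
```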
6
Vector DB — Embeddings & Semantic Search

pgvector adds a vector column type for storing AI embeddings. Used for semantic search, RAG pipelines, and similarity matching:

sql · create embeddings table
CREATE TABLE embeddings (
  id        SERIAL PRIMARY KEY,
  content   TEXT,
  source    TEXT,
  embedding vector(768)
);

CREATE INDEX ON embeddings
  USING ivfflat (embedding vector_cosine_ops)
  WITH (lists = 100);
sql · semantic similarity search
SELECT content,
       1 - (embedding <=> '[0.1,0.2,0.3,...]') AS similarity
FROM   embeddings
ORDER  BY embedding <=> '[0.1,0.2,0.3,...]'
LIMIT  5;
💡Embedding dimensions: Ollama nomic-embed-text → 768 · OpenAI text-embedding-3-small → 1536. Match vector(N) to your model.
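To produce a real 768-dimension vector for this table, call Ollama's embeddings endpoint (this assumes the model has been pulled with ollama pull nomic-embed-text):

```shell
# Returns JSON with an "embedding" array of 768 floats for the given prompt
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "Docker Basics module"}'
```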
7
Install pgAdmin — Visual Database Manager

Browser-based GUI to manage all databases visually — no command line needed for day-to-day queries:

bash · step 1 — create script
touch ~/ai-stack/run-pgadmin.sh
bash · step 2 — open in nano
nano ~/ai-stack/run-pgadmin.sh
bash · step 3 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -d \
  --name pgadmin \
  --network ai-stack \
  --restart unless-stopped \
  -e PGADMIN_DEFAULT_EMAIL=admin@ailab.com \
  -e PGADMIN_DEFAULT_PASSWORD=admin123 \
  -v pgadmin-data:/var/lib/pgadmin \
  -p 5050:80 \
  dpage/pgadmin4
✏️Replace before saving:
Placeholder       What to put                         Notes
──────────────────────────────────────────────────────────────────────────────
admin@ailab.com   Any email you want to use as login  This is your pgAdmin login email — doesn't need to be real
admin123          A strong password of your choice    Used to log into pgAdmin at port 5050 — change this to something secure
bash · step 4 — make executable and run
chmod +x ~/ai-stack/run-pgadmin.sh && bash ~/ai-stack/run-pgadmin.sh
bash · step 5 — open firewall for pgAdmin
ufw allow 5050

Open http://YOUR_SERVER_IP:5050 in your Mac browser → login with email + password above → click Add New Server and fill in:

config · pgAdmin "Add Server" settings
General tab:
  Name:     Shared PostgreSQL

Connection tab:
  Host:     sim-db-1
  Port:     5432
  Username: postgres
  Password: postgres
  Save password: ✓
💡Once connected, you'll see all 5 databases (ailab, n8n, openclaw, ollama_results, simstudio) in pgAdmin's tree. Expand any to browse tables, run queries, manage indexes, etc.
8
Quick Reference — Service → Database Map
reference · service → database
Service          Database         Connection (from container's network)
─────────────────────────────────────────────────────────────────────
Sim.ai           simstudio        postgresql://postgres:postgres@db:5432/simstudio
OpenClaw         openclaw         postgresql://postgres:postgres@sim-db-1:5432/openclaw
n8n              n8n              postgresql://postgres:postgres@sim-db-1:5432/n8n
Ollama outputs   ollama_results   postgresql://postgres:postgres@sim-db-1:5432/ollama_results
General / lab    ailab            postgresql://postgres:postgres@sim-db-1:5432/ailab
pgAdmin (UI)     all of the above http://YOUR_SERVER_IP:5050
─────────────────────────────────────────────────────────────────────
Container hostnames:
  db          → from sim_default network only (Sim.ai's own containers)
  sim-db-1    → from ai-stack network (OpenClaw, n8n, Qdrant, custom tools)
  YOUR_IP     → from your Mac/external

Username / Password: postgres / postgres  (change in production)
⚠️Change the default password postgres to something strong in production: ALTER USER postgres WITH PASSWORD 'your_strong_password'; Then update DATABASE_URL in Sim.ai .env and OpenClaw run script.
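The rotation can be done in one line from the server shell (swap in your own value for your_strong_password):

```shell
# Rotate the postgres superuser password on the shared database
docker exec -it sim-db-1 psql -U postgres \
  -c "ALTER USER postgres WITH PASSWORD 'your_strong_password';"
```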
⚙️
Service Management & Diagnostics
// 3 unified scripts to start, stop, and diagnose your entire AI stack · One-command shortcuts
💡The pattern: instead of running 10+ commands every time you want to start/stop or check your stack, you'll have 3 aliases: ai-start, ai-stop, ai-doctor. Each runs a single script that handles the whole orchestration.
1
Create the Management Folder
bash · create folder for management scripts
mkdir -p ~/ai-stack/manage

All three scripts will live in ~/ai-stack/manage/.

2
Start Script — Launch Everything in Correct Order
📋Why order matters: Sim.ai stack must start first (it provides sim-db-1 which is the shared PostgreSQL). Then network bridges (sim-db-1 → ai-stack, ollama → sim_default). Then dependent services (OpenClaw, n8n need DB). Finally restart Sim.ai so it picks up the connected Ollama.
bash · step 1 — create the script file
touch ~/ai-stack/manage/start-all.sh && nano ~/ai-stack/manage/start-all.sh
bash · step 2 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
# Start all AI Lab services in correct sequence
set +e  # don't exit on individual service failures

# Colors
G='\033[0;32m'; Y='\033[1;33m'; R='\033[0;31m'; B='\033[0;34m'; BOLD='\033[1m'; DIM='\033[2m'; NC='\033[0m'

log()  { echo -e "${B}[$(date +%H:%M:%S)]${NC} $1"; }
ok()   { echo -e "  ${G}✓${NC} $1"; }
warn() { echo -e "  ${Y}⚠${NC}  $1"; }
fail() { echo -e "  ${R}✗${NC} $1"; }

# Helper: start container if exists, else run its create script
start_or_create() {
  local name="$1"
  local script="$2"
  if docker ps --format '{{.Names}}' | grep -q "^${name}$"; then
    ok "$name already running"
  elif docker ps -a --format '{{.Names}}' | grep -q "^${name}$"; then
    docker start "$name" >/dev/null && ok "$name started" || fail "$name failed to start"
  elif [ -n "$script" ] && [ -f "$script" ]; then
    bash "$script" >/dev/null 2>&1 && ok "$name created from script" || fail "$name script failed"
  else
    warn "$name not found, no create script"
  fi
}

echo ""
echo -e "${BOLD}╔════════════════════════════════════════╗${NC}"
echo -e "${BOLD}║   Starting AI Lab Stack                ║${NC}"
echo -e "${BOLD}╚════════════════════════════════════════╝${NC}"

# 1. Ensure ai-stack network exists
log "Checking ai-stack network..."
if docker network ls --format '{{.Name}}' | grep -q '^ai-stack$'; then
  ok "ai-stack network exists"
else
  docker network create ai-stack >/dev/null && ok "ai-stack network created"
fi

# 2. Start Sim.ai compose stack (provides sim-db-1, redis, realtime, simstudio)
log "Starting Sim.ai stack (DB, Redis, Realtime, Simstudio)..."
if [ -f ~/ai-stack/sim/docker-compose.prod.yml ]; then
  cd ~/ai-stack/sim && docker compose -f docker-compose.prod.yml up -d >/dev/null 2>&1 \
    && ok "Sim.ai compose up" || fail "Sim.ai compose failed"
fi

# 3. Connect sim-db-1 to ai-stack network (so OpenClaw, n8n, etc. can reach it)
log "Bridging networks..."
docker network connect ai-stack sim-db-1 2>/dev/null \
  && ok "sim-db-1 connected to ai-stack" || ok "sim-db-1 already on ai-stack"

# 4. Start Ollama
log "Starting Ollama..."
start_or_create ollama ~/ai-stack/ollama/run-ollama.sh

# 5. Connect Ollama to sim_default (so Sim.ai can reach it)
docker network connect sim_default ollama 2>/dev/null \
  && ok "ollama connected to sim_default" || ok "ollama already on sim_default"

# 6. Restart simstudio to pick up Ollama
log "Restarting simstudio to pick up Ollama..."
docker compose -f ~/ai-stack/sim/docker-compose.prod.yml restart simstudio >/dev/null 2>&1 \
  && ok "simstudio restarted"

# 7. Start OpenRouter proxy
log "Starting OpenRouter proxy..."
start_or_create openrouter-proxy ~/ai-stack/openrouter/run-openrouter-proxy.sh

# 8. Start OpenClaw
log "Starting OpenClaw..."
start_or_create openclaw ~/ai-stack/openclaw/run-openclaw.sh

# 9. Start n8n
log "Starting n8n..."
start_or_create n8n ~/ai-stack/n8n/run-n8n.sh

# 10. Start Qdrant
log "Starting Qdrant..."
start_or_create qdrant ~/ai-stack/run-qdrant.sh

# 11. Start pgAdmin
log "Starting pgAdmin..."
start_or_create pgadmin ~/ai-stack/run-pgadmin.sh

# 12. Start Open WebUI (ChatGPT-style UI for Ollama)
log "Starting Open WebUI..."
start_or_create open-webui ~/ai-stack/open-webui/run-open-webui.sh

echo ""
log "All services launched. Status:"
echo ""
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# ═══ Auto-detect server IP and OpenClaw token ═══
SERVER_IP=$(curl -s -4 --max-time 3 ifconfig.me 2>/dev/null)
[ -z "$SERVER_IP" ] && SERVER_IP=$(hostname -I | awk '{print $1}')
[ -z "$SERVER_IP" ] && SERVER_IP="YOUR_SERVER_IP"

OPENCLAW_TOKEN=""
if docker ps --format '{{.Names}}' | grep -q '^openclaw$'; then
  OPENCLAW_TOKEN=$(docker exec openclaw cat /home/node/.openclaw/openclaw.json 2>/dev/null \
    | grep -oE '"token"[[:space:]]*:[[:space:]]*"[^"]+"' | head -1 \
    | sed -E 's/.*"([^"]+)"$/\1/')
fi

# ═══ SSH TUNNEL COMMANDS ═══
echo ""
echo -e "${BOLD}╔════════════════════════════════════════════════════════╗${NC}"
echo -e "${BOLD}║   SSH TUNNELS — Run these on your Mac terminal         ║${NC}"
echo -e "${BOLD}╚════════════════════════════════════════════════════════╝${NC}"
echo ""
echo -e "${BOLD}Option A — All tunnels in ONE command (recommended):${NC}"
echo -e "${DIM}Run this once, keep the terminal open, all 3 services accessible.${NC}"
echo ""
echo -e "${G}  ssh -N \\"
echo "    -L 5678:127.0.0.1:5678 \\"
echo "    -L 8080:open-webui:8080 \\"
echo -e "    -L 18789:127.0.0.1:18789 \\"
echo -e "    root@${SERVER_IP}${NC}"
echo ""
echo -e "${BOLD}Option B — Individual tunnels (one per Mac terminal):${NC}"
echo ""
echo -e "${DIM}# n8n (workflow automation)${NC}"
echo -e "${G}  ssh -N -L 5678:127.0.0.1:5678 root@${SERVER_IP}${NC}"
echo ""
echo -e "${DIM}# Open WebUI (Ollama chat)${NC}"
echo -e "${G}  ssh -N -L 8080:open-webui:8080 root@${SERVER_IP}${NC}"
echo ""
echo -e "${DIM}# OpenClaw Dashboard${NC}"
echo -e "${G}  ssh -N -L 18789:127.0.0.1:18789 root@${SERVER_IP}${NC}"

# ═══ BROWSER URLS ═══
echo ""
echo -e "${BOLD}╔════════════════════════════════════════════════════════╗${NC}"
echo -e "${BOLD}║   BROWSER URLS — Open these in Chrome / Safari         ║${NC}"
echo -e "${BOLD}╚════════════════════════════════════════════════════════╝${NC}"
echo ""
echo -e "${BOLD}🟢 Direct access (no tunnel needed):${NC}"
echo ""
echo -e "  ${G}Sim.ai${NC}             →  http://${SERVER_IP}:3000"
echo -e "  ${G}pgAdmin${NC}            →  http://${SERVER_IP}:5050"
echo -e "  ${G}Qdrant Dashboard${NC}   →  http://${SERVER_IP}:6333/dashboard"
echo -e "  ${G}Ollama API${NC}         →  http://${SERVER_IP}:11434  ${DIM}(API only, not a UI)${NC}"
echo ""
echo -e "${BOLD}🔒 Via SSH tunnel (start tunnels above first):${NC}"
echo ""
echo -e "  ${Y}n8n${NC}                →  http://127.0.0.1:5678"
echo -e "  ${Y}Open WebUI${NC}         →  http://127.0.0.1:8080  ${DIM}(tunnel: -L 8080:open-webui:8080)${NC}"
if [ -n "$OPENCLAW_TOKEN" ]; then
  echo -e "  ${Y}OpenClaw Dashboard${NC} →  http://127.0.0.1:18789/#token=${OPENCLAW_TOKEN}"
else
  echo -e "  ${Y}OpenClaw Dashboard${NC} →  http://127.0.0.1:18789/#token=${R}TOKEN_NOT_FOUND${NC}"
  echo -e "    ${DIM}↳ Get token: docker exec openclaw cat /home/node/.openclaw/openclaw.json | grep token${NC}"
fi
echo ""
echo -e "${BOLD}═════════════════════════════════════════════════════════${NC}"
echo -e "${G}✓ Stack ready. Run 'ai-doctor' for full diagnostics.${NC}"
echo ""

Save with Ctrl+O, Enter, then Ctrl+X.

💡The script uses start_or_create helper — if the container exists it just docker starts it (fast), if it's missing it runs the create script (slower, but only first time after a docker rm).
3
Stop Script — Graceful Shutdown in Reverse Order
bash · step 1 — create the script file
touch ~/ai-stack/manage/stop-all.sh && nano ~/ai-stack/manage/stop-all.sh
bash · step 2 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
# Stop all AI Lab services gracefully (reverse order of start)

G='\033[0;32m'; Y='\033[1;33m'; B='\033[0;34m'; BOLD='\033[1m'; NC='\033[0m'

log() { echo -e "${B}[$(date +%H:%M:%S)]${NC} $1"; }
ok()  { echo -e "  ${G}✓${NC} $1"; }
skip(){ echo -e "  ${Y}-${NC} $1 (not running)"; }

stop_if_running() {
  local name="$1"
  if docker ps --format '{{.Names}}' | grep -q "^${name}$"; then
    docker stop "$name" >/dev/null && ok "Stopped $name"
  else
    skip "$name"
  fi
}

echo ""
echo -e "${BOLD}╔════════════════════════════════════════╗${NC}"
echo -e "${BOLD}║   Stopping AI Lab Stack                ║${NC}"
echo -e "${BOLD}╚════════════════════════════════════════╝${NC}"

# Stop dependent services first (reverse of start order)
log "Stopping dependent services..."
stop_if_running caddy
stop_if_running open-webui
stop_if_running pgadmin
stop_if_running qdrant
stop_if_running n8n
stop_if_running openclaw
stop_if_running openrouter-proxy
stop_if_running ollama

# Stop Sim.ai compose stack last (it has the shared DB)
log "Stopping Sim.ai compose stack..."
if [ -f ~/ai-stack/sim/docker-compose.prod.yml ]; then
  cd ~/ai-stack/sim && docker compose -f docker-compose.prod.yml stop >/dev/null 2>&1 \
    && ok "Sim.ai compose stack stopped"
fi

echo ""
log "All AI Lab services stopped. Remaining:"
echo ""
docker ps --format "table {{.Names}}\t{{.Status}}"
echo ""
echo -e "${G}✓ Shutdown complete. Run 'ai-start' to bring everything back.${NC}"

Save with Ctrl+O, Enter, then Ctrl+X.

⚠️This uses docker stop (graceful, sends SIGTERM with 10s timeout) — not docker kill. Containers can finish writing data before exiting. Use this instead of docker stop $(docker ps -q) which stops everything including unrelated containers.
4
Diagnose Script — Deep Health Check with Logs

This script gives a full health dashboard: container status, network membership, disk usage, recent logs per service, and an error scan.

bash · step 1 — create the script file
touch ~/ai-stack/manage/diagnose.sh && nano ~/ai-stack/manage/diagnose.sh
bash · step 2 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
# AI Lab stack diagnostics — full health check

G='\033[0;32m'; Y='\033[1;33m'; R='\033[0;31m'; B='\033[0;34m'; BOLD='\033[1m'; DIM='\033[2m'; NC='\033[0m'

header() {
  echo ""
  echo -e "${BOLD}${B}═══ $1 ═══${NC}"
}

check_status() {
  local name="$1"
  if docker ps --format '{{.Names}}' | grep -q "^${name}$"; then
    local status=$(docker ps --format '{{.Status}}' --filter "name=^${name}$")
    echo -e "  ${G}● UP${NC}       $name  ${DIM}($status)${NC}"
  elif docker ps -a --format '{{.Names}}' | grep -q "^${name}$"; then
    echo -e "  ${Y}● STOPPED${NC}  $name"
  else
    echo -e "  ${R}● MISSING${NC}  $name"
  fi
}

# ═══ HEADER ═══
echo ""
echo -e "${BOLD}╔════════════════════════════════════════════════╗${NC}"
echo -e "${BOLD}║       AI Lab Stack Diagnostics                 ║${NC}"
echo -e "${BOLD}║       $(date '+%Y-%m-%d %H:%M:%S')                      ║${NC}"
echo -e "${BOLD}╚════════════════════════════════════════════════╝${NC}"

# ═══ CONTAINER STATUS ═══
header "Container Status"
services=(ollama openclaw n8n openrouter-proxy qdrant pgadmin sim-db-1 sim-redis-1 sim-realtime-1 sim-simstudio-1 open-webui)
for svc in "${services[@]}"; do
  check_status "$svc"
done

# ═══ DOCKER NETWORKS ═══
header "Docker Networks"
for net in ai-stack sim_default; do
  echo -e "${BOLD}${net}:${NC}"
  if docker network inspect "$net" >/dev/null 2>&1; then
    docker network inspect "$net" --format '{{range .Containers}}  • {{.Name}}{{"\n"}}{{end}}' | sort -u
  else
    echo -e "  ${R}Network missing!${NC}"
  fi
done

# ═══ PORT BINDINGS ═══
header "Exposed Ports"
docker ps --format "table {{.Names}}\t{{.Ports}}" | grep -v "^NAMES"

# ═══ DISK & RESOURCES ═══
header "System Resources"
echo -e "${BOLD}Disk:${NC}"
df -h / | tail -1 | awk '{print "  Root: " $3 " / " $2 "  (" $5 " used)"}'
echo -e "${BOLD}Memory:${NC}"
free -m | grep Mem | awk '{printf "  RAM:  %.1fG / %.1fG  (%d%% used)\n", $3/1024, $2/1024, $3/$2*100}'
echo -e "${BOLD}Docker:${NC}"
docker system df | sed 's/^/  /'

# ═══ RECENT LOGS ═══
header "Recent Logs (last 5 lines per service)"
for svc in ollama openclaw n8n sim-simstudio-1 sim-db-1 sim-realtime-1; do
  if docker ps --format '{{.Names}}' | grep -q "^${svc}$"; then
    echo ""
    echo -e "${BOLD}── $svc ──${NC}"
    docker logs "$svc" --tail 5 2>&1 | sed 's/^/    /'
  fi
done

# ═══ ERROR SCAN ═══
header "Error Scan (last 100 lines per service)"
for svc in ollama openclaw n8n sim-simstudio-1 sim-db-1 sim-realtime-1 qdrant openrouter-proxy; do
  if docker ps --format '{{.Names}}' | grep -q "^${svc}$"; then
    errors=$(docker logs "$svc" --tail 100 2>&1 | grep -iE "(^|[[:space:]])(error|fatal|panic|exception)[: ]" | grep -ivE "no error|0 error|errorlevel|error_log|error-level|no such file|relation \".*\" does not exist|database \".*\" does not exist|role \".*\" does not exist|duplicate key|invalid input syntax for type vector|terminating connection due to administrator command|Failed to load model catalog|getaddrinfo EAI_AGAIN" | wc -l)
    if [ "$errors" -gt 0 ]; then
      echo -e "  ${R}⚠${NC}  $svc: ${R}$errors${NC} error/fatal lines (run: docker logs $svc | grep -i error)"
    else
      echo -e "  ${G}✓${NC} $svc: clean"
    fi
  fi
done

# ═══ CONNECTIVITY TESTS ═══
header "Service Connectivity"
test_endpoint() {
  local name="$1"
  local url="$2"
  if curl -sf -o /dev/null --max-time 3 "$url"; then
    echo -e "  ${G}✓${NC} $name → $url"
  else
    echo -e "  ${R}✗${NC} $name → $url  (no response)"
  fi
}
test_endpoint "Ollama"      "http://localhost:11434"
test_endpoint "n8n"         "http://localhost:5678"
test_endpoint "Sim.ai"      "http://localhost:3000"
test_endpoint "Qdrant"      "http://localhost:6333"
test_endpoint "pgAdmin"     "http://localhost:5050"

# ═══ SUMMARY ═══
echo ""
echo -e "${BOLD}═════════════════════════════════════════════${NC}"
running=$(docker ps -q | wc -l)
# Known one-shot containers (init/migration runners) that complete and exit normally
oneshot_pattern="^(sim-migrations-1)$"
# grep -c always prints a count (0 when nothing matches), so no fallback is needed
oneshot_completed=$(docker ps -a --filter "status=exited" --format '{{.Names}}' | grep -cE "$oneshot_pattern")
total_stopped=$(docker ps -aq --filter "status=exited" | wc -l)
unexpected_stopped=$((total_stopped - oneshot_completed))
if [ "$unexpected_stopped" -gt 0 ]; then
  echo -e "  Containers: ${G}$running running${NC}, ${R}$unexpected_stopped unexpectedly stopped${NC}, ${DIM}$oneshot_completed one-shot completed${NC}"
else
  echo -e "  Containers: ${G}$running running${NC}, ${DIM}$oneshot_completed one-shot completed${NC}"
fi
echo -e "${BOLD}═════════════════════════════════════════════${NC}"
echo ""

Save with Ctrl+O, Enter, then Ctrl+X.

5
Make All Scripts Executable
bash · make all three scripts executable
chmod +x ~/ai-stack/manage/start-all.sh ~/ai-stack/manage/stop-all.sh ~/ai-stack/manage/diagnose.sh
bash · verify they're executable
ls -la ~/ai-stack/manage/

Expected — should show -rwxr-xr-x permissions:

expected output
-rwxr-xr-x  1 root root  ...  start-all.sh
-rwxr-xr-x  1 root root  ...  stop-all.sh
-rwxr-xr-x  1 root root  ...  diagnose.sh
6
Create Shell Aliases for One-Word Access

Add three aliases to ~/.bashrc so you can run each script with a short command from anywhere:

bash · command 1 — append aliases to .bashrc
cat >> ~/.bashrc << 'ALIASES'

# === AI Lab Stack Management ===
alias ai-start='bash ~/ai-stack/manage/start-all.sh'
alias ai-stop='bash ~/ai-stack/manage/stop-all.sh'
alias ai-reboot='bash ~/ai-stack/manage/stop-all.sh && sleep 3 && bash ~/ai-stack/manage/start-all.sh'
alias ai-doctor='bash ~/ai-stack/manage/diagnose.sh'
alias ai-status='docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"'
alias ai-logs='f() { docker logs "$1" --tail 50 -f; }; f'
ALIASES
bash · command 2 — reload bashrc so aliases work in current session
source ~/.bashrc

Now you have 6 shortcuts:

Alias               What it does
──────────────────────────────────────────────────────────────────────────────
ai-start            Start entire AI stack in correct order (Sim.ai → bridges → Ollama → OpenClaw → n8n → Qdrant → pgAdmin)
ai-stop             Stop all AI Lab services gracefully in reverse order
ai-reboot           Full clean restart — stops everything, waits 3s, starts everything back up
ai-doctor           Full diagnostic: status, networks, disk, logs, errors, connectivity
ai-status           Quick one-line container status (no diagnostics)
ai-logs CONTAINER   Tail logs of a specific container (e.g. ai-logs n8n)
7
Test the Aliases

Run all three to confirm they work:

bash · test 1 — quick status
ai-status
bash · test 2 — full diagnostic (read the output carefully)
ai-doctor
bash · test 3 — tail a specific container's logs (Ctrl+C to exit)
ai-logs n8n
If ai-doctor shows everything green: your stack is fully operational. If anything shows red (✗) or yellow (⚠️), use ai-logs CONTAINER to investigate that specific service.
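A common first fix when one service shows red is a single-container restart followed by a log tail, sketched here with n8n as the example:

```shell
# Restart just the failing container, then follow its logs as it comes back up (Ctrl+C to exit)
docker restart n8n && docker logs n8n --tail 20 -f
```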
⚠️Don't run ai-start right now if your stack is already running — it's safe (each helper checks if container exists), but redundant. Use it after ai-stop or after a server reboot.
8
Daily Workflow Reference
Scenario                               Command
──────────────────────────────────────────────────────────────────────────────
Morning — check everything's healthy   ai-doctor
Quick "is it up?" check                ai-status
After server reboot                    ai-start (containers auto-restart but this re-bridges networks)
Before server maintenance              ai-stop
One service acting up                  ai-logs SERVICE_NAME
After installing new service           Edit start-all.sh to include it, then ai-doctor to verify
Full clean restart                     ai-reboot
💡For new services: when you add a service later (e.g. LangFlow, Weaviate), edit ~/ai-stack/manage/start-all.sh to add it to the start sequence, and the services=(...) array in ~/ai-stack/manage/diagnose.sh so it appears in the health check.
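For example, wiring a hypothetical LangFlow container into the stack takes two edits (the langflow name and script path are assumptions; match whatever run script you actually create):

```shell
# In ~/ai-stack/manage/start-all.sh, add alongside the other start_or_create calls:
log "Starting LangFlow..."
start_or_create langflow ~/ai-stack/langflow/run-langflow.sh

# In ~/ai-stack/manage/diagnose.sh, append the new name to the services array:
services=(ollama openclaw n8n openrouter-proxy qdrant pgadmin sim-db-1 sim-redis-1 sim-realtime-1 sim-simstudio-1 open-webui langflow)
```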
🌐
Domain & HTTPS Setup
// Map your AWS-purchased domain to Hostinger · Auto-HTTPS for every service · Let's Encrypt via Caddy
📋This tab uses pocketcode.in throughout. Swap with your own domain everywhere if different. Same for the server IP 187.127.169.30.
⚠️About AWS Certificate Manager (ACM): The public SSL cert you provisioned in ACM for pocketcode.in and *.pocketcode.in cannot be exported — AWS doesn't allow extracting the private key for use outside AWS services. ACM certs only work with CloudFront, ELB, API Gateway, etc.

Solution: We'll use Let's Encrypt via Caddy — a reverse proxy that automatically requests, installs, and renews free SSL certs. Same trusted-by-browsers result, fully automated, no AWS dependency.

Keep your ACM cert — if you later put CloudFront in front of the server for CDN/DDoS protection, you can use it there. For direct HTTPS on the origin server, Let's Encrypt is the standard approach.
🔒Security warning — going public: Once your services are accessible via public domains, expect bot traffic, scanners, and brute-force attempts within minutes. Mitigations applied below:
  • HTTPS-only — Caddy auto-redirects HTTP → HTTPS
  • Basic auth at the proxy layer for services with no built-in auth (Ollama, Qdrant, OpenRouter proxy)
  • Service-level auth preserved — Sim.ai, n8n, pgAdmin, OpenClaw still require their own login/token on top
Use strong passwords. Consider IP allowlisting later if traffic gets noisy.
1
Plan Your Subdomain Map

Decide which services get a public subdomain. Recommended mapping:

| Subdomain | Service | Internal target | Auth layer |
|---|---|---|---|
| sim.pocketcode.in | Sim.ai | sim-simstudio-1:3000 | Sim.ai login |
| sim-realtime.pocketcode.in | Sim.ai WebSocket / realtime backend | sim-realtime-1:3002 | Internal (used by browser JS only) |
| chat.pocketcode.in | Open WebUI (Ollama chat) | open-webui:8080 | Open WebUI login (admin signup) |
| n8n.pocketcode.in | n8n | n8n:5678 | n8n login |
| openclaw.pocketcode.in | OpenClaw Dashboard | openclaw:18789 | Token in URL |
| pgadmin.pocketcode.in | pgAdmin (DB UI) | pgadmin:80 | pgAdmin login |
| ollama.pocketcode.in | Ollama API | ollama:11434 | Caddy basic auth ⚠️ |
| qdrant.pocketcode.in | Qdrant Dashboard | qdrant:6333 | Caddy basic auth ⚠️ |
| openrouter.pocketcode.in | OpenRouter Proxy (LiteLLM) | openrouter-proxy:4000 | Caddy basic auth ⚠️ |
💡About direct PostgreSQL access: The database itself (port 5432) uses its own native TLS protocol, not HTTPS. To connect from DBeaver/TablePlus on your Mac, use pocketcode.in:5432 directly — no Caddy involved. Use pgadmin.pocketcode.in for the web UI.
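To enforce TLS from a GUI client, encode sslmode=require in the connection URL itself. A minimal sketch (pg_url is a made-up helper; the user and db names below are placeholders, not your real credentials):

```bash
# Hypothetical helper: builds a TLS-enforcing PostgreSQL URL you can paste
# into DBeaver/TablePlus. Host, user and db are placeholders.
pg_url() {
  local host="$1" user="$2" db="$3"
  printf 'postgresql://%s@%s:5432/%s?sslmode=require\n' "$user" "$host" "$db"
}
# pg_url pocketcode.in postgres postgres
```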
2
⚠️ Disable DNSSEC in Route 53 (Critical First Step)
⚠️If your domain has DNSSEC enabled in Route 53, you MUST disable it before proceeding. AWS Route 53 occasionally has stale/expired DNSSEC signatures that block Let's Encrypt cert acquisition with cryptic errors like DNSSEC: Signature Expired or DNSSEC: Bogus. There's no workaround — every cert-issuing path validates DNSSEC, including DNS-01 challenges. Disabling DNSSEC takes 5-30 minutes; trying to work around it can waste days.

Step 1 — Check if DNSSEC is enabled:

bash · check DNSSEC status
dig +dnssec pocketcode.in @8.8.8.8 | grep -iE "rrsig|EDE"

If output contains RRSIG records or EDE: 7 (Signature Expired) errors → DNSSEC is enabled and likely broken. Continue below. If output is empty → DNSSEC is already off; skip this section entirely and go to section 3 (Configure DNS in AWS Route 53).

Step 2 — Remove the DS record at the registrar (parent zone):

  1. Open Route 53 → Registered domains
  2. Click pocketcode.in
  3. Scroll to DNSSEC keys section
  4. Click the existing key → Delete DS record from the registry
  5. Confirm deletion
💡AWS handles the registry communication automatically since they're your registrar. You just click delete.

Step 3 — Wait for DNS propagation:

AWS will warn about 48-hour TTL. In practice with AWS-registered domains, propagation completes in 5-30 minutes. Monitor with:

bash · monitor — run every few minutes
dig DS pocketcode.in @8.8.8.8 +short
dig +dnssec pocketcode.in @8.8.8.8 | grep -i "rrsig"
dig +dnssec pocketcode.in @8.8.8.8 | grep -i "EDE"

When all three return empty → safe to proceed.
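The three probes above can be folded into one function so a loop can do the polling for you. A sketch (dnssec_clear is a made-up helper name; swap in your own domain):

```bash
# Returns 0 only when the DS, RRSIG, and EDE probes are all empty,
# i.e. DNSSEC is fully gone from resolvers' view.
dnssec_clear() {
  local d="$1"
  [ -z "$(dig DS "$d" @8.8.8.8 +short)" ] &&
  [ -z "$(dig +dnssec "$d" @8.8.8.8 | grep -i rrsig)" ] &&
  [ -z "$(dig +dnssec "$d" @8.8.8.8 | grep -i "EDE")" ]
}
# Poll every 60 s until clear (uncomment to run):
# until dnssec_clear pocketcode.in; do sleep 60; done; echo "DNSSEC fully off"
```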

Step 4 — Disable DNSSEC signing in Route 53:

  1. Open Route 53 → Hosted zones → click pocketcode.in
  2. Click the DNSSEC signing tab
  3. Click Disable DNSSEC signing
  4. Select Parent zone (since the DS was at the .in registry)
  5. Check the affirmation box, type disable, click Disable
DNSSEC is now fully off. You won't need it back — for personal/dev setups, the only thing it protects against is DNS cache poisoning, which TLS already mitigates at the application layer.
3
Configure DNS in AWS Route 53

Point your domain at your Hostinger server. Since you bought the domain through AWS, DNS is managed in Route 53.

  1. Sign in to AWS Route 53 console
  2. Click Hosted zones → click pocketcode.in
  3. Click Create record, fill in:

Record 1 — root domain:

| Field | Value |
|---|---|
| Record name | (leave empty) |
| Record type | A |
| Value | 187.127.169.30 |
| TTL | 300 (5 minutes — short for testing, raise later) |

Record 2 — wildcard for all subdomains:

| Field | Value |
|---|---|
| Record name | * |
| Record type | A |
| Value | 187.127.169.30 |
| TTL | 300 |

Click Create records. The wildcard *.pocketcode.in means every subdomain (sim, n8n, ollama, etc.) automatically points to your server — no need to create individual records.

✏️Replace: 187.127.169.30 with your Hostinger server IP if different.
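If you'd rather script this than click through the console, the same two records can be upserted with one AWS CLI change batch. A sketch, assuming the CLI is configured (make_batch is a made-up helper; YOUR_ZONE_ID must come from aws route53 list-hosted-zones-by-name):

```bash
# Builds the Route 53 change-batch JSON for the root + wildcard A records.
# make_batch is a hypothetical helper; domain and IP are arguments.
make_batch() {
  local domain="$1" ip="$2"
  cat <<EOF
{"Changes":[
 {"Action":"UPSERT","ResourceRecordSet":{"Name":"${domain}.","Type":"A","TTL":300,"ResourceRecords":[{"Value":"${ip}"}]}},
 {"Action":"UPSERT","ResourceRecordSet":{"Name":"*.${domain}.","Type":"A","TTL":300,"ResourceRecords":[{"Value":"${ip}"}]}}
]}
EOF
}
# make_batch pocketcode.in 187.127.169.30 > /tmp/batch.json
# aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID \
#   --change-batch file:///tmp/batch.json
```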
4
Verify DNS Propagation

DNS changes typically propagate within 5–30 minutes. Test from your Mac (or server) before moving on:

bash · command 1 — check root domain
dig +short pocketcode.in
bash · command 2 — check a subdomain (wildcard)
dig +short sim.pocketcode.in

Both should return your server IP:

expected output
187.127.169.30
⚠️If you get an empty response or different IP, wait a few more minutes and retry. You can also check propagation worldwide at dnschecker.org.
💡Don't skip this step! Caddy will fail to issue SSL certs if DNS isn't pointing at your server yet (Let's Encrypt does an HTTP challenge that requires reaching your server via the domain).
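For a hands-off wait, a small polling loop can do the retrying for you. A sketch (wait_for_dns is a made-up helper; the 30-second interval and retry count are arbitrary defaults):

```bash
# Polls until the name resolves to the expected IP, then returns 0.
# Exits nonzero after the given number of tries.
wait_for_dns() {
  local name="$1" want="$2" tries="${3:-20}" got=""
  for _ in $(seq 1 "$tries"); do
    got=$(dig +short "$name" @8.8.8.8 | head -1)
    if [ "$got" = "$want" ]; then echo "OK: $name -> $got"; return 0; fi
    sleep 30
  done
  echo "TIMEOUT: $name resolves to '$got'" >&2
  return 1
}
# wait_for_dns sim.pocketcode.in 187.127.169.30
```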
5
Open Firewall Ports 80 and 443
bash · open HTTP and HTTPS ports
ufw allow 80 && ufw allow 443

Port 80 stays open for Caddy's automatic HTTP → HTTPS redirect (and would serve Let's Encrypt's HTTP-01 challenge, though we switch to DNS-01 in Step 6). Port 443 is HTTPS.

bash · verify both are open
ufw status | grep -E "^(80|443)"

Expected — both ports listed as ALLOW:

expected output
80                         ALLOW       Anywhere
443                        ALLOW       Anywhere
80 (v6)                    ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)
6
Create AWS IAM User for Route 53 DNS Challenge
💡Why DNS-01 instead of HTTP-01? Let's Encrypt offers two ways to prove you own a domain: serve a file via HTTP (HTTP-01), or create a DNS TXT record (DNS-01). We use DNS-01 because it:
  • Works without an HTTP listener (firewall-friendly)
  • Can issue wildcard certs like *.pocketcode.in if needed later
  • Doesn't depend on inbound port 80 reachability from Let's Encrypt's servers
Caddy needs API access to Route 53 to programmatically create/delete TXT records during cert acquisition. We'll create a dedicated IAM user with minimal permissions.
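When Caddy later runs the challenge (Step 11), it briefly publishes a TXT record you can observe directly. A sketch probe (acme_txt is a made-up helper; empty output outside the roughly 60-second challenge window is normal):

```bash
# Shows the DNS-01 challenge TXT record while a cert request is in flight.
acme_txt() {
  dig +short TXT "_acme-challenge.$1" @8.8.8.8
}
# acme_txt sim.pocketcode.in
```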

Step 1 — Create the IAM policy:

  1. Open IAM → Policies → Create policy
  2. Click the JSON tab
  3. Replace contents with:
json · IAM policy for Caddy Route 53 access
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Route53Caddy",
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZonesByName",
        "route53:GetChange",
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "*"
    }
  ]
}
  4. Click Next
  5. Policy name: CaddyRoute53DNSChallenge
  6. Click Create policy

Step 2 — Create the IAM user:

  1. Open IAM → Users → Create user
  2. Username: caddy-route53-dns
  3. Click Next
  4. Select Attach policies directly
  5. Search for and check CaddyRoute53DNSChallenge
  6. Click Next → Create user

Step 3 — Generate access keys:

  1. Click into the user you just created
  2. Click the Security credentials tab
  3. Scroll to Access keys → Create access key
  4. Select Application running outside AWS
  5. Click Next → Create access key
  6. COPY BOTH values immediately — Access key ID + Secret access key. The secret is only shown ONCE.
🔒Treat these credentials like passwords. Save them in a password manager. If exposed, anyone could modify your Route 53 TXT records (though the impact is limited thanks to the minimal policy).
7
Generate Basic Auth Hashes

Services without built-in auth (Ollama, Qdrant, OpenRouter Proxy) need a password layer at Caddy. Generate a bcrypt hash for your chosen password:

bash · generate password hash (replace PASSWORD with your chosen password)
docker run --rm caddy:latest caddy hash-password --plaintext "PASSWORD"
✏️Replace: PASSWORD with a strong password (12+ chars, mix of letters/numbers/symbols). Use the same password for all three services or generate three hashes for different passwords.

Expected output — a bcrypt hash starting with $2a$14$:

expected output
$2a$14$HASHEDPASSWORDLOOKSLIKETHISLONGSTRING.OfNumbersAndLetters/AbCdEfG
⚠️Copy the hash — you'll paste it into the Caddyfile in the next step. The hash is safe to share (you can't reverse it to the password), but the password itself goes nowhere — only the hash.
8
Build Custom Caddy Image with Route 53 Plugin

The official Caddy Docker image doesn't include the Route 53 DNS plugin. We build a custom image with it baked in. This is a 2-minute build using Caddy's official xcaddy tool.

bash · step 1 — create the Dockerfile
mkdir -p ~/ai-stack/caddy && nano ~/ai-stack/caddy/Dockerfile
dockerfile · step 2 — paste this, then Ctrl+O save, Ctrl+X exit
FROM caddy:builder AS builder
RUN xcaddy build --with github.com/caddy-dns/route53

FROM caddy:latest
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
bash · step 3 — build the custom image (takes ~2 min)
cd ~/ai-stack/caddy && docker build -t caddy-route53:latest .

Expected output ends with:

expected output
Successfully tagged caddy-route53:latest

Verify the plugin is included:

bash · check plugin is registered
docker run --rm caddy-route53:latest caddy list-modules | grep route53

Should print dns.providers.route53 — confirms the plugin is loaded.

9
Create the Caddyfile
bash · open Caddyfile in nano
nano ~/ai-stack/caddy/Caddyfile
caddyfile · paste this, then Ctrl+O save, Ctrl+X exit
{
    email your-email@example.com
    acme_dns route53 {
        max_retries 10
    }
}

# Landing page (optional)
pocketcode.in {
    respond "AI Lab — services: sim, n8n, openclaw, pgadmin, ollama, qdrant, openrouter" 200
}

# Sim.ai — workflow canvas
sim.pocketcode.in {
    reverse_proxy sim-simstudio-1:3000
}

# Sim.ai realtime — WebSocket backend (used by /workspace for live collaboration)
sim-realtime.pocketcode.in {
    reverse_proxy sim-realtime-1:3002
}

# Open WebUI — ChatGPT-style frontend for Ollama
chat.pocketcode.in {
    reverse_proxy open-webui:8080
}

# n8n — workflow automation
n8n.pocketcode.in {
    reverse_proxy n8n:5678
}

# OpenClaw — Control UI
openclaw.pocketcode.in {
    reverse_proxy openclaw:18789
}

# pgAdmin — PostgreSQL UI
pgadmin.pocketcode.in {
    reverse_proxy pgadmin:80
}

# Ollama — gated with basic auth
ollama.pocketcode.in {
    basic_auth {
        admin PASTE_BCRYPT_HASH_HERE
    }
    reverse_proxy ollama:11434
}

# Qdrant — gated with basic auth
qdrant.pocketcode.in {
    basic_auth {
        admin PASTE_BCRYPT_HASH_HERE
    }
    reverse_proxy qdrant:6333
}

# OpenRouter proxy (LiteLLM) — gated with basic auth
openrouter.pocketcode.in {
    basic_auth {
        admin PASTE_BCRYPT_HASH_HERE
    }
    reverse_proxy openrouter-proxy:4000
}
💡The acme_dns route53 directive in the global block tells Caddy to use DNS-01 challenge (Route 53 API) for all certs instead of HTTP-01. This works with broken/disabled DNSSEC and produces wildcard-capable certs.
✏️Replace before saving:
| Placeholder | What to put | Where to get it |
|---|---|---|
| your-email@example.com | Your email | Used for Let's Encrypt notifications (expiry warnings) |
| pocketcode.in | Your domain | Appears 10 times (once per site block) — replace ALL if different |
| PASTE_BCRYPT_HASH_HERE | The bcrypt hash from Step 7 | Output of caddy hash-password (appears 3 times — same hash OR different ones) |
| admin | Username for basic auth | Pick any username — appears 3 times |
💡Unify the 3 basic-auth hashes to remember one password instead of three. Generate one hash, paste it in all three places. The 3 services (ollama, qdrant, openrouter) then share admin / your-password.
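Before launching, you can ask Caddy itself to syntax-check the file. A sketch wrapper (caddy_validate is a made-up name; it runs caddy validate inside the custom image from Step 8, so unknown directives and typos surface before any cert is requested):

```bash
# Pre-flight syntax check of the Caddyfile. Catches parse errors without
# starting the proxy or touching certificates.
caddy_validate() {
  docker run --rm \
    -v "$HOME/ai-stack/caddy/Caddyfile:/etc/caddy/Caddyfile" \
    caddy-route53:latest caddy validate --config /etc/caddy/Caddyfile
}
# caddy_validate   # prints "Valid configuration" when the file parses cleanly
```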
10
Create the Run Script
bash · step 1 — create the script
touch ~/ai-stack/caddy/run-caddy.sh && nano ~/ai-stack/caddy/run-caddy.sh
bash · step 2 — paste this, then Ctrl+O save, Ctrl+X exit
#!/bin/bash
docker run -d \
  --name caddy \
  --network ai-stack \
  --restart unless-stopped \
  -p 80:80 \
  -p 443:443 \
  -p 443:443/udp \
  -e AWS_ACCESS_KEY_ID="PASTE_ACCESS_KEY_ID" \
  -e AWS_SECRET_ACCESS_KEY="PASTE_SECRET_ACCESS_KEY" \
  -e AWS_REGION="us-east-1" \
  -v ~/ai-stack/caddy/Caddyfile:/etc/caddy/Caddyfile \
  -v caddy_data:/data \
  -v caddy_config:/config \
  caddy-route53:latest
✏️Replace before saving:
| Placeholder | What to put |
|---|---|
| PASTE_ACCESS_KEY_ID | Access key ID from Step 6 |
| PASTE_SECRET_ACCESS_KEY | Secret access key from Step 6 |
💡Port 443/udp enables HTTP/3 (QUIC) — faster page loads. Image caddy-route53:latest is the custom image you built in Step 8. AWS credentials are needed by the Route 53 plugin for DNS-01 challenge.

Save with Ctrl+O, Enter, Ctrl+X.

11
Launch Caddy & Watch Cert Acquisition
bash · command 1 — make executable and run
chmod +x ~/ai-stack/caddy/run-caddy.sh && bash ~/ai-stack/caddy/run-caddy.sh
bash · command 2 — connect Caddy to Sim.ai's network
docker network connect sim_default caddy 2>/dev/null || echo "Already connected — OK"

Caddy needs to reach sim-simstudio-1 and sim-realtime-1, which live on the sim_default network. The remaining services (n8n, openclaw, pgadmin, ollama, qdrant, openrouter-proxy) are on ai-stack.

bash · command 3 — watch Caddy acquire SSL certs (Ctrl+C to exit)
docker logs caddy -f

You should see Caddy requesting and obtaining certs for each subdomain — takes ~30-60 seconds:

expected log output (excerpt)
"trying to solve challenge" identifier=sim.pocketcode.in challenge_type=dns-01
"certificate obtained successfully" identifier=sim.pocketcode.in
"certificate obtained successfully" identifier=n8n.pocketcode.in
...

Press Ctrl+C to stop tailing logs once you see all certs obtained.

💡Don't panic if you see early "Incorrect TXT record" errors. If Caddy left stale TXT records from a previous attempt in Route 53, Let's Encrypt will reject the first try → Caddy creates fresh records → second attempt succeeds within ~30 seconds. Self-healing behavior.
🔄ZeroSSL fallback: Caddy tries Let's Encrypt first. If LE has rate-limited your account from prior failed attempts, Caddy automatically falls back to ZeroSSL (also free, fully trusted). You may see some certs from LE and others from ZeroSSL — both are equally valid.
⚠️If cert acquisition fails: Verify DNSSEC is fully off (Step 2 verification commands), check AWS credentials in the run script, and check the IAM user has the policy attached.
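To see which CA actually issued a given cert (Let's Encrypt or the ZeroSSL fallback), inspect the live certificate with openssl. A sketch (cert_issuer is a made-up helper; it needs the openssl CLI, available by default on both your Mac and the server):

```bash
# Prints the issuer line of the certificate a live HTTPS endpoint serves.
cert_issuer() {
  echo | openssl s_client -servername "$1" -connect "$1:443" 2>/dev/null \
    | openssl x509 -noout -issuer
}
# cert_issuer sim.pocketcode.in
```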
12
Test Each Subdomain in Browser

Open each in Chrome — you should get a green padlock 🔒 and the service should load:

| URL | What you should see |
|---|---|
| https://sim.pocketcode.in | Sim.ai login page |
| https://sim-realtime.pocketcode.in/socket.io/ | JSON error (Socket.IO expects WS handshake — HTTP 400 means reachable ✓) |
| https://chat.pocketcode.in | Open WebUI sign-up / sign-in |
| https://n8n.pocketcode.in | n8n workflow canvas (or login) |
| https://openclaw.pocketcode.in/#token=YOUR_TOKEN | "Device pairing required" — handled in Step 13 |
| https://pgadmin.pocketcode.in | pgAdmin login |
| https://ollama.pocketcode.in | Browser prompts for username/password → "Ollama is running" |
| https://qdrant.pocketcode.in/dashboard | Browser prompts for password → Qdrant UI |
| https://openrouter.pocketcode.in/health | Browser prompts for password → JSON health response |

Or quick CLI test for all subdomains at once:

bash · quick HTTPS test from server
for sub in sim sim-realtime chat n8n openclaw pgadmin ollama qdrant openrouter; do
  echo -n "$sub.pocketcode.in: "
  curl -sI -o /dev/null -w "%{http_code}\n" "https://$sub.pocketcode.in"
done

Expected — 200, 302, or 401 (auth required) are all fine; sim-realtime returns 400 because Socket.IO rejects plain HTTP probes:

expected output
sim.pocketcode.in: 200
sim-realtime.pocketcode.in: 400
chat.pocketcode.in: 200
n8n.pocketcode.in: 200
openclaw.pocketcode.in: 200
pgadmin.pocketcode.in: 302
ollama.pocketcode.in: 401
qdrant.pocketcode.in: 401
openrouter.pocketcode.in: 401
🔁Browser caches basic auth aggressively. If you mistyped a password earlier and Chrome cached it, you'll keep failing silently. Two fixes:
  • Test in incognito — clean slate, fresh auth prompt
  • Verify password from CLI: curl -u username:password https://ollama.pocketcode.in/ — should return "Ollama is running"
If CLI works but browser doesn't, clear site data for the subdomain in chrome://settings/clearBrowserData.
13
First Visit to OpenClaw — Approve Device Pairing
🔐OpenClaw has built-in device pairing — each new origin (browser + URL combination) requires explicit approval from the Gateway host. When you visit https://openclaw.pocketcode.in for the first time, you'll see "Device pairing required" with a request ID. This is normal security behavior, not an error.

What you'll see in browser:

openclaw error message
Device pairing required
This browser needs one-time approval from the Gateway host before it can use the Control UI.

1. Run openclaw devices list on the Gateway host.
2. Approve this request: openclaw devices approve REQUEST_ID
3. Reconnect after the approval completes.

Step 1 — Get the pending request ID:

bash · list pending pairing requests
docker exec -it openclaw node /app/openclaw.mjs devices list

Look at the Pending table — note the Request ID (long UUID like a9f96054-a981-408d-87c4-e33214fb5a37).

💡The request ID shown in your browser might be stale if the browser tab reconnected after the original error. Always use the ID from devices list — that's the live one.

Step 2 — Approve the request:

bash · approve device (replace UUID with yours)
docker exec -it openclaw node /app/openclaw.mjs devices approve a9f96054-a981-408d-87c4-e33214fb5a37
✏️Replace: a9f96054-a981-408d-87c4-e33214fb5a37 with the request ID from your devices list output.

Step 3 — Reload the OpenClaw page:

Refresh https://openclaw.pocketcode.in/#token=YOUR_TOKEN in your browser. The Control UI loads.

🔁One-time per browser, per origin. Future logins from the same browser won't re-prompt. New device (phone, another laptop) → new pairing required. To inspect paired devices later: docker exec openclaw node /app/openclaw.mjs devices list
🔖Bookmark this for one-click access: https://openclaw.pocketcode.in/chat#token=YOUR_TOKEN — the URL fragment auto-fills the token and lands you directly in chat.
14
Update n8n to Use the New Domain

n8n's N8N_HOST, WEBHOOK_URL, and N8N_PROTOCOL are baked into the container at creation. Update the run script:

bash · step 1 — edit run-n8n.sh
nano ~/ai-stack/n8n/run-n8n.sh

Find and update these 3 environment variables:

| Old value | New value |
|---|---|
| -e N8N_HOST=187.127.169.30 \ | -e N8N_HOST=n8n.pocketcode.in \ |
| -e N8N_PROTOCOL=http \ | -e N8N_PROTOCOL=https \ |
| -e WEBHOOK_URL=http://187.127.169.30:5678/ \ | -e WEBHOOK_URL=https://n8n.pocketcode.in/ \ |

Save (Ctrl+O, Enter, Ctrl+X), then recreate the container:

bash · step 2 — restart n8n with new env
docker stop n8n && docker rm n8n && bash ~/ai-stack/n8n/run-n8n.sh
💡n8n's encryption key is preserved (it's stored in the n8n-data volume, not the container). All your workflows and credentials remain intact.
15
Update Sim.ai to Use the New Domain

Sim.ai needs several env vars updated: BETTER_AUTH_URL for auth callbacks, and NEXT_PUBLIC_SOCKET_URL for the workspace's WebSocket connection.

⚠️Critical for /workspace to load: Sim.ai's workspace page uses a separate realtime container (sim-realtime-1 on port 3002) for live collaboration. Without NEXT_PUBLIC_SOCKET_URL pointing to a publicly accessible HTTPS endpoint, the page loads completely blank with no visible error — diagnostics still pass since containers are healthy. The fix is the sim-realtime.pocketcode.in subdomain you added in Step 9.
bash · step 1 — edit .env
nano ~/ai-stack/sim/.env

Update these lines (add them if missing):

env · update or add
BETTER_AUTH_URL=https://sim.pocketcode.in
NEXT_PUBLIC_APP_URL=https://sim.pocketcode.in
NEXTAUTH_URL=https://sim.pocketcode.in

# WebSocket realtime — two different URLs for two different audiences:
SOCKET_SERVER_URL=http://realtime:3002
NEXT_PUBLIC_SOCKET_URL=https://sim-realtime.pocketcode.in
💡Why two SOCKET URLs?
  • SOCKET_SERVER_URL — used by the Sim.ai backend to talk to the realtime container. They're both in the same Docker network, so internal Docker DNS (http://realtime:3002) is the fastest path. No TLS overhead, no external hop.
  • NEXT_PUBLIC_SOCKET_URL — baked into the browser JS bundle. Browsers can't resolve internal Docker hostnames, so they need a public HTTPS URL. The NEXT_PUBLIC_* prefix tells Next.js to expose this env var to client-side code.

Save (Ctrl+O, Enter, Ctrl+X), then recompose Sim.ai (full down + up required since NEXT_PUBLIC_* vars are read at startup):

bash · step 2 — recompose Sim.ai stack
cd ~/ai-stack/sim && docker compose -f docker-compose.prod.yml down && docker compose -f docker-compose.prod.yml up -d
docker network connect ai-stack sim-db-1 2>/dev/null
docker restart n8n

Verify the env made it in:

bash · verify
docker exec sim-simstudio-1 env | grep SOCKET

Expected:

expected output
SOCKET_SERVER_URL=http://realtime:3002
NEXT_PUBLIC_SOCKET_URL=https://sim-realtime.pocketcode.in
🔁Browser cache warning: If you previously accessed Sim.ai via IP (http://YOUR_IP:3000) or SSH tunnel (http://127.0.0.1:3000), the browser has stale cookies, localStorage, and an old JS bundle cached with NEXT_PUBLIC_SOCKET_URL="". After this update, you'll likely see a blank /workspace page in your regular browser even though everything is configured correctly. Two fixes: (a) test in incognito window to confirm the server is working, then (b) in your regular browser, F12 → Application → Storage → Clear site data, then refresh and log in again. One-time cleanup.
⚠️The down && up sequence may take 30-60 seconds. Don't refresh sim.pocketcode.in until all containers show healthy.
16
Update Management Scripts for HTTPS

Now that Caddy is in the picture, the management scripts need three updates:

  1. start-all.sh — launch Caddy, reconnect networks, print HTTPS URLs as primary access
  2. diagnose.sh — include Caddy in container/error scans, filter transient DNS errors
  3. Both — handle the sim-db-1 network drop that happens on Sim.ai recreate

Update 1 — Add Caddy launch + network bridge to start-all.sh:

bash · edit start-all.sh
nano ~/ai-stack/manage/start-all.sh

Find this block:

existing code — find this
# 11. Start pgAdmin
log "Starting pgAdmin..."
start_or_create pgadmin ~/ai-stack/run-pgadmin.sh

Add this block right after it:

bash · paste after Step 11
# 12. Start Caddy (HTTPS reverse proxy)
log "Starting Caddy..."
start_or_create caddy ~/ai-stack/caddy/run-caddy.sh

# Reconnect Caddy to sim_default after Sim.ai compose may have recreated network
docker network connect sim_default caddy 2>/dev/null \
  && ok "caddy connected to sim_default" || ok "caddy already on sim_default"

Update 2 — Replace the BROWSER URLS section with HTTPS-aware output:

Find the section starting with # ═══ SSH TUNNEL COMMANDS ═══ at the bottom of the script. Delete everything from that line to the end of the file, then paste:

bash · paste at end of start-all.sh
# ═══ HTTPS PUBLIC URLS (primary access path) ═══
echo ""
echo -e "${BOLD}╔════════════════════════════════════════════════════════╗${NC}"
echo -e "${BOLD}║   PUBLIC HTTPS URLs — Open in any browser              ║${NC}"
echo -e "${BOLD}╚════════════════════════════════════════════════════════╝${NC}"
echo ""
echo -e "  ${G}Sim.ai${NC}             →  https://sim.pocketcode.in"
echo -e "  ${G}Open WebUI${NC}         →  https://chat.pocketcode.in"
echo -e "  ${G}n8n${NC}                →  https://n8n.pocketcode.in"
echo -e "  ${G}pgAdmin${NC}            →  https://pgadmin.pocketcode.in"
if [ -n "$OPENCLAW_TOKEN" ]; then
  echo -e "  ${G}OpenClaw${NC}           →  https://openclaw.pocketcode.in/chat#token=${OPENCLAW_TOKEN}"
else
  echo -e "  ${G}OpenClaw${NC}           →  https://openclaw.pocketcode.in/#token=${R}TOKEN_NOT_FOUND${NC}"
fi
echo ""
echo -e "${BOLD}🔑 Basic auth required (username: admin):${NC}"
echo ""
echo -e "  ${Y}Ollama API${NC}         →  https://ollama.pocketcode.in"
echo -e "  ${Y}Qdrant Dashboard${NC}   →  https://qdrant.pocketcode.in/dashboard"
echo -e "  ${Y}OpenRouter Proxy${NC}   →  https://openrouter.pocketcode.in"

# ═══ FALLBACK: SSH TUNNELS (only if HTTPS down) ═══
echo ""
echo -e "${BOLD}╔════════════════════════════════════════════════════════╗${NC}"
echo -e "${BOLD}║   FALLBACK — SSH tunnels (only if Caddy/DNS down)      ║${NC}"
echo -e "${BOLD}╚════════════════════════════════════════════════════════╝${NC}"
echo ""
echo -e "${DIM}# All tunnels in one command — keep terminal open${NC}"
echo -e "${G}  ssh -N \\"
echo "    -L 5678:127.0.0.1:5678 \\"
echo "    -L 18789:127.0.0.1:18789 \\"
echo -e "    root@${SERVER_IP}${NC}"
echo ""
echo -e "${DIM}# Then in browser:${NC}"
echo -e "  http://127.0.0.1:5678                       ${DIM}(n8n)${NC}"
echo -e "  http://127.0.0.1:18789/#token=${OPENCLAW_TOKEN:-TOKEN}  ${DIM}(OpenClaw)${NC}"
echo -e "  http://${SERVER_IP}:3000                    ${DIM}(Sim.ai)${NC}"
echo -e "  http://${SERVER_IP}:5050                    ${DIM}(pgAdmin)${NC}"

echo ""
echo -e "${BOLD}═════════════════════════════════════════════════════════${NC}"
echo -e "${G}✓ Stack ready. Run 'ai-doctor' for full diagnostics.${NC}"
echo ""

Save (Ctrl+O, Enter, Ctrl+X).

Update 3 — Add Caddy to diagnose.sh services array:

bash · edit diagnose.sh
nano ~/ai-stack/manage/diagnose.sh

Find this line:

existing — find this
services=(ollama openclaw n8n openrouter-proxy qdrant pgadmin sim-db-1 sim-redis-1 sim-realtime-1 sim-simstudio-1 open-webui)

Replace with (add caddy at the end):

bash · replace with
services=(ollama openclaw n8n openrouter-proxy qdrant pgadmin sim-db-1 sim-redis-1 sim-realtime-1 sim-simstudio-1 open-webui caddy)

Update 4 — Filter transient DNS errors in diagnose.sh error scan:

💡Why this filter is needed: When Sim.ai compose recreates sim-db-1, there's a brief window where n8n can't resolve the hostname → throws Error: getaddrinfo EAI_AGAIN sim-db-1 → log line lingers in the 100-line buffer. This is a transient race condition that recovers automatically once the network bridge re-establishes (handled in start-all.sh Step 3). The error is benign and should be filtered.

Still in diagnose.sh, press Ctrl+W, type errors=, then press Enter to jump to this long line:

existing — find this (one long line)
    errors=$(docker logs "$svc" --tail 100 2>&1 | grep -iE "(^|[[:space:]])(error|fatal|panic|exception)[: ]" | grep -ivE "no error|0 error|errorlevel|error_log|error-level|no such file|relation \".*\" does not exist|database \".*\" does not exist|role \".*\" does not exist|duplicate key|invalid input syntax for type vector|terminating connection due to administrator command|Failed to load model catalog" | wc -l)

Replace with (adds |getaddrinfo EAI_AGAIN to the filter):

bash · replace with
    errors=$(docker logs "$svc" --tail 100 2>&1 | grep -iE "(^|[[:space:]])(error|fatal|panic|exception)[: ]" | grep -ivE "no error|0 error|errorlevel|error_log|error-level|no such file|relation \".*\" does not exist|database \".*\" does not exist|role \".*\" does not exist|duplicate key|invalid input syntax for type vector|terminating connection due to administrator command|Failed to load model catalog|getaddrinfo EAI_AGAIN" | wc -l)

Save (Ctrl+O, Enter, Ctrl+X).

Update 5 — Test everything:

bash · verify everything is green
ai-doctor

You should see:

  • Caddy listed in Container Status as UP
  • Caddy on both ai-stack AND sim_default networks
  • All services with green ✓ clean in Error Scan
  • All connectivity checks green
⚠️Manual Sim.ai restarts still drop sim-db-1 from ai-stack. Whenever you run docker compose -f docker-compose.prod.yml down/up directly (instead of ai-start), follow up with: docker network connect ai-stack sim-db-1 && docker restart n8n. ai-start handles this automatically via Step 3 of the script.
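If you restart Sim.ai by hand often, the two follow-up commands can live in one function you source from your shell profile. A sketch (sim_rebridge is a made-up name; the container and network names match this guide's stack):

```bash
# One-call cleanup after a manual compose down/up of Sim.ai:
# re-bridges sim-db-1 onto ai-stack, then bounces n8n so it reconnects.
sim_rebridge() {
  docker network connect ai-stack sim-db-1 2>/dev/null || true
  docker restart n8n
}
```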
17
Final URL Reference & Daily Use

Your services are now accessible from anywhere with proper HTTPS. Bookmark these:

| Service | Public URL (HTTPS) |
|---|---|
| Sim.ai | https://sim.pocketcode.in |
| Open WebUI (Ollama chat) | https://chat.pocketcode.in |
| n8n | https://n8n.pocketcode.in |
| OpenClaw Dashboard | https://openclaw.pocketcode.in/#token=YOUR_TOKEN |
| pgAdmin | https://pgadmin.pocketcode.in |
| Ollama API | https://ollama.pocketcode.in (basic auth) |
| Qdrant Dashboard | https://qdrant.pocketcode.in/dashboard (basic auth) |
| OpenRouter Proxy | https://openrouter.pocketcode.in (basic auth) |
💡SSH tunnels no longer required. The previous setup needed tunnels for n8n (secure cookie) and OpenClaw (Control UI binding). With HTTPS now in place, both work directly via their public URLs.
🔄Cert renewal: Caddy auto-renews Let's Encrypt certs 30 days before expiry. No manual action needed. Don't count on expiry emails as a safety net: Let's Encrypt discontinued expiration notification emails in mid-2025. If you suspect a renewal problem, check docker logs caddy.
⚠️Adding a new service later? Edit ~/ai-stack/caddy/Caddyfile to add a new subdomain block, then docker exec caddy caddy reload --config /etc/caddy/Caddyfile — no container restart needed, zero downtime.
📖
Host This Guide
// Serve this guide at setup.pocketcode.in · Caddy static file server · ~5 min setup
💡Why host it? Having this guide accessible at a stable URL means you can reference it from any device (phone, tablet, another laptop) without copying files around. It's also handy for sharing with teammates or future-you when you rebuild the stack 6 months from now. Total setup: 5 minutes.
📦Bundle file: The companion setup-page.zip contains index.html (this guide, renamed) and a brief README.md. You'll upload index.html from this bundle to your server.
1
Prepare Server Folder for the Guide

Create a folder on the server where Caddy will read the HTML from:

bash · on server
mkdir -p ~/ai-stack/setup-page

That's it for the server side prep. Next we upload the file from your Mac.

2
Upload index.html from Your Mac

On your Mac, navigate to wherever you extracted setup-page.zip:

bash · on Mac
cd ~/Downloads/setup-page   # or wherever you extracted
ls -lh index.html           # confirm file exists, ~280KB

Copy it to the server with scp:

bash · on Mac
scp index.html root@187.127.169.30:~/ai-stack/setup-page/index.html
✏️Replace: 187.127.169.30 with your server IP.

Verify on the server:

bash · on server
ls -lh ~/ai-stack/setup-page/index.html

Should show the file at ~280KB.

3
Add Volume Mount to Caddy Run Script

Caddy needs read access to the folder. Edit the run script to add a volume mount:

bash · edit run-caddy.sh
nano ~/ai-stack/caddy/run-caddy.sh

Find this line:

existing — find this
  -v ~/ai-stack/caddy/Caddyfile:/etc/caddy/Caddyfile \

Add this line right after it (between the Caddyfile mount and caddy_data):

bash · add this line
  -v ~/ai-stack/setup-page:/srv/setup-page:ro \

The :ro flag makes the mount read-only — Caddy can serve files but can't modify them. Defense in depth.

Save (Ctrl+O, Enter, Ctrl+X).

4
Generate Basic Auth Hash (Recommended)
🔒Why protect the page? This guide contains your server IP, domain names, and references to your stack configuration. While not catastrophic if leaked (no passwords or keys), it's still a recon target for attackers. Adding basic auth at the proxy layer takes 30 seconds.

Generate a hash with your chosen password:

bash · generate hash (replace PASSWORD)
docker run --rm caddy:latest caddy hash-password --plaintext "PASSWORD"

Or, if you already have a password for ollama/qdrant/openrouter that you can reuse, find the existing hash:

bash · find existing hash
grep -A1 'basic_auth' ~/ai-stack/caddy/Caddyfile | grep -E '^\s+\w+\s+\$2a\$' | head -1

Copy the hash starting with $2a$14$... — you'll paste it in the next step.

💡Want it public instead? Skip this step and omit the basic_auth block in Step 5. The page will be open to anyone with the URL.
5
Add setup.pocketcode.in Block to Caddyfile
bash · edit Caddyfile
nano ~/ai-stack/caddy/Caddyfile

Add this block at the end (or anywhere — order doesn't matter in Caddy):

caddyfile · with basic auth (recommended)
# Setup guide — this very page
setup.pocketcode.in {
    basic_auth {
        admin PASTE_BCRYPT_HASH_HERE
    }
    root * /srv/setup-page
    file_server
    encode gzip
}
✏️Replace:
| Placeholder | What to put |
|---|---|
| setup.pocketcode.in | Your domain |
| admin | Username for basic auth (pick any) |
| PASTE_BCRYPT_HASH_HERE | The hash from Step 4 |
💡What each directive does:
  • root * /srv/setup-page — serve files from the mounted folder
  • file_server — enable static file serving (auto-serves index.html on the root path)
  • encode gzip — compress responses (~280KB HTML → ~50KB over the wire)
  • basic_auth — gate access at the proxy layer
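The gzip saving is easy to sanity-check locally. A rough sketch using a synthetic file (real savings depend on your actual HTML, but repetitive markup compresses extremely well):

```shell
#!/bin/sh
# Generate ~160 KB of repetitive HTML-ish text, then gzip it.
# This is roughly what `encode gzip` does per-response over the wire.
for i in $(seq 1 5000); do
  printf '<div class="step">content</div>\n'
done > /tmp/sample.html

gzip -kf /tmp/sample.html        # -k keeps the original, -f overwrites old .gz
wc -c < /tmp/sample.html         # original size in bytes
wc -c < /tmp/sample.html.gz      # compressed size — a small fraction of the original
```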

Save (Ctrl+O, Enter, Ctrl+X).

6
Add DNS Record for setup.pocketcode.in
Already done if you have a wildcard A record. Tab 13 Step 3 set up *.pocketcode.in → server IP, which automatically covers setup.pocketcode.in. Skip to Step 7.

If you only created individual A records, add one more:

  1. Open Route 53 → Hosted zones → pocketcode.in
  2. Click Create record
  3. Record name: setup
  4. Record type: A
  5. Value: 187.127.169.30 (your server IP)
  6. TTL: 300
  7. Click Create records

Wait ~5 minutes for DNS propagation, then verify:

bash · verify DNS
dig +short setup.pocketcode.in

Should return your server IP.

7
Restart Caddy with New Volume Mount

The volume mount change requires a container recreate (not just a config reload):

bash · recreate Caddy with new mount
docker stop caddy && docker rm caddy
bash ~/ai-stack/caddy/run-caddy.sh
# re-attach the Sim.ai network (error hidden if already connected / not present)
docker network connect sim_default caddy 2>/dev/null || true

Watch Caddy acquire the new cert:

bash · watch logs
docker logs caddy -f 2>&1 | grep -E "setup.pocketcode|certificate obtained"

Within ~30-60 seconds you should see:

expected output
"certificate obtained successfully" identifier=setup.pocketcode.in

Press Ctrl+C to exit the log tail.

💡Existing certs (sim, n8n, openclaw, etc.) persist in the caddy_data volume — they're not re-issued. Only the new setup.pocketcode.in cert is acquired.
8
Visit setup.pocketcode.in

Open in your browser:

URL
https://setup.pocketcode.in

Expected flow:

  1. Browser shows green padlock 🔒
  2. Basic auth prompt appears — enter admin + your password
  3. The guide loads — same interactive tabs as the local file

Quick CLI test from server (with your credentials):

bash · CLI test
curl -u admin:YOUR_PASSWORD -sI https://setup.pocketcode.in/ | head -5

Should return HTTP/2 200.
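Under the hood, `curl -u` (and the browser prompt) just send an `Authorization: Basic` header containing base64 of `user:pass` — seeing the raw value can help when debugging auth failures. A quick local demo (PASSWORD is a stand-in for your real password):

```shell
# `-u admin:PASSWORD` is shorthand for sending this header value:
printf 'admin:PASSWORD' | base64
# -> YWRtaW46UEFTU1dPUkQ=
# equivalent request:
#   curl -H "Authorization: Basic YWRtaW46UEFTU1dPUkQ=" -sI https://setup.pocketcode.in/
```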

🔁Browsers cache basic auth credentials aggressively. If a wrong password gets cached, you'll keep getting 401 responses without a fresh prompt. Test in a private/incognito window to bypass the cached credentials.
9
Updating the Guide Later

When you make edits to the guide (or get an updated version), upload the new file — no Caddy restart needed:

bash · on Mac — re-upload
scp index.html root@187.127.169.30:~/ai-stack/setup-page/index.html

Hard-refresh your browser (Cmd+Shift+R on Mac, Ctrl+Shift+R on Windows) to bypass cache. New version appears immediately.

💡Pro move — create an upload alias on your Mac. Add this to ~/.zshrc: alias push-guide='scp ~/Downloads/setup-page/index.html root@187.127.169.30:~/ai-stack/setup-page/index.html'. Then just run push-guide any time you want to update.
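If you want the alias to be a bit safer, a small wrapper that refuses to upload a missing, empty, or non-HTML file is a reasonable upgrade. The `check_guide` helper below is illustrative (paths match the alias above; adjust to yours):

```shell
#!/bin/sh
# Refuse to push a bad file: missing, empty, or not HTML.
check_guide() {
  f="$1"
  [ -s "$f" ] || { echo "missing or empty: $f"; return 1; }
  head -c 512 "$f" | grep -qi '<html' || { echo "not HTML: $f"; return 1; }
  echo "ok: $f"
}

# demo on a throwaway file
printf '<!doctype html><html><body>guide</body></html>' > /tmp/index.html
check_guide /tmp/index.html
```

Chain it before the upload: `check_guide ~/Downloads/setup-page/index.html && scp ~/Downloads/setup-page/index.html root@187.127.169.30:~/ai-stack/setup-page/index.html`.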
10
(Optional) Public Mode — No Auth

If you decide later to make the guide fully public (no basic auth), edit the Caddyfile:

bash
nano ~/ai-stack/caddy/Caddyfile

Remove the basic_auth block from the setup.pocketcode.in section so it looks like:

caddyfile · public mode
setup.pocketcode.in {
    root * /srv/setup-page
    file_server
    encode gzip
}

Save, then reload (no container restart needed):

bash
docker exec caddy caddy reload --config /etc/caddy/Caddyfile
⚠️Before making public, review the guide once more for any leftover sensitive info — server IPs, internal hostnames, password hints, etc. Then it's safe to share with the world.