| Resource | Hostinger KVM 8 | Why |
|---|---|---|
| vCPU | 8 cores AMD EPYC | Parallel container workloads |
| RAM | 32 GB | Sim.ai alone needs 12 GB |
| Storage | 400 GB NVMe | LLM models + DB growth |
| OS | Ubuntu 24.04 LTS (Noble Numbat) | cgroup v2 · best Docker support |
During setup, select Ubuntu 24.04 LTS as the OS. After provisioning, note your server's public IP from the hPanel dashboard.
ssh root@YOUR_SERVER_IP

# Verify Ubuntu version
lsb_release -a
# Expected: Ubuntu 24.04.x LTS

# Confirm cgroup v2 (Ubuntu 24.04 default)
stat -fc %T /sys/fs/cgroup
# Expected output: cgroup2fs ✓
| Placeholder | What to put | Where to find it |
|---|---|---|
| YOUR_SERVER_IP | Your Hostinger server's public IP address | hPanel → VPS section → click your server → IP shown at top of dashboard (e.g. 82.115.x.x) |
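Once connected, you can also confirm the VPS resources match the KVM 8 plan in the table above; these are standard commands, and the exact figures will vary slightly by plan:

# CPU cores (expect 8)
nproc
# Total RAM (expect ~32 GB)
free -h
# Root filesystem size (expect ~400 GB)
df -h /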
apt update && apt upgrade -y
apt install -y curl wget git nano ufw fail2ban htop
# Official Docker install — works perfectly on Ubuntu 24.04
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
apt install -y docker-compose-plugin

# Verify
docker --version
docker compose version
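As an optional smoke test, you can confirm the Docker daemon actually runs containers before building the stack on top of it:

# Pulls a tiny test image, runs it once, and removes the container afterwards
docker run --rm hello-world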
One shared Docker bridge network lets all containers talk to each other by name. It is completely internal to the server, so no extra Hostinger networking is needed. Run each command separately:
docker network create ai-stack
mkdir -p ~/ai-stack/{ollama,claude-code,openclaw,sim}
echo "✅ Network and folders ready"
Run each command separately:
ufw allow OpenSSH
ufw allow 8080
ufw allow 3000
ufw allow 11434   # Ollama API
ufw --force enable && ufw status
# Optional: restrict the Ollama API port to a single trusted IP instead of leaving it open
ufw allow from YOUR_IP to any port 11434

The Ollama run command below spans multiple lines. Create a script file, paste the content, make it executable, then run it:
touch ~/ai-stack/ollama/run-ollama.sh
nano ~/ai-stack/ollama/run-ollama.sh
#!/bin/bash
# Start Ollama on the shared ai-stack network with persistent model storage
docker run -d \
  --name ollama \
  --network ai-stack \
  --restart unless-stopped \
  -v ollama-data:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama
chmod +x ~/ai-stack/ollama/run-ollama.sh
bash ~/ai-stack/ollama/run-ollama.sh
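Before pulling a model, it's worth confirming the container actually came up:

# The container should show as "Up"; the logs typically end with the API listening on port 11434
docker ps --filter name=ollama
docker logs --tail 20 ollama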
docker exec -it ollama ollama pull llama3.2
curl http://localhost:11434/api/generate -d '{
"model": "llama3.2",
"prompt": "Hello World!"
}'
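By default /api/generate streams the reply back as a series of JSON lines. If you'd rather receive one complete JSON object, add "stream": false to the request body:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello World!",
  "stream": false
}'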
docker exec -it ollama ollama list
Other containers can reach Ollama at http://ollama:11434 via the ai-stack network.

| Model | VRAM | Best for |
|---|---|---|
| llama3.2:3b | ~2 GB | Fast responses, low resource |
| llama3.1:8b | ~5 GB | Good quality general use |
| mistral:7b | ~5 GB | Coding + reasoning |
| deepseek-r1:8b | ~6 GB | Strong reasoning tasks |
| gemma2:9b | ~7 GB | Curriculum & education content |
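If you want several of the models in this table available up front, they can be pulled in one pass; the tags are the ones listed above, and pulling all of them takes roughly 20 GB of disk:

for model in llama3.2:3b llama3.1:8b mistral:7b deepseek-r1:8b gemma2:9b; do
  docker exec ollama ollama pull "$model"
done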