The AI side of the Dual Architecture: Ollama (local LLMs) + self-hosted Supabase + a Caddy reverse proxy. Plain Docker Compose, full control, zero dependency on a hosting panel.
ssh ghost@SEU_IP_BRAIN  # or the same VPS, if single-host
mkdir -p ~/brain7/{ollama,supabase,caddy}
cd ~/brain7
cat > ~/brain7/docker-compose.yml << 'EOF'
services:
  ollama:
    image: ollama/ollama:latest
    container_name: brain7-ollama
    restart: unless-stopped
    volumes:
      - ./ollama:/root/.ollama
    ports:
      - "127.0.0.1:11434:11434"
    environment:
      - OLLAMA_HOST=0.0.0.0
      - OLLAMA_KEEP_ALIVE=24h

  caddy:
    image: caddy:2-alpine
    container_name: brain7-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:
EOF
cat > ~/brain7/caddy/Caddyfile << 'EOF'
ollama.ghostlab77.com.br {
    reverse_proxy brain7-ollama:11434
    encode gzip
}

brain.ghostlab77.com.br {
    reverse_proxy brain7-supabase-kong:8000
    encode gzip
}
EOF
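One caveat for the second site block: the Supabase stack installed later in this module runs as a separate Compose project, so `brain7-caddy` cannot resolve `brain7-supabase-kong` by name until both containers share a Docker network. A sketch, assuming the Supabase project's default network is named `supabase-self_default` (Compose derives it from the directory name; verify with `docker network ls`):

```shell
# Find the network the Supabase Compose project created
docker network ls

# Attach Caddy to it so the reverse_proxy upstream resolves.
# The network name below is an assumption based on the
# directory name "supabase-self"; adjust to what ls showed.
docker network connect supabase-self_default brain7-caddy
```

Run this after the Supabase stack is up; until then, the `brain.ghostlab77.com.br` block will return 502s while the `ollama` block works normally.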
cd ~/brain7
docker compose up -d
# Wait for the containers to come up (~10s)
docker compose ps
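Before pulling any model, a quick way to confirm Ollama is actually listening is its version endpoint, which answers without loading a model:

```shell
# Should print something like {"version":"0.x.y"} if the
# container is up and bound to the loopback port
curl -s http://127.0.0.1:11434/api/version
```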
# Pull the local models (each can take 3-10 min)
docker exec brain7-ollama ollama pull gemma2:2b
docker exec brain7-ollama ollama pull llama3.1:8b
docker exec brain7-ollama ollama pull nomic-embed-text
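Once the pulls finish, the installed models can be listed either through the CLI inside the container or over HTTP:

```shell
# CLI listing inside the container
docker exec brain7-ollama ollama list

# Same information via the API
curl -s http://localhost:11434/api/tags
```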
# Quick test:
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2:2b",
  "prompt": "Diga oi em portugues",
  "stream": false
}'
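The reply comes back as JSON with the generated text in the `response` field. With `jq` installed (an assumption, it is not part of the stack above), the text can be extracted directly; the filter is shown first against a canned reply so it can be tried offline:

```shell
# The filter itself, against a canned reply
echo '{"model":"gemma2:2b","response":"Oi!","done":true}' | jq -r '.response'

# Against the live endpoint
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gemma2:2b", "prompt": "Diga oi em portugues", "stream": false}' \
  | jq -r '.response'
```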
Supabase is the "open-source Firebase": Postgres + Auth + Storage + Realtime + Edge Functions + a Kong gateway. Self-hosting gives you full control without a monthly subscription.
cd ~/brain7
git clone --depth 1 https://github.com/supabase/supabase
cp -r supabase/docker supabase-self
cd supabase-self
cp .env.example .env
# IMPORTANT: edit .env and change ALL the default passwords
nano .env
# Change: POSTGRES_PASSWORD, JWT_SECRET, ANON_KEY, SERVICE_ROLE_KEY, DASHBOARD_USERNAME, DASHBOARD_PASSWORD
Generate the API keys by following https://supabase.com/docs/guides/self-hosting/docker#generate-api-keys. Do NOT keep the defaults; they are public and grant admin access.
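Strong random values for the password and JWT secret can be generated with openssl; `ANON_KEY` and `SERVICE_ROLE_KEY` still need to be derived from `JWT_SECRET` using the generator on the docs page linked above. A minimal sketch:

```shell
# 48 hex chars for the Postgres password
POSTGRES_PASSWORD=$(openssl rand -hex 24)

# 64 hex chars for the JWT secret (the docs ask for a long
# random string, well past the 32-character minimum)
JWT_SECRET=$(openssl rand -hex 32)

echo "POSTGRES_PASSWORD=$POSTGRES_PASSWORD"
echo "JWT_SECRET=$JWT_SECRET"
```

Paste the printed values into `.env` in place of the defaults.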
docker compose pull
docker compose up -d
# Wait ~2 min (the first run downloads a lot)
docker compose ps
# Dashboard at:
# http://SEU_IP:8000
# or, later, via the Caddy proxy:
# https://brain.ghostlab77.com.br
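To confirm the gateway is answering before wiring Caddy to it, hit port 8000 directly; getting any HTTP status line back (even a 401 when no API key is sent) means Kong is up:

```shell
# Any HTTP status line in the reply means Kong is listening
curl -sI http://localhost:8000 | head -n 1
```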
The brain serves the ops apps. Example: an n8n workflow on the Ops VPS calls Ollama on the Brain VPS:
# In n8n, HTTP Request node:
URL: https://ollama.ghostlab77.com.br/api/generate
Method: POST
Body JSON:
{
  "model": "gemma2:2b",
  "prompt": "{{ $json.input }}",
  "stream": false
}
Done: your n8n workflow now has access to a local LLM without paying OpenAI.
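The same request the node sends can be reproduced with curl from the Ops VPS, which is handy for debugging the DNS/Caddy path separately from n8n (a static prompt stands in for the `{{ $json.input }}` expression):

```shell
# Equivalent of the HTTP Request node above, run from the Ops VPS
curl -s https://ollama.ghostlab77.com.br/api/generate \
  -H 'Content-Type: application/json' \
  -d '{"model": "gemma2:2b", "prompt": "Diga oi em portugues", "stream": false}'
```

If this works from the Ops VPS but the n8n node fails, the problem is in the node configuration, not in the network path.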
Module 07 (Cloudflare + SSL): enable the proxy (orange cloud), Full (Strict) SSL, Page Rules, and Worker routing for the 6 subdomains.