This guide covers deploying the Orchestra backend using Docker. For local development, see the project root docs in ../README.md.
For hot-reload local development, use the dedicated dev stack instead of the production-oriented compose flow:

```bash
cd ..
BACKEND_ENV_FILE=$HOME/.env/orchestra/.env.backend \
FRONTEND_ENV_FILE=$HOME/.env/orchestra/.env.frontend \
make dev.docker.up
```

This starts:

- FastAPI backend with reload on http://localhost:8000
- Vite frontend with reload on http://localhost:5173
- TaskIQ worker, PostgreSQL, Redis, SearXNG
- Dozzle log viewer on http://localhost:8088

Useful commands:

```bash
make dev.docker.logs
make dev.docker.ps
make dev.docker.down
```

If you need the sandbox exec server in the same network, enable the optional profile:

```bash
COMPOSE_PROFILES=tools make dev.docker.up
```

- 📋 Prerequisites
- 🚀 Quick Start
- 🧩 Docker Compose Services
- 🧱 Docker Compose Example
- 🏗️ Build Commands
- ⚙️ Environment Variables
- 🗄️ Database Migrations
- 🚢 Production Considerations
- 🧰 Troubleshooting
## 📋 Prerequisites

- Docker installed
- Docker Compose installed
- Access to AI provider API keys (OpenAI, Anthropic, etc.)
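A quick way to confirm the tooling is present before continuing (a minimal sketch; adjust the command list to your setup):

```bash
# Warn about any missing prerequisites without aborting.
for cmd in docker curl; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd" >&2
done
docker compose version >/dev/null 2>&1 || echo "missing: docker compose plugin" >&2
```

The Compose plugin is checked separately because `command -v` only sees top-level executables, not `docker` subcommands.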
## 🚀 Quick Start

Pull the latest image from GitHub Container Registry:

```bash
docker pull ghcr.io/ruska-ai/orchestra:latest
```

Create a `.env.docker` file in the `backend/` directory:

```bash
cd backend
cp .example.env .env.docker
```

Update the following values for Docker networking:

```bash
# Database - use container name instead of localhost
POSTGRES_CONNECTION_STRING="postgresql://admin:test1234@postgres:5432/orchestra?sslmode=disable"

# Tools - use container names for internal services
SEARX_SEARCH_HOST_URL="http://search_engine:8080"
SHELL_EXEC_SERVER_URL="http://exec_server:3005/exec"
```

From the project root directory:

```bash
# Start database and backend
docker compose up postgres orchestra

# Or start all services
docker compose up
```

The API will be available at http://localhost:8000:

- API Docs: http://localhost:8000/docs
- Health Check: http://localhost:8000/health
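Once the containers are up, you can wait for the backend to report healthy before running anything else. A sketch (the `wait_for` helper is illustrative, not part of the project):

```bash
# wait_for URL [RETRIES] [DELAY] -- poll URL until it responds successfully
wait_for() {
  url=$1; retries=${2:-30}; delay=${3:-2}; i=0
  while [ "$i" -lt "$retries" ]; do
    curl -fsS "$url" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```

Usage: `wait_for http://localhost:8000/health && echo "ready"`.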
## 🧩 Docker Compose Services

| Service | Port | Description |
|---|---|---|
| `orchestra` | 8000 | Backend API |
| `postgres` | 5432 | PostgreSQL with pgvector |
| `minio` | 9000/9001 | S3-compatible file storage |
| `search_engine` | 8080 | SearXNG search engine |
| `exec_server` | 3005 | Shell execution server |
| `ollama` | 11434 | Local LLM inference (requires GPU) |
| `redis` | 6379 | Redis message broker (for workers) |
| `worker` | - | TaskIQ worker (no exposed port) |
## 🧱 Docker Compose Example

```yaml
services:
  # PGVector
  postgres:
    image: pgvector/pgvector:pg16
    container_name: postgres
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: test1234
      POSTGRES_DB: postgres
    ports:
      - "5432:5432"

  # Server (use pre-built image or build locally)
  orchestra:
    image: ghcr.io/ruska-ai/orchestra:latest
    container_name: orchestra
    env_file: .env.docker
    ports:
      - "8000:8000"
    depends_on:
      - postgres
```

## 🏗️ Build Commands

The build script copies this README into the image and handles tagging:
```bash
# From project root
bash backend/scripts/build.sh

# Or with custom tag
bash backend/scripts/build.sh v1.0.0
```

Alternatively, build through Docker Compose:

```bash
docker compose build orchestra
```

Or build manually:

```bash
# Copy README first, then build
cp docker/README.md backend/README.md
cd backend
docker build -t orchestra:local .
```

## ⚙️ Environment Variables

### Application

| Variable | Description | Default |
|---|---|---|
| `APP_ENV` | Environment (`development`/`production`) | `development` |
| `APP_LOG_LEVEL` | Logging level | `DEBUG` |
| `APP_SECRET_KEY` | Application secret key | - |
| `JWT_SECRET_KEY` | JWT signing key | - |
| `USER_AGENT` | User agent string for requests | `enso-dev` |
| `TEST_USER_ID` | Test user UUID | - |
### Database

| Variable | Description | Default |
|---|---|---|
| `POSTGRES_CONNECTION_STRING` | PostgreSQL connection string | - |
### AI Providers

| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | - |
| `GROQ_API_KEY` | Groq API key | - |
| `ANTHROPIC_API_KEY` | Anthropic API key | - |
| `XAI_API_KEY` | xAI API key | - |
| `OLLAMA_BASE_URL` | Ollama server URL | - |
### Tools

| Variable | Description | Default |
|---|---|---|
| `SEARX_SEARCH_HOST_URL` | SearXNG search endpoint | `http://localhost:8080` |
| `SHELL_EXEC_SERVER_URL` | Shell execution endpoint | `http://localhost:3005/exec` |
| `TAVILY_API_KEY` | Tavily search API key | - |
### Workers

| Variable | Description | Default |
|---|---|---|
| `REDIS_URL` | Redis connection for the task queue | - |
| `DISTRIBUTED_WORKERS` | Enable distributed worker mode | `false` |

Note: When `DISTRIBUTED_WORKERS` is enabled, run the worker process separately:

```bash
make dev.worker
```
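In the Docker setup, that separate worker process maps to its own compose service. A sketch of what such a service might look like (the service wiring is an assumption; the project's own compose file is authoritative, and the worker's start command is omitted here because it is project-specific):

```yaml
worker:
  image: ghcr.io/ruska-ai/orchestra:latest
  env_file: .env.docker
  depends_on:
    - redis
    - postgres
```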
### File Storage

| Variable | Description | Default |
|---|---|---|
| `MINIO_HOST` | MinIO/S3 host URL | - |
| `S3_REGION` | S3 region | - |
| `ACCESS_KEY_ID` | S3 access key | - |
| `ACCESS_SECRET_KEY` | S3 secret key | - |
| `BUCKET` | S3 bucket name | `enso_dev` |
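Pulling the variables above together, a minimal `.env.docker` for the compose setup in this guide might look like the following (illustrative placeholder values; supply your own keys and secrets):

```bash
APP_ENV=production
APP_LOG_LEVEL=INFO
APP_SECRET_KEY=replace-with-a-strong-random-value
JWT_SECRET_KEY=replace-with-a-strong-random-value
POSTGRES_CONNECTION_STRING="postgresql://admin:test1234@postgres:5432/orchestra?sslmode=disable"
OPENAI_API_KEY=your-openai-key
SEARX_SEARCH_HOST_URL="http://search_engine:8080"
SHELL_EXEC_SERVER_URL="http://exec_server:3005/exec"
```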
## 🗄️ Database Migrations

Run migrations inside the container:

```bash
# Using docker compose exec
docker compose exec orchestra alembic upgrade head

# Or run migrations before starting
docker compose run --rm orchestra alembic upgrade head
```

## 🚢 Production Considerations

- Generate strong values for `APP_SECRET_KEY` and `JWT_SECRET_KEY`
- Use SSL/TLS termination (nginx, traefik, etc.)
- Restrict database access to internal networks
- Never expose `.env` files
- Configure appropriate resource limits in `docker-compose.yml`
- Use a reverse proxy for load balancing
- Enable PostgreSQL connection pooling for high traffic
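For the secret keys, one option is to generate random values with openssl (a sketch; any cryptographically strong generator works):

```bash
# 48 random bytes, base64-encoded, per secret (64 characters each).
APP_SECRET_KEY=$(openssl rand -base64 48)
JWT_SECRET_KEY=$(openssl rand -base64 48)
printf 'APP_SECRET_KEY=%s\nJWT_SECRET_KEY=%s\n' "$APP_SECRET_KEY" "$JWT_SECRET_KEY"
```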
The Dockerfile uses a multi-stage build:

- Builder Stage: Installs dependencies, compiles Python to bytecode (`.pyc`)
- Runtime Stage: Ships only compiled bytecode for smaller image size

Note: Migration files (`.py`) are preserved, since Alembic requires source files.
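The effect of the builder stage's bytecode step can be reproduced locally with CPython's `compileall` module (a standalone sketch, not the project's actual build):

```bash
# Create a tiny package, compile it to .pyc, delete the source,
# and confirm Python still imports it from bytecode alone.
mkdir -p /tmp/bytecode_demo/pkg
echo 'VALUE = 42' > /tmp/bytecode_demo/pkg/__init__.py
python3 -m compileall -b /tmp/bytecode_demo/pkg  # -b writes __init__.pyc next to the source
rm /tmp/bytecode_demo/pkg/__init__.py           # runtime image keeps only the .pyc
(cd /tmp/bytecode_demo && python3 -c 'import pkg; print(pkg.VALUE)')  # prints 42
```

Modules like this import fine without source, which is why only Alembic's migration `.py` files need to survive into the runtime stage.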
## 🧰 Troubleshooting

Backend container fails to start:

```bash
# Check logs
docker compose logs orchestra

# Verify environment file exists
ls -la backend/.env.docker
```

Database connection issues:

```bash
# Ensure postgres is running
docker compose ps postgres

# Check postgres logs
docker compose logs postgres
```

Port 8000 already in use:

```bash
# Check what's using the port
lsof -i :8000
```

Or change the port mapping in `docker-compose.yml`:

```yaml
ports:
  - "8001:8000" # Map to different host port
```