Build and deploy AI agent workflows in minutes.
Design agent workflows visually on a canvas—connect agents, tools, and blocks, then run them instantly.
Leverage Copilot to generate nodes, fix errors, and iterate on flows directly from natural language.
Upload documents to a vector store and let agents answer questions grounded in your specific content.
Cloud-hosted: sim.ai
Self-hosted (NPM package):

```bash
npx simstudio
```

Docker must be installed and running on your machine.
| Flag | Description |
|---|---|
| `-p, --port <port>` | Port to run Sim on (default: 3000) |
| `--no-pull` | Skip pulling the latest Docker images |
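For example, to run Sim on a different port and skip the image pull:

```bash
npx simstudio --port 3001 --no-pull
```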
```bash
# Clone the repository
git clone https://github.com/simstudioai/sim.git

# Navigate to the project directory
cd sim

# Start Sim
docker compose -f docker-compose.prod.yml up -d
```

Access the application at http://localhost:3000/
Run Sim with local AI models using Ollama - no external APIs required:
```bash
# Start with GPU support (automatically downloads the gemma3:4b model)
docker compose -f docker-compose.ollama.yml --profile setup up -d

# For CPU-only systems:
docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d
```

Wait for the model to download, then visit http://localhost:3000. Add more models with:

```bash
docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b
```

If you already have Ollama running on your host machine (outside Docker), you need to configure `OLLAMA_URL` to use `host.docker.internal` instead of `localhost`:
```bash
# Docker Desktop (macOS/Windows)
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d

# Linux (add extra_hosts or use the host IP)
docker compose -f docker-compose.prod.yml up -d  # Then set OLLAMA_URL to your host's IP
```

Why? When running inside Docker, `localhost` refers to the container itself, not your host machine. `host.docker.internal` is a special DNS name that resolves to the host.
For Linux users, you can either:

- Use your host machine's actual IP address (e.g., `http://192.168.1.100:11434`)
- Add `extra_hosts: ["host.docker.internal:host-gateway"]` to the simstudio service in your compose file, as sketched below
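A minimal sketch of the second option, assuming the Compose service is named `simstudio` (adjust the service name to match your compose file):

```bash
# Create a Compose override that maps host.docker.internal to the host gateway
cat > docker-compose.override.yml <<'EOF'
services:
  simstudio:
    extra_hosts:
      - "host.docker.internal:host-gateway"
EOF

# Start Sim with the override applied
OLLAMA_URL=http://host.docker.internal:11434 \
  docker compose -f docker-compose.prod.yml -f docker-compose.override.yml up -d
```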
Sim also supports vLLM for self-hosted models with an OpenAI-compatible API:

```bash
# Set these environment variables
VLLM_BASE_URL=http://your-vllm-server:8000
VLLM_API_KEY=your_optional_api_key  # Only if your vLLM instance requires auth
```

When running with Docker, use `host.docker.internal` if vLLM is on your host machine (same as with Ollama above).
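For example, if vLLM runs on your host and Sim runs in Docker (a sketch, assuming vLLM listens on port 8000 as above):

```bash
VLLM_BASE_URL=http://host.docker.internal:8000 docker compose -f docker-compose.prod.yml up -d
```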
- Open VS Code with the Remote - Containers extension
- Open the project and click "Reopen in Container" when prompted
- Run `bun run dev:full` in the terminal or use the `sim-start` alias
- This starts both the main application and the realtime socket server
Requirements:
- Bun runtime
- Node.js v20+ (required for sandboxed code execution)
- PostgreSQL 12+ with pgvector extension (required for AI embeddings)
Note: Sim uses vector embeddings for AI features like knowledge bases and semantic search, which requires the pgvector PostgreSQL extension.
- Clone and install dependencies:
```bash
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
```

- Set up PostgreSQL with pgvector:
You need PostgreSQL with the vector extension for embedding support. Choose one option:
Option A: Using Docker (Recommended)
```bash
# Start PostgreSQL with the pgvector extension
docker run --name simstudio-db \
  -e POSTGRES_PASSWORD=your_password \
  -e POSTGRES_DB=simstudio \
  -p 5432:5432 -d \
  pgvector/pgvector:pg17
```

Option B: Manual Installation
- Install PostgreSQL 12+ and the pgvector extension
- See the pgvector installation guide (an example of enabling the extension is shown below)
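If your setup doesn't create the extension automatically, you can enable it manually (a hedged example, assuming a local `simstudio` database and the `psql` client):

```bash
# Enable the pgvector extension in the target database
psql -d simstudio -c "CREATE EXTENSION IF NOT EXISTS vector;"
```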
- Set up environment:
```bash
cd apps/sim
cp .env.example .env  # Configure with required variables (DATABASE_URL, BETTER_AUTH_SECRET, BETTER_AUTH_URL)
```

Update your `.env` file with the database URL:

```bash
DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
```

- Set up the database:
First, configure the database package environment:
```bash
cd packages/db
cp .env.example .env
```

Update your `packages/db/.env` file with the database URL:

```bash
DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
```

Then run the migrations:

```bash
cd packages/db  # Required so drizzle picks up the correct .env file
bunx drizzle-kit migrate --config=./drizzle.config.ts
```

- Start the development servers:
Recommended approach - run both servers together (from the project root):

```bash
bun run dev:full
```

This starts both the main Next.js application and the realtime socket server required for full functionality.

Alternative - run the servers separately:

Next.js app (from the project root):

```bash
bun run dev
```

Realtime socket server (from the `apps/sim` directory, in a separate terminal):

```bash
cd apps/sim
bun run dev:sockets
```

Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:
- Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
- Set the `COPILOT_API_KEY` environment variable in your self-hosted `apps/sim/.env` file to that value, for example as shown below
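A minimal sketch of the resulting entry (the value is a placeholder):

```bash
# apps/sim/.env
COPILOT_API_KEY=your_copilot_api_key_from_sim_ai
```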
Key environment variables for self-hosted deployments (see `apps/sim/.env.example` for the full list):
| Variable | Required | Description |
|---|---|---|
| `DATABASE_URL` | Yes | PostgreSQL connection string with pgvector |
| `BETTER_AUTH_SECRET` | Yes | Auth secret (`openssl rand -hex 32`) |
| `BETTER_AUTH_URL` | Yes | Your app URL (e.g., http://localhost:3000) |
| `NEXT_PUBLIC_APP_URL` | Yes | Public app URL (same as above) |
| `ENCRYPTION_KEY` | Yes | Encryption key (`openssl rand -hex 32`) |
| `OLLAMA_URL` | No | Ollama server URL (default: http://localhost:11434) |
| `VLLM_BASE_URL` | No | vLLM server URL for self-hosted models |
| `COPILOT_API_KEY` | No | API key from sim.ai for Copilot features |
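For example, the two required secrets can be generated like this (paste each value into `apps/sim/.env`):

```bash
# Generate values for the required secrets
openssl rand -hex 32   # use the output as BETTER_AUTH_SECRET
openssl rand -hex 32   # use the output as ENCRYPTION_KEY
```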
If you're running Ollama on your host machine and Sim in Docker, change `OLLAMA_URL` from `localhost` to `host.docker.internal`:

```bash
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d
```

See Using an External Ollama Instance for details.
Ensure PostgreSQL has the pgvector extension installed. When using Docker, wait for the database to be healthy before running migrations.
If ports 3000, 3002, or 5432 are in use, configure alternatives:
```bash
# Custom ports
NEXT_PUBLIC_APP_URL=http://localhost:3100 POSTGRES_PORT=5433 docker compose up -d
```

Export any workflow as a self-contained Python/FastAPI service that can be deployed independently via Docker, Railway, or any container platform.
- Right-click a workflow in the sidebar
- Select "Export as Service"
- Extract the ZIP file
- Configure `.env` with your API keys
- Run the service:

```bash
# With Docker (recommended)
docker compose up -d

# Or run directly
pip install -r requirements.txt
uvicorn main:app --port 8080

# Execute the workflow
curl -X POST http://localhost:8080/execute \
  -H "Content-Type: application/json" \
  -d '{"your": "input"}'
```

| File | Description |
|---|---|
| `workflow.json` | Workflow definition (blocks, connections, configuration) |
| `.env` | Environment variables with your decrypted API keys |
| `.env.example` | Template without sensitive values (safe to commit) |
| `main.py` | FastAPI server with `/execute`, `/health`, `/ready` endpoints |
| `executor.py` | DAG execution engine |
| `handlers/` | Block type handlers (agent, function, condition, etc.) |
| `tools.py` | Native file operation tools |
| `resolver.py` | Variable and input resolution |
| `Dockerfile` | Container configuration |
| `docker-compose.yml` | Docker Compose setup with volume mounts |
| `requirements.txt` | Python dependencies |
| `README.md` | Usage instructions |
The exported service automatically detects and routes to the correct provider based on model name:
| Provider | Models | Environment Variable |
|---|---|---|
| Anthropic | Claude 4 (Opus, Sonnet), Claude 3.5, Claude 3 | ANTHROPIC_API_KEY |
| OpenAI | GPT-4, GPT-4o, o1, o3 | OPENAI_API_KEY |
| Google | Gemini Pro, Gemini Flash | GOOGLE_API_KEY |
Provider is detected from the model name (e.g., claude-sonnet-4-20250514 → Anthropic, gpt-4o → OpenAI).
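For example, the exported service's `.env` would typically include the key for whichever provider your workflow's models use (values are placeholders):

```bash
# .env of the exported service
ANTHROPIC_API_KEY=your_anthropic_key   # Claude models
OPENAI_API_KEY=your_openai_key         # GPT / o-series models
GOOGLE_API_KEY=your_google_key         # Gemini models
```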
| Block Type | Description |
|---|---|
| Start/Trigger | Entry point for workflow execution |
| Agent | LLM calls with tool support (MCP and native) |
| Function | Custom code (JavaScript auto-transpiled to Python) |
| Condition | Branching logic with safe expression evaluation |
| Router | Multi-path routing based on conditions |
| API | HTTP requests to external services |
| Loop | Iteration (for, forEach, while, doWhile) |
| Variables | State management across blocks |
| Response | Final output formatting |
Agents can perform file operations in two ways:
Set `WORKSPACE_DIR` in `.env` to enable sandboxed local file operations:

```bash
# In .env
WORKSPACE_DIR=./workspace
```

When enabled, agents automatically get access to these tools:
| Tool | Description |
|---|---|
| `local_write_file` | Write text content to a file |
| `local_write_bytes` | Write binary data (images, PDFs) as base64 |
| `local_append_file` | Append text to a file (creates it if it does not exist) |
| `local_read_file` | Read text content from a file |
| `local_read_bytes` | Read binary data as base64 |
| `local_delete_file` | Delete a file |
| `local_list_directory` | List files with metadata (size, modified time) |
Enable Command Execution (opt-in for security):
```bash
# In .env
WORKSPACE_DIR=./workspace
ENABLE_COMMAND_EXECUTION=true
```

When enabled, agents also get:
| Tool | Description |
|---|---|
| `local_execute_command` | Run commands like `python script.py` or `node process.js` |
Shell operators (|, >, &&, etc.) are blocked for security.
File Size Limits:
```bash
# Default: 100MB. Set a custom limit in bytes:
MAX_FILE_SIZE=52428800  # 50MB
```

Security: All paths are sandboxed to `WORKSPACE_DIR`. Path traversal attacks (`../`) and symlink escapes are blocked. Agents cannot access files outside the workspace directory.
With Docker: The docker-compose.yml mounts ./output on your host to /app/workspace in the container:
```bash
docker compose up -d
# Files written by agents appear in ./output/ on your host machine
```

If your workflow uses MCP filesystem servers, those tools work as configured. MCP servers handle file operations on their own systems; paths and permissions are determined by the MCP server's configuration.
You can enable both options simultaneously. If WORKSPACE_DIR is set, agents will have access to:
- Local file tools (`local_write_file`, etc.) for the sandboxed workspace
- MCP tools for external filesystem servers
The LLM chooses the appropriate tool based on the tool descriptions and context.
The `/health` endpoint returns the workspace configuration status:

```json
{
  "status": "healthy",
  "workspace": {
    "enabled": true,
    "workspace_dir": "/app/workspace",
    "command_execution_enabled": false,
    "max_file_size": 104857600
  }
}
```

The exported service provides these endpoints:
| Endpoint | Method | Description |
|---|---|---|
| `/execute` | POST | Execute the workflow with input data |
| `/health` | GET | Health check (returns `{"status": "healthy"}`) |
| `/ready` | GET | Readiness check |
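For a quick liveness check (assuming the default port 8080):

```bash
curl http://localhost:8080/health
```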
Example execution:
```bash
curl -X POST http://localhost:8080/execute \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Analyze this data",
    "data": {"key": "value"}
  }'
```

```bash
# Build and run with Docker Compose
docker compose up -d

# View logs
docker compose logs -f

# Stop
docker compose down
```

Manual Docker build:

```bash
docker build -t my-workflow .
docker run -p 8080:8080 --env-file .env my-workflow
```

| Environment Variable | Default | Description |
|---|---|---|
| `HOST` | `0.0.0.0` | Server bind address |
| `PORT` | `8080` | Server port |
| `WORKSPACE_DIR` | (disabled) | Enable local file tools with sandbox path |
| `ENABLE_COMMAND_EXECUTION` | `false` | Allow agents to execute commands |
| `MAX_FILE_SIZE` | `104857600` (100MB) | Maximum file size in bytes |
| `WORKFLOW_PATH` | `workflow.json` | Path to workflow definition |
| `RATE_LIMIT_REQUESTS` | `60` | Max requests per rate limit window |
| `RATE_LIMIT_WINDOW` | `60` | Rate limit window in seconds |
| `MAX_REQUEST_SIZE` | `10485760` (10MB) | Maximum HTTP request body size |
| `LOG_LEVEL` | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR) |
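For example, a `.env` sketch for the exported service that adjusts a few of these defaults (values are illustrative):

```bash
# .env for the exported service
PORT=8080
WORKSPACE_DIR=./workspace
ENABLE_COMMAND_EXECUTION=false
MAX_FILE_SIZE=52428800     # 50MB
RATE_LIMIT_REQUESTS=120    # requests per window
RATE_LIMIT_WINDOW=60       # window length in seconds
LOG_LEVEL=DEBUG
```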
The exported service implements multiple security measures:
- No `eval()`: All condition evaluation uses safe AST-based parsing
- No `shell=True`: Commands are executed without a shell to prevent injection
- Sandboxed file operations: All paths restricted to `WORKSPACE_DIR`
- Shell operator rejection: Pipes, redirects, and command chaining are blocked
- Path traversal protection: `..` and symlink escapes are blocked
- File size limits: Configurable max file size (default 100MB)
- Input validation: Request size limits (default 10MB)
- Rate limiting: Configurable request rate limits (default 60/min)
The exported service supports MCP (Model Context Protocol) tools via the official Python SDK. MCP servers must be running and accessible at their configured URLs.
MCP tools configured in your workflow are automatically available to agent blocks. The service connects to MCP servers via Streamable HTTP transport.
Before export, Sim validates your workflow for compatibility:
- Unsupported block types: Shows which blocks cannot be exported
- Unsupported providers: Shows which LLM providers are not yet supported
- Clear error messages: Displayed via the notification system with actionable feedback
If validation fails, you'll see a notification explaining what needs to be changed.
- Framework: Next.js (App Router)
- Runtime: Bun
- Database: PostgreSQL with Drizzle ORM
- Authentication: Better Auth
- UI: Shadcn, Tailwind CSS
- State Management: Zustand
- Flow Editor: ReactFlow
- Docs: Fumadocs
- Monorepo: Turborepo
- Realtime: Socket.io
- Background Jobs: Trigger.dev
- Remote Code Execution: E2B
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Made with ❤️ by the Sim Team


