
simstudioai/sim

Sim Logo

Build and deploy AI agent workflows in minutes.

Sim.ai · Discord · Twitter · Documentation

Build Workflows with Ease

Design agent workflows visually on a canvas—connect agents, tools, and blocks, then run them instantly.

Workflow Builder Demo

Supercharge with Copilot

Leverage Copilot to generate nodes, fix errors, and iterate on flows directly from natural language.

Copilot Demo

Integrate Vector Databases

Upload documents to a vector store and let agents answer questions grounded in your specific content.

Knowledge Uploads and Retrieval Demo

Quickstart

Cloud-hosted: sim.ai


Self-hosted: NPM Package

npx simstudio

Then open http://localhost:3000 in your browser.

Note: Docker must be installed and running on your machine.

Options

| Flag | Description |
|------|-------------|
| -p, --port <port> | Port to run Sim on (default 3000) |
| --no-pull | Skip pulling latest Docker images |
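For example, to run on a custom port and skip pulling the latest images:

npx simstudio --port 3001 --no-pull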

Self-hosted: Docker Compose

# Clone the repository
git clone https://github.com/simstudioai/sim.git

# Navigate to the project directory
cd sim

# Start Sim
docker compose -f docker-compose.prod.yml up -d

Access the application at http://localhost:3000/

Using Local Models with Ollama

Run Sim with local AI models using Ollama - no external APIs required:

# Start with GPU support (automatically downloads gemma3:4b model)
docker compose -f docker-compose.ollama.yml --profile setup up -d

# For CPU-only systems:
docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d

Wait for the model to download, then visit http://localhost:3000. Add more models with:

docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b

Using an External Ollama Instance

If you already have Ollama running on your host machine (outside Docker), set OLLAMA_URL to use host.docker.internal instead of localhost:

# Docker Desktop (macOS/Windows)
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d

# Linux (add extra_hosts or use host IP)
docker compose -f docker-compose.prod.yml up -d  # Then set OLLAMA_URL to your host's IP

Why? When running inside Docker, localhost refers to the container itself, not your host machine. host.docker.internal is a special DNS name that resolves to the host.

For Linux users, you can either:

  • Use your host machine's actual IP address (e.g., http://192.168.1.100:11434)
  • Add extra_hosts: ["host.docker.internal:host-gateway"] to the simstudio service in your compose file (see the sketch below)
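
A minimal sketch of that compose override for Linux, assuming the service is named simstudio as in docker-compose.prod.yml:

services:
  simstudio:
    environment:
      - OLLAMA_URL=http://host.docker.internal:11434
    extra_hosts:
      - "host.docker.internal:host-gateway"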

Using vLLM

Sim also supports vLLM for self-hosted models with an OpenAI-compatible API:

# Set these environment variables
VLLM_BASE_URL=http://your-vllm-server:8000
VLLM_API_KEY=your_optional_api_key  # Only if your vLLM instance requires auth

When running with Docker, use host.docker.internal if vLLM is on your host machine (same as Ollama above).
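
For example, analogous to the Ollama command above (assuming vLLM listens on port 8000 on the host and that the compose file forwards VLLM_BASE_URL the same way it does OLLAMA_URL):

VLLM_BASE_URL=http://host.docker.internal:8000 docker compose -f docker-compose.prod.yml up -d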

Self-hosted: Dev Containers

  1. Open VS Code with the Remote - Containers extension
  2. Open the project and click "Reopen in Container" when prompted
  3. Run bun run dev:full in the terminal or use the sim-start alias
    • This starts both the main application and the realtime socket server

Self-hosted: Manual Setup

Requirements: Bun (used to install dependencies and run the dev servers) and PostgreSQL with the pgvector extension.

Note: Sim uses vector embeddings for AI features like knowledge bases and semantic search, which requires the pgvector PostgreSQL extension.

  1. Clone and install dependencies:
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
  2. Set up PostgreSQL with pgvector:

You need PostgreSQL with the vector extension for embedding support. Choose one option:

Option A: Using Docker (Recommended)

# Start PostgreSQL with pgvector extension
docker run --name simstudio-db \
  -e POSTGRES_PASSWORD=your_password \
  -e POSTGRES_DB=simstudio \
  -p 5432:5432 -d \
  pgvector/pgvector:pg17

Option B: Manual Installation
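
If you prefer not to use Docker, install PostgreSQL locally and add the pgvector extension; a rough sketch (package names and the database name are illustrative, and building pgvector requires the PostgreSQL development headers):

# Build and install pgvector from source
git clone https://github.com/pgvector/pgvector.git
cd pgvector
make
sudo make install

# Create the database and enable the extension
createdb simstudio
psql -d simstudio -c "CREATE EXTENSION IF NOT EXISTS vector;"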

  3. Set up environment:
cd apps/sim
cp .env.example .env  # Configure with required variables (DATABASE_URL, BETTER_AUTH_SECRET, BETTER_AUTH_URL)

Update your .env file with the database URL:

DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
  4. Set up the database:

First, configure the database package environment:

cd packages/db
cp .env.example .env 

Update your packages/db/.env file with the database URL:

DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"

Then run the migrations:

cd packages/db # Required so drizzle picks correct .env file
bunx drizzle-kit migrate --config=./drizzle.config.ts
  5. Start the development servers:

Recommended approach - run both servers together (from project root):

bun run dev:full

This starts both the main Next.js application and the realtime socket server required for full functionality.

Alternative - run servers separately:

Next.js app (from project root):

bun run dev

Realtime socket server (from apps/sim directory in a separate terminal):

cd apps/sim
bun run dev:sockets

Copilot API Keys

Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:

  • Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
  • Set the COPILOT_API_KEY environment variable in your self-hosted apps/sim/.env file to that value
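
For example, in apps/sim/.env:

COPILOT_API_KEY=your_copilot_api_key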

Environment Variables

Key environment variables for self-hosted deployments (see apps/sim/.env.example for full list):

| Variable | Required | Description |
|----------|----------|-------------|
| DATABASE_URL | Yes | PostgreSQL connection string with pgvector |
| BETTER_AUTH_SECRET | Yes | Auth secret (openssl rand -hex 32) |
| BETTER_AUTH_URL | Yes | Your app URL (e.g., http://localhost:3000) |
| NEXT_PUBLIC_APP_URL | Yes | Public app URL (same as above) |
| ENCRYPTION_KEY | Yes | Encryption key (openssl rand -hex 32) |
| OLLAMA_URL | No | Ollama server URL (default: http://localhost:11434) |
| VLLM_BASE_URL | No | vLLM server URL for self-hosted models |
| COPILOT_API_KEY | No | API key from sim.ai for Copilot features |
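
Putting the required variables together, a minimal apps/sim/.env sketch for a local setup (placeholder values only):

DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
BETTER_AUTH_SECRET=<output of openssl rand -hex 32>
BETTER_AUTH_URL=http://localhost:3000
NEXT_PUBLIC_APP_URL=http://localhost:3000
ENCRYPTION_KEY=<output of openssl rand -hex 32>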

Troubleshooting

Ollama models not showing in dropdown (Docker)

If you're running Ollama on your host machine and Sim in Docker, change OLLAMA_URL from localhost to host.docker.internal:

OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d

See Using an External Ollama Instance for details.

Database connection issues

Ensure PostgreSQL has the pgvector extension installed. When using Docker, wait for the database to be healthy before running migrations.
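
To confirm the extension is present (assuming psql access and the simstudio database name used in the setup above):

psql -d simstudio -c "SELECT extname FROM pg_extension WHERE extname = 'vector';"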

Port conflicts

If ports 3000, 3002, or 5432 are in use, configure alternatives:

# Custom ports
NEXT_PUBLIC_APP_URL=http://localhost:3100 POSTGRES_PORT=5433 docker compose up -d

Export Workflows as Standalone Services

Export any workflow as a self-contained Python/FastAPI service that can be deployed independently via Docker, Railway, or any container platform.

Quick Start

  1. Right-click a workflow in the sidebar
  2. Select "Export as Service"
  3. Extract the ZIP file
  4. Configure .env with your API keys
  5. Run the service:
# With Docker (recommended)
docker compose up -d

# Or run directly
pip install -r requirements.txt
uvicorn main:app --port 8080

# Execute the workflow
curl -X POST http://localhost:8080/execute \
  -H "Content-Type: application/json" \
  -d '{"your": "input"}'

Exported Files

| File | Description |
|------|-------------|
| workflow.json | Workflow definition (blocks, connections, configuration) |
| .env | Environment variables with your decrypted API keys |
| .env.example | Template without sensitive values (safe to commit) |
| main.py | FastAPI server with /execute, /health, /ready endpoints |
| executor.py | DAG execution engine |
| handlers/ | Block type handlers (agent, function, condition, etc.) |
| tools.py | Native file operation tools |
| resolver.py | Variable and input resolution |
| Dockerfile | Container configuration |
| docker-compose.yml | Docker Compose setup with volume mounts |
| requirements.txt | Python dependencies |
| README.md | Usage instructions |

Multi-Provider LLM Support

The exported service automatically detects and routes to the correct provider based on model name:

| Provider | Models | Environment Variable |
|----------|--------|----------------------|
| Anthropic | Claude 4 (Opus, Sonnet), Claude 3.5, Claude 3 | ANTHROPIC_API_KEY |
| OpenAI | GPT-4, GPT-4o, o1, o3 | OPENAI_API_KEY |
| Google | Gemini Pro, Gemini Flash | GOOGLE_API_KEY |

Provider is detected from the model name (e.g., claude-sonnet-4-20250514 → Anthropic, gpt-4o → OpenAI).

Supported Block Types

| Block Type | Description |
|------------|-------------|
| Start/Trigger | Entry point for workflow execution |
| Agent | LLM calls with tool support (MCP and native) |
| Function | Custom code (JavaScript auto-transpiled to Python) |
| Condition | Branching logic with safe expression evaluation |
| Router | Multi-path routing based on conditions |
| API | HTTP requests to external services |
| Loop | Iteration (for, forEach, while, doWhile) |
| Variables | State management across blocks |
| Response | Final output formatting |

File Operations

Agents can perform file operations in two ways:

Option 1: Local File Tools (WORKSPACE_DIR)

Set WORKSPACE_DIR in .env to enable sandboxed local file operations:

# In .env
WORKSPACE_DIR=./workspace

When enabled, agents automatically get access to these tools:

| Tool | Description |
|------|-------------|
| local_write_file | Write text content to a file |
| local_write_bytes | Write binary data (images, PDFs) as base64 |
| local_append_file | Append text to a file (creates it if it does not exist) |
| local_read_file | Read text content from a file |
| local_read_bytes | Read binary data as base64 |
| local_delete_file | Delete a file |
| local_list_directory | List files with metadata (size, modified time) |

Enable Command Execution (opt-in for security):

# In .env
WORKSPACE_DIR=./workspace
ENABLE_COMMAND_EXECUTION=true

When enabled, agents also get:

| Tool | Description |
|------|-------------|
| local_execute_command | Run commands like python script.py or node process.js |

Shell operators (|, >, &&, etc.) are blocked for security.

File Size Limits:

# Default: 100MB. Set custom limit in bytes:
MAX_FILE_SIZE=52428800  # 50MB

Security: All paths are sandboxed to WORKSPACE_DIR. Path traversal attacks (../) and symlink escapes are blocked. Agents cannot access files outside the workspace directory.

With Docker: The docker-compose.yml mounts ./output on your host to /app/workspace in the container:

docker compose up -d
# Files written by agents appear in ./output/ on your host machine

Option 2: MCP Filesystem Tools

If your workflow uses MCP filesystem servers, those tools work as configured. MCP servers handle file operations on their own systems—paths and permissions are determined by the MCP server's configuration.

Using Both Together

You can enable both options simultaneously. If WORKSPACE_DIR is set, agents will have access to:

  • Local file tools (local_write_file, etc.) for the sandboxed workspace
  • MCP tools for external filesystem servers

The LLM chooses the appropriate tool based on the tool descriptions and context.

Health Check with Workspace Status

The /health endpoint returns workspace configuration status:

{
  "status": "healthy",
  "workspace": {
    "enabled": true,
    "workspace_dir": "/app/workspace",
    "command_execution_enabled": false,
    "max_file_size": 104857600
  }
}

API Endpoints

The exported service provides these endpoints:

| Endpoint | Method | Description |
|----------|--------|-------------|
| /execute | POST | Execute the workflow with input data |
| /health | GET | Health check (returns {"status": "healthy"}) |
| /ready | GET | Readiness check |
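
For example, the health and readiness endpoints can be probed directly:

curl http://localhost:8080/health
curl http://localhost:8080/ready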

Example execution:

curl -X POST http://localhost:8080/execute \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Analyze this data",
    "data": {"key": "value"}
  }'

Docker Deployment

# Build and run with Docker Compose
docker compose up -d

# View logs
docker compose logs -f

# Stop
docker compose down

Manual Docker build:

docker build -t my-workflow .
docker run -p 8080:8080 --env-file .env my-workflow

Production Configuration

| Environment Variable | Default | Description |
|----------------------|---------|-------------|
| HOST | 0.0.0.0 | Server bind address |
| PORT | 8080 | Server port |
| WORKSPACE_DIR | (disabled) | Enable local file tools with sandbox path |
| ENABLE_COMMAND_EXECUTION | false | Allow agents to execute commands |
| MAX_FILE_SIZE | 104857600 (100MB) | Maximum file size in bytes |
| WORKFLOW_PATH | workflow.json | Path to workflow definition |
| RATE_LIMIT_REQUESTS | 60 | Max requests per rate limit window |
| RATE_LIMIT_WINDOW | 60 | Rate limit window in seconds |
| MAX_REQUEST_SIZE | 10485760 (10MB) | Maximum HTTP request body size |
| LOG_LEVEL | INFO | Logging level (DEBUG, INFO, WARNING, ERROR) |
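
For example, a production .env combining these settings might look like this (illustrative values only):

HOST=0.0.0.0
PORT=8080
WORKSPACE_DIR=/app/workspace
ENABLE_COMMAND_EXECUTION=false
MAX_FILE_SIZE=52428800  # 50MB
RATE_LIMIT_REQUESTS=120
RATE_LIMIT_WINDOW=60
LOG_LEVEL=INFO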

Security

The exported service implements multiple security measures:

  • No eval(): All condition evaluation uses safe AST-based parsing
  • No shell=True: Commands executed without shell to prevent injection
  • Sandboxed file operations: All paths restricted to WORKSPACE_DIR
  • Shell operator rejection: Pipes, redirects, and command chaining blocked
  • Path traversal protection: .. and symlink escapes blocked
  • File size limits: Configurable max file size (default 100MB)
  • Input validation: Request size limits (default 10MB)
  • Rate limiting: Configurable request rate limits (default 60/min)

MCP Tool Support

The exported service supports MCP (Model Context Protocol) tools via the official Python SDK. MCP servers must be running and accessible at their configured URLs.

MCP tools configured in your workflow are automatically available to agent blocks. The service connects to MCP servers via Streamable HTTP transport.

Export Validation

Before export, the service validates your workflow for compatibility:

  • Unsupported block types: Shows which blocks cannot be exported
  • Unsupported providers: Shows which LLM providers are not yet supported
  • Clear error messages: Displayed via notification system with actionable feedback

If validation fails, you'll see a notification explaining what needs to be changed.

Tech Stack

  • Application: Next.js (apps/sim)
  • Runtime and tooling: Bun
  • Database: PostgreSQL with the pgvector extension
  • ORM and migrations: Drizzle
  • Authentication: Better Auth
  • Realtime: dedicated socket server (bun run dev:sockets)

Contributing

We welcome contributions! Please see our Contributing Guide for details.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Made with ❤️ by the Sim Team