TinyAGI supports running multiple AI agents simultaneously, each with its own isolated workspace, configuration, and conversation state. This allows you to have specialized agents for different tasks while maintaining complete isolation.
The agent management feature enables you to:
- Run multiple agents with different models, providers, and configurations
- Route messages to specific agents using @agent_id syntax
- Isolate conversations - each agent has its own workspace directory and conversation history
- Specialize agents - give each agent a custom system prompt and configuration
- Switch providers - mix Anthropic (Claude) and OpenAI (Codex) agents
- Customize workspaces - organize agents in your own workspace directory
┌─────────────────────────────────────────────────────────────┐
│ Message Channels │
│ (Discord, Telegram, WhatsApp) │
└────────────────────┬────────────────────────────────────────┘
│
│ User sends: "@coder fix the bug"
↓
┌─────────────────────────────────────────────────────────────┐
│ Queue Processor │
│ • Parses @agent_id routing prefix │
│ • Falls back to default agent if no prefix │
│ • Loads agent configuration from settings.json │
└────────────────────┬────────────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────────────────────┐
│ Agent Router │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ @coder │ │ @writer │ │ @assistant │ │
│ │ │ │ │ │ (default) │ │
│ │ Provider: │ │ Provider: │ │ Provider: │ │
│ │ anthropic │ │ openai │ │ anthropic │ │
│ │ Model: │ │ Model: │ │ Model: │ │
│ │ sonnet │ │ gpt-5.3-codex│ │ opus │ │
│ │ │ │ │ │ │ │
│ │ Workspace: │ │ Workspace: │ │ Workspace: │ │
│ │ ~/workspace/ │ │ ~/workspace/ │ │ ~/workspace/ │ │
│ │ coder/ │ │ writer/ │ │ assistant/ │ │
│ │ │ │ │ │ │ │
│ │ Config: │ │ Config: │ │ Config: │ │
│ │ .claude/ │ │ .claude/ │ │ .claude/ │ │
│ │ heartbeat.md │ │ heartbeat.md │ │ heartbeat.md │ │
│ │ AGENTS.md │ │ AGENTS.md │ │ AGENTS.md │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │
│ Shared: ~/.tinyagi/ (channels, files, logs, tinyagi.db) │
└─────────────────────────────────────────────────────────────┘
When a message arrives, the queue processor parses it for routing:
// User sends: "@coder fix the authentication bug"
const routing = parseAgentRouting(rawMessage, agents);
// Result: { agentId: "coder", message: "fix the authentication bug" }

Routing Rules:
- Message starts with @agent_id → Routes to that agent
- No prefix → Routes to default agent (user-named during setup)
- Agent not found → Falls back to default agent
- No agents configured → Uses legacy single-agent mode
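These rules can be sketched in TypeScript. This is a hypothetical illustration only; the real parseAgentRouting in src/queue-processor.ts may have a different signature and behavior.

```typescript
interface Routing {
  agentId: string;
  message: string;
}

// Hypothetical sketch of the routing rules; not the actual implementation.
function parseAgentRouting(
  raw: string,
  agents: Record<string, unknown>,
  defaultAgentId: string,
): Routing {
  const match = raw.match(/^@(\S+)\s+([\s\S]*)$/);
  if (match && match[1] in agents) {
    // "@coder fix the bug" -> route to coder, strip the prefix
    return { agentId: match[1], message: match[2] };
  }
  // No prefix, unknown agent, or bare text: fall back to the default agent
  return { agentId: defaultAgentId, message: raw };
}
```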
Each agent has its own configuration in .tinyagi/settings.json:
{
"workspace": {
"path": "/Users/me/tinyagi-workspace",
"name": "tinyagi-workspace"
},
"agents": {
"coder": {
"name": "Code Assistant",
"provider": "anthropic",
"model": "sonnet",
"working_directory": "/Users/me/tinyagi-workspace/coder",
"system_prompt": "You are a senior software engineer..."
},
"writer": {
"name": "Technical Writer",
"provider": "openai",
"model": "gpt-5.3-codex",
"working_directory": "/Users/me/tinyagi-workspace/writer",
"prompt_file": "/path/to/writer-prompt.md"
},
"assistant": {
"name": "Assistant",
"provider": "anthropic",
"model": "opus",
"working_directory": "/Users/me/tinyagi-workspace/assistant"
}
}
}

Note: The working_directory is automatically set to <workspace>/<agent_id>/ when creating agents via tinyagi agent add.
Each agent has its own isolated workspace directory with complete copies of configuration files:
Agent Workspaces:
~/tinyagi-workspace/ # Or custom workspace name
├── coder/
│ ├── .claude/ # Agent's own Claude config
│ │ ├── settings.json
│ │ ├── settings.local.json
│ │ └── hooks/
│ │ ├── session-start.sh
│ │ └── log-activity.sh
│ ├── heartbeat.md # Agent-specific heartbeat
│ ├── AGENTS.md # Agent-specific docs
│ └── reset_flag # Reset signal
├── writer/
│ ├── .claude/
│ ├── heartbeat.md
│ ├── AGENTS.md
│ └── reset_flag
└── assistant/ # User-named default agent
├── .claude/
├── heartbeat.md
├── AGENTS.md
└── reset_flag
Templates & Shared Resources:
Templates and shared resources are stored in ~/.tinyagi/:
~/.tinyagi/
├── .claude/ # Template: Copied to each new agent
├── heartbeat.md # Template: Copied to each new agent
├── AGENTS.md # Template: Copied to each new agent
├── channels/ # SHARED: Channel state (QR codes, ready flags)
├── files/ # SHARED: Uploaded files from all channels
├── logs/ # SHARED: Log files for all agents and channels
└── tinyagi.db # SHARED: SQLite message queue
How it works:
- Each agent runs CLI commands in its own workspace directory (~/workspace/<agent_id>/)
- Each agent gets its own copy of .claude/, heartbeat.md, and AGENTS.md from templates
- Agents can customize their settings, hooks, and documentation independently
- Conversation history is isolated per agent (managed by the Claude/Codex CLI)
- Reset flags allow resetting individual agent conversations
- File operations happen in the agent's directory
- Templates stored in ~/.tinyagi/ are copied when creating new agents
- Uploaded files, the SQLite queue, and logs are shared (common dependencies)
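The template-copy step could look something like the sketch below. The function name and the templateDir parameter are assumptions for illustration; the actual tinyagi agent add implementation may differ.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

// Items copied from the ~/.tinyagi/ template directory into each new agent
const TEMPLATE_ITEMS = [".claude", "heartbeat.md", "AGENTS.md"];

// Hypothetical sketch of per-agent workspace creation.
function createAgentWorkspace(
  workspacePath: string,
  agentId: string,
  templateDir: string = path.join(os.homedir(), ".tinyagi"),
): string {
  const agentDir = path.join(workspacePath, agentId);
  fs.mkdirSync(agentDir, { recursive: true });
  for (const item of TEMPLATE_ITEMS) {
    // cpSync handles both the .claude/ directory and single files
    fs.cpSync(path.join(templateDir, item), path.join(agentDir, item), {
      recursive: true,
    });
  }
  return agentDir;
}
```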
The queue processor calls the appropriate CLI based on provider:
Anthropic (Claude):
cd "$agent_working_directory" # e.g., ~/tinyagi-workspace/coder/
claude --dangerously-skip-permissions \
--model claude-sonnet-4-6 \
--system-prompt "Your custom prompt..." \
-c \ # Continue conversation
-p "User message here"

OpenAI (Codex):
cd "$agent_working_directory" # e.g., ~/tinyagi-workspace/coder/
codex exec resume --last \
--model gpt-5.3-codex \
--skip-git-repo-check \
--dangerously-bypass-approvals-and-sandbox \
--json \
"User message here"

During first-time setup (tinyagi setup), you'll be prompted for:
- Workspace name - Where to store agent directories
  - Default: tinyagi-workspace
  - Creates: ~/tinyagi-workspace/
- Default agent name - Name for your main assistant
  - Default: assistant
  - This replaces the hardcoded "default" agent
Interactive CLI:
tinyagi agent add

This walks you through:
- Agent ID (e.g., coder)
- Display name (e.g., Code Assistant)
- Provider (Anthropic or OpenAI)
- Model selection
- Optional system prompt
Working directory is automatically set to: <workspace>/<agent_id>/
Manual Configuration:
Edit .tinyagi/settings.json:
{
"workspace": {
"path": "/Users/me/tinyagi-workspace",
"name": "tinyagi-workspace"
},
"agents": {
"researcher": {
"name": "Research Assistant",
"provider": "anthropic",
"model": "opus",
"working_directory": "/Users/me/tinyagi-workspace/researcher",
"system_prompt": "You are a research assistant specialized in academic literature review and data analysis."
}
}
}

| Field | Required | Description |
|---|---|---|
| name | Yes | Human-readable display name |
| provider | Yes | anthropic, openai, opencode, or custom:<provider_id> |
| model | Yes | Model identifier (e.g., sonnet, opus, gpt-5.3-codex) |
| working_directory | Yes | Directory where the agent operates (auto-set to <workspace>/<agent_id>/) |
| system_prompt | No | Inline system prompt text |
| prompt_file | No | Path to file containing system prompt |
Note:
- If both prompt_file and system_prompt are provided, prompt_file takes precedence
- The working_directory is automatically set to <workspace>/<agent_id>/ when creating agents
- Each agent gets its own isolated directory with copies of templates from ~/.tinyagi/
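The prompt precedence rule could be sketched as follows. The helper name resolveSystemPrompt is an assumption for illustration, not a documented TinyAGI function.

```typescript
import * as fs from "node:fs";

interface AgentPromptConfig {
  system_prompt?: string;
  prompt_file?: string;
}

// Hypothetical sketch: prompt_file takes precedence over system_prompt.
function resolveSystemPrompt(agent: AgentPromptConfig): string | undefined {
  if (agent.prompt_file) {
    return fs.readFileSync(agent.prompt_file, "utf8");
  }
  return agent.system_prompt;
}
```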
In any messaging channel (Discord, Telegram, WhatsApp):
@coder fix the authentication bug in login.ts
@writer document the new API endpoints
@researcher find papers on transformer architectures
help me with this (goes to default agent - "assistant" by default)
From chat:
/agents
From CLI:
tinyagi agent list

Output:
Configured Agents
==================
@coder - Code Assistant
Provider: anthropic/sonnet
Directory: /Users/me/tinyagi-workspace/coder
@writer - Technical Writer
Provider: openai/gpt-5.3-codex
Directory: /Users/me/tinyagi-workspace/writer
Prompt: /path/to/writer-prompt.md
@assistant - Assistant
Provider: anthropic/opus
Directory: /Users/me/tinyagi-workspace/assistant
Show agent details:
tinyagi agent show coder

Reset agent conversation:
tinyagi agent reset coder

From chat:
@coder /reset

Remove agent:
tinyagi agent remove coder

Have different agents for different projects:
{
"workspace": {
"path": "/Users/me/my-workspace"
},
"agents": {
"frontend": {
"working_directory": "/Users/me/my-workspace/frontend",
"system_prompt": "You are a React and TypeScript expert..."
},
"backend": {
"working_directory": "/Users/me/my-workspace/backend",
"system_prompt": "You are a Node.js backend engineer..."
}
}
}

Usage:
@frontend add a loading spinner to the dashboard
@backend optimize the database queries in user service
Assign different roles to agents:
{
"agents": {
"reviewer": {
"system_prompt": "You are a code reviewer. Focus on security, performance, and best practices."
},
"debugger": {
"system_prompt": "You are a debugging expert. Help identify and fix bugs systematically."
},
"architect": {
"model": "opus",
"system_prompt": "You are a software architect. Design scalable, maintainable systems."
}
}
}

Use different AI providers for different tasks:
{
"custom_providers": {
"openrouter": {
"name": "OpenRouter",
"harness": "claude",
"base_url": "https://openrouter.ai/api/v1",
"api_key": "sk-or-..."
}
},
"agents": {
"quick": {
"provider": "anthropic",
"model": "sonnet",
"system_prompt": "Fast, efficient responses for quick questions."
},
"deep": {
"provider": "anthropic",
"model": "opus",
"system_prompt": "Thorough, detailed analysis for complex problems."
},
"codegen": {
"provider": "openai",
"model": "gpt-5.3-codex",
"system_prompt": "Code generation specialist."
},
"proxy-agent": {
"provider": "custom:openrouter",
"model": "claude-sonnet-4-6",
"system_prompt": "Uses a custom API endpoint."
}
}
}

You can pre-route messages from channel clients by setting the agent field:
// In channel client (discord-client.ts, etc.)
const queueData: QueueData = {
channel: 'discord',
message: userMessage,
agent: 'coder', // Pre-route to specific agent
// ...
};

If no agents are configured, TinyAGI automatically creates a default agent using the legacy models section:
{
"models": {
"provider": "anthropic",
"anthropic": {
"model": "sonnet"
}
}
}

This ensures backward compatibility with older configurations.
The tinyagi model and tinyagi provider --model commands update both the global default and propagate to all matching agents:
- tinyagi model sonnet - updates .models.anthropic.model and sets model = "sonnet" on every agent with provider == "anthropic".
- tinyagi model gpt-5.3-codex - updates .models.openai.model and sets model = "gpt-5.3-codex" on every agent with provider == "openai".
- tinyagi provider openai --model gpt-5.3-codex - switches the global provider, and updates all agents that were on the old provider to the new provider and model.
- tinyagi provider anthropic (no --model) - only switches the global default; agents are not changed.
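The propagation behavior could be sketched like this. Field names follow the settings.json examples in this document, but the function itself is hypothetical, not the actual CLI code.

```typescript
interface AgentConfig {
  provider: string;
  model: string;
}

interface Settings {
  models: Record<string, { model: string }>;
  agents: Record<string, AgentConfig>;
}

// Hypothetical sketch of `tinyagi model <model>` propagation:
// update the global default, then every agent on the same provider.
function setGlobalModel(settings: Settings, provider: string, model: string): void {
  settings.models[provider] = { model };
  for (const agent of Object.values(settings.agents)) {
    if (agent.provider === provider) {
      agent.model = model;
    }
  }
}
```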
To change a single agent's provider/model without affecting others, use:
tinyagi agent provider <agent_id> <provider> --model <model>

Running tinyagi model or tinyagi provider with no arguments shows the global default followed by a per-agent breakdown.
Per-agent reset: <workspace>/<agent_id>/reset_flag - resets a specific agent's conversation.
Reset flags are automatically cleaned up after use.
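The check-and-clean-up cycle could be sketched as below; the helper name is an assumption, not the actual queue-processor code.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical sketch of per-agent reset handling: the flag file is
// checked before each run and removed ("cleaned up") once seen.
function consumeResetFlag(agentDir: string): boolean {
  const flag = path.join(agentDir, "reset_flag");
  if (!fs.existsSync(flag)) return false;
  fs.unlinkSync(flag);
  return true;
}
```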
Reset one or more agents:
tinyagi reset coder
tinyagi reset coder researcher

You can create multiple workspaces for different purposes:
{
"workspace": {
"path": "/Users/me/work-projects",
"name": "work-projects"
}
}

Or even use cloud-synced directories:
{
"workspace": {
"path": "/Users/me/Dropbox/tinyagi-workspace",
"name": "tinyagi-workspace"
}
}

Files uploaded through messaging channels are automatically available to all agents:
User uploads image.png via Telegram
→ Saved to ~/.tinyagi/files/telegram_123456_image.png
→ Message includes: [file: /path/to/image.png]
→ Routed to agent
→ Agent can read/process the file
Agents can also send files back:
// Agent response includes:
response = "Here's the diagram [send_file: /path/to/diagram.png]";
// File is extracted and sent back through channel

For detailed troubleshooting of agent-related issues, see TROUBLESHOOTING.md.
Quick reference:
- Agent not found → Check: tinyagi agent list
- Wrong agent responding → Verify routing: @agent_id message (with space)
- Conversation not resetting → Send message after: tinyagi agent reset <id>
- CLI not found → Install Claude Code or Codex CLI
- Workspace issues → Check: cat .tinyagi/settings.json | jq '.workspace'
- Templates not copying → Run: tinyagi setup
Queue Processor (src/queue-processor.ts):
- getSettings() - Loads settings from JSON
- getAgents() - Returns agent configurations (checks .agents)
- parseAgentRouting() - Parses the @agent_id prefix
- processMessage() - Main routing and execution logic
Message Interfaces:
interface MessageData {
agent?: string; // Pre-routed agent ID
files?: string[]; // Uploaded file paths
// ...
}
interface ResponseData {
agent?: string; // Which agent handled this
files?: string[]; // Files to send back
// ...
}

Templates:
~/.tinyagi/
├── .claude/ # Copied to new agents
├── heartbeat.md # Copied to new agents
└── AGENTS.md # Copied to new agents
Agent State:
<workspace>/
└── {agent_id}/
├── .claude/ # Agent's own config
├── heartbeat.md # Agent's own monitoring
├── AGENTS.md # Agent's own docs
└── reset_flag # Touch to reset conversation
State is managed by the CLI itself (claude or codex) through the -c flag and working directory isolation.
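The per-agent invocation described earlier (change into the agent's working directory, then run the CLI with -c to continue its conversation) could be sketched like this. The function and its cli parameter are assumptions for illustration; the real queue processor may build the command differently.

```typescript
import { spawnSync } from "node:child_process";

// Hypothetical sketch of running the Claude harness for one agent.
// Flags mirror the Anthropic example earlier in this document.
function runClaudeAgent(
  workingDirectory: string,
  model: string,
  message: string,
  systemPrompt?: string,
  cli: string = "claude", // swappable for testing
): string {
  const args = ["--dangerously-skip-permissions", "--model", model];
  if (systemPrompt) {
    args.push("--system-prompt", systemPrompt);
  }
  args.push("-c", "-p", message); // -c continues the agent's conversation
  const result = spawnSync(cli, args, {
    cwd: workingDirectory, // conversation state is isolated per directory
    encoding: "utf8",
  });
  return result.stdout ?? "";
}
```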
Custom providers let you use any OpenAI or Anthropic-compatible API endpoint (e.g., proxy servers, self-hosted models, OpenRouter) with the existing CLI harnesses.
Custom providers are defined in .tinyagi/settings.json:
{
"custom_providers": {
"my-proxy": {
"name": "My Proxy",
"harness": "claude",
"base_url": "https://proxy.example.com/v1",
"api_key": "sk-...",
"model": "claude-sonnet-4-6"
}
}
}

| Field | Required | Description |
|---|---|---|
| name | Yes | Human-readable display name |
| harness | Yes | Which CLI to use: claude or codex |
| base_url | Yes | API endpoint URL |
| api_key | Yes | API key for authentication |
| model | No | Default model name to pass to the CLI |
CLI:
tinyagi provider list # List custom providers
tinyagi provider add # Add interactively
tinyagi provider remove my-proxy # Remove a custom provider

API:
# List
curl http://localhost:3777/api/custom-providers
# Create/update
curl -X PUT http://localhost:3777/api/custom-providers/my-proxy \
-H 'Content-Type: application/json' \
-d '{"name":"My Proxy","harness":"claude","base_url":"https://proxy.example.com/v1","api_key":"sk-...","model":"claude-sonnet-4-6"}'
# Delete
curl -X DELETE http://localhost:3777/api/custom-providers/my-proxy

Use the custom:<provider_id> prefix as the agent's provider:
# When adding a new agent (option 4 in provider selection)
tinyagi agent add
# Switch an existing agent
tinyagi agent provider coder custom:my-proxy
tinyagi agent provider coder custom:my-proxy --model gpt-4o

Or edit settings.json directly:
{
"agents": {
"coder": {
"name": "Code Assistant",
"provider": "custom:my-proxy",
"model": "claude-sonnet-4-6",
"working_directory": "/Users/me/workspace/coder"
}
}
}

When an agent with provider: "custom:<id>" is invoked:
- The custom provider config is looked up from settings.custom_providers
- The harness field determines which CLI to run (claude or codex)
- Environment variables are set based on the harness:
  - claude harness: ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN, ANTHROPIC_API_KEY=""
  - codex harness: OPENAI_API_KEY, OPENAI_BASE_URL
- The CLI is invoked with the model name passed through (no alias resolution)
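The environment selection could be sketched as follows; this is a hypothetical illustration using the variables listed above, not the actual TinyAGI code.

```typescript
interface CustomProvider {
  harness: "claude" | "codex";
  base_url: string;
  api_key: string;
}

// Hypothetical sketch: choose env vars based on the provider's harness.
function envForCustomProvider(p: CustomProvider): Record<string, string> {
  if (p.harness === "claude") {
    return {
      ANTHROPIC_BASE_URL: p.base_url,
      ANTHROPIC_AUTH_TOKEN: p.api_key,
      ANTHROPIC_API_KEY: "", // cleared so the auth token is used instead
    };
  }
  return {
    OPENAI_BASE_URL: p.base_url,
    OPENAI_API_KEY: p.api_key,
  };
}
```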
Teams are named groups of agents that can collaborate by forwarding messages to each other via @teammate mentions in their responses.
1. User sends @dev fix the auth bug (where dev is a team with leader coder)
2. Queue processor resolves @dev → team → leader agent @coder
3. Coder's AI responds: "I fixed the bug in auth.ts. @reviewer please check my changes"
4. Queue processor scans the response and sees @reviewer is a teammate in team dev
5. Queue processor calls reviewer with coder's response (prefixed with context)
6. Reviewer responds: "Changes look good, approved!"
7. Combined response sent to user: @coder: ... \n---\n @reviewer: ...
The chain ends naturally when an agent responds without mentioning a teammate.
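The teammate-mention scan that drives the chain could be sketched like this; the function name is an assumption and the real queue-processor logic may differ.

```typescript
// Hypothetical sketch: find the first teammate mentioned in a response.
// Returning undefined ends the forwarding chain.
function findTeammateMention(
  response: string,
  teammates: string[],
  currentAgent: string,
): string | undefined {
  for (const id of teammates) {
    // Skip self-mentions; only other teammates continue the chain
    if (id !== currentAgent && response.includes(`@${id}`)) {
      return id;
    }
  }
  return undefined;
}
```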
Teams are stored in ~/.tinyagi/settings.json:
{
"teams": {
"dev": {
"name": "Development Team",
"agents": ["coder", "reviewer"],
"leader_agent": "coder"
}
}
}

| Field | Description |
|---|---|
| name | Human-readable display name |
| agents | Array of agent IDs (must exist in .agents) |
| leader_agent | Agent that receives @team_id messages first |
Team IDs share the @ routing namespace with agents, so no collisions are allowed.
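A collision check for the shared namespace could look like this sketch; the helper is hypothetical, not a documented TinyAGI function.

```typescript
interface SettingsIds {
  agents: Record<string, unknown>;
  teams?: Record<string, unknown>;
}

// Hypothetical sketch: a new team ID must not collide with any agent
// ID (or existing team ID), since both share the @ routing namespace.
function validateTeamId(teamId: string, settings: SettingsIds): void {
  if (teamId in settings.agents) {
    throw new Error(`Team ID "${teamId}" collides with an existing agent ID`);
  }
  if (settings.teams && teamId in settings.teams) {
    throw new Error(`Team ID "${teamId}" already exists`);
  }
}
```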
CLI Commands:
tinyagi team list # List all teams
tinyagi team add # Add a new team (interactive)
tinyagi team show dev # Show team configuration
tinyagi team remove dev # Remove a team
tinyagi team add-agent dev reviewer # Add an existing agent to a team
tinyagi team remove-agent dev reviewer # Remove an agent from a team

In-chat Commands:
/team # List all teams
@dev fix the auth bug # Route to team leader
@coder fix the auth bug # Route directly to agent (team context still active)
When you message an agent directly (e.g., @coder fix this), team context is automatically activated if the agent belongs to a team. This means teammate mentions in the agent's response will still be followed.
When an agent is added to a team, its AGENTS.md file is automatically updated with a team collaboration section listing teammates and instructions for using @teammate_id mentions.
Potential features for agent management:
- Shared context: Optional shared memory between agents
- Agent scheduling: Time-based or event-based agent activation
- Web dashboard: Visual agent management and monitoring
- Agent analytics: Track usage, performance per agent
- Workspace templates: Pre-configured agent workspaces for common use cases
- Agent migration: Export/import agent configurations
- README.md - Main project documentation
- Setup wizard: tinyagi setup
- Agent CLI: tinyagi agent --help