Config file: `~/.picoclaw/config.json`
Security Configuration: For storing API keys, tokens, and other sensitive data, see the Security Configuration Guide.
You can override default paths using environment variables. This is useful for portable installations, containerized deployments, or running picoclaw as a system service. These variables are independent and control different paths.
| Variable | Description | Default Path |
|---|---|---|
| `PICOCLAW_CONFIG` | Overrides the path to the configuration file. This directly tells picoclaw which `config.json` to load, ignoring all other locations. | `~/.picoclaw/config.json` |
| `PICOCLAW_HOME` | Overrides the root directory for picoclaw data. This changes the default location of the workspace and other data directories. | `~/.picoclaw` |
Examples:

```sh
# Run picoclaw using a specific config file
# The workspace path will be read from within that config file
PICOCLAW_CONFIG=/etc/picoclaw/production.json picoclaw gateway

# Run picoclaw with all its data stored in /opt/picoclaw
# Config will be loaded from the default ~/.picoclaw/config.json
# Workspace will be created at /opt/picoclaw/workspace
PICOCLAW_HOME=/opt/picoclaw picoclaw agent

# Use both for a fully customized setup
PICOCLAW_HOME=/srv/picoclaw PICOCLAW_CONFIG=/srv/picoclaw/main.json picoclaw gateway
```

`gateway.log_level` controls Gateway log verbosity and is configurable in `config.json`.
```json
{
  "gateway": {
    "log_level": "warn"
  }
}
```

When omitted, the default is `warn`. Supported values: `debug`, `info`, `warn`, `error`, `fatal`.

You can also override this with the environment variable `PICOCLAW_LOG_LEVEL`.
PicoClaw stores data in your configured workspace (default: ~/.picoclaw/workspace):
```text
~/.picoclaw/workspace/
├── sessions/      # Conversation sessions and history
├── memory/        # Long-term memory (MEMORY.md)
├── state/         # Persistent state (last channel, etc.)
├── cron/          # Scheduled jobs database
├── skills/        # Custom skills
├── AGENT.md       # Agent behavior guide
├── HEARTBEAT.md   # Periodic task prompts (checked every 30 min)
├── IDENTITY.md    # Agent identity
├── SOUL.md        # Agent soul
└── USER.md        # User preferences
```
Note: Changes to `AGENT.md`, `SOUL.md`, `USER.md`, and `memory/MEMORY.md` are automatically detected at runtime via file modification time (mtime) tracking. You do not need to restart the gateway after editing these files — the agent picks up the new content on the next request.
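As a rough illustration, mtime-based detection amounts to comparing modification timestamps between turns. The sketch below is hypothetical shell, not PicoClaw's actual Go implementation:

```shell
# Hypothetical sketch of mtime-based reload detection (illustration only,
# not PicoClaw's actual code)
ws=$(mktemp -d)
printf 'v1\n' > "$ws/AGENT.md"
m1=$(stat -c %Y "$ws/AGENT.md" 2>/dev/null || stat -f %m "$ws/AGENT.md")
sleep 1
printf 'v2\n' > "$ws/AGENT.md"   # simulate an edit between requests
m2=$(stat -c %Y "$ws/AGENT.md" 2>/dev/null || stat -f %m "$ws/AGENT.md")
if [ "$m2" -gt "$m1" ]; then
  echo "AGENT.md changed: reload content on next request"
fi
```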
The evolution block controls PicoClaw's self-evolution runtime. When enabled, the agent records completed turns as learning records. In higher modes it can group repeated successful patterns, generate skill drafts, and optionally apply accepted drafts into workspace skills.
```json
{
  "evolution": {
    "enabled": false,
    "mode": "observe",
    "state_dir": "",
    "min_task_count": 2,
    "min_success_ratio": 0.7,
    "cold_path_trigger": "after_turn",
    "cold_path_times": []
  }
}
```

| Field | Default | Description |
|---|---|---|
| `enabled` | `false` | Enables learning-record capture for completed agent turns. Heartbeat turns are ignored. |
| `mode` | `observe` | `observe` records data only. `draft` can generate candidate skill drafts. `apply` can apply accepted drafts to workspace skills. |
| `state_dir` | `""` | Optional directory for evolution state. Leave empty to use the default under the workspace. |
| `min_task_count` | `2` | Minimum related task records required before a pattern is eligible for draft generation. |
| `min_success_ratio` | `0.7` | Minimum success ratio for a task cluster. Use a value greater than 0 and up to 1. |
| `cold_path_trigger` | `after_turn` | Runs draft generation `after_turn`, on a scheduled cadence, or disables automatic cold-path runs when set to `manual`. There is no user-facing manual trigger yet. Applies only in `draft` and `apply` modes. |
| `cold_path_times` | `[]` | Scheduled run times used when `cold_path_trigger` is `scheduled`, written as `HH:MM` strings. |
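For example, a cautious draft-mode setup that only runs the cold path overnight might look like this (the specific values are illustrative):

```json
{
  "evolution": {
    "enabled": true,
    "mode": "draft",
    "min_task_count": 3,
    "min_success_ratio": 0.8,
    "cold_path_trigger": "scheduled",
    "cold_path_times": ["03:00"]
  }
}
```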
Use observe first if you want to inspect learning records without generating skill changes. Use draft when you want PicoClaw to prepare reviewable improvements. Use apply only when you are comfortable letting accepted drafts update workspace skills.
picoclaw-launcher serves a browser UI that requires password sign-in first. On first run, open /launcher-setup to create the dashboard password. Later manual sign-ins use /launcher-login.
- Config file: Same directory as `config.json` (or the file pointed to by `PICOCLAW_CONFIG`). The launcher-specific file is `launcher-config.json`.
- Password storage: On supported platforms, the password is stored as a bcrypt hash in `launcher-auth.db`. On platforms where the SQLite password store is unavailable, the bcrypt hash is stored in `launcher-config.json`.
- Legacy migration: Older `launcher_token` values are migrated once into password login and removed from saved launcher config.
- Local auto-login: When the launcher auto-opens a local browser after startup, it uses a one-shot loopback-only bootstrap endpoint to set the session cookie automatically.
- Unsupported auth paths: URL token login (`?token=...`), `PICOCLAW_LAUNCHER_TOKEN`, and `Authorization: Bearer` dashboard auth are no longer supported.
- Sign-out: Use `POST /api/auth/logout` with `Content-Type: application/json` (body may be `{}`). Do not rely on a GET URL for logout (CSRF-safe pattern).
- Brute-force: `POST /api/auth/login` is rate-limited per client IP per minute (HTTP 429 when exceeded).
- Session lifetime: The HttpOnly session cookie lasts about 31 days by default, but sessions are invalidated when the launcher process restarts.
By default, skills are loaded from:
- `~/.picoclaw/workspace/skills` (workspace)
- `~/.picoclaw/skills` (global)
- `<binary-embedded-path>/skills` (builtin, set at build time)
For advanced/test setups, you can override the builtin skills root with:
```sh
export PICOCLAW_BUILTIN_SKILLS=/path/to/skills
```

Once skills are installed and MCP servers are configured, you can inspect and force them directly from a chat channel:

- `/list skills` shows the installed skill names available to the current agent.
- `/list mcp` shows configured MCP servers with enabled/deferred/connected status.
- `/show mcp <server>` shows the active tools exposed by a connected MCP server.
- `/use <skill> <message>` forces a specific skill for a single request.
- `/use <skill>` arms that skill for your next message in the same chat session.
- `/use clear` cancels a pending skill override created by `/use <skill>`.
- `/btw <question>` asks an immediate side question without changing the current session history.
- `/btw` is handled as a no-tool query and does not enter the normal tool-execution flow.
Examples:

```text
/list skills
/list mcp
/show mcp github
/use git explain how to squash the last 3 commits
/btw remind me what we already decided about the deploy plan
/use italiapersonalfinance
dammi le ultime news
```

(The last two lines arm the `italiapersonalfinance` skill, then send the Italian message "dammi le ultime news", meaning "give me the latest news".)
- Generic slash commands are executed through a single path in `pkg/agent/loop.go` via `commands.Executor`.
- Channel adapters no longer consume generic commands locally; they forward inbound text to the bus/agent path. Telegram still auto-registers supported commands such as `/start`, `/help`, `/show`, `/list`, `/use`, and `/btw` at startup.
- An unknown slash command (for example `/foo`) passes through to normal LLM processing.
- A registered but unsupported command on the current channel (for example `/show` on WhatsApp) returns an explicit user-facing error and stops further processing.
Session scope controls how much memory is shared between chats, users, threads, and spaces.
- Use `session.dimensions` for the global default.
- Use `session_dimensions` on a dispatch rule for one routed exception.
For step-by-step recipes and isolation patterns, see the Session Guide.
Routing is configured through agents.dispatch.rules.
Each rule matches against the normalized inbound context produced by channels. Rules are evaluated from top to bottom. The first matching rule wins. If no rule matches, PicoClaw falls back to the configured default agent.
Supported match fields:
- `channel`
- `account`
- `space`
- `chat`
- `topic`
- `sender`
- `mentioned`
Match values use the same scope vocabulary as the session system:
- `space`: `workspace:t001`, `guild:123456`
- `chat`: `direct:user123`, `group:-100123`, `channel:c123`
- `topic`: `topic:42`
- `sender`: a normalized sender identifier for the platform
Rules may optionally override the global `session.dimensions` value through `session_dimensions`. This allows routing and session allocation to stay aligned without reintroducing the old `bindings` or `dm_scope` formats.
Example:
```json
{
  "agents": {
    "list": [
      { "id": "main", "default": true },
      { "id": "support" },
      { "id": "sales" }
    ],
    "dispatch": {
      "rules": [
        {
          "name": "vip in support group",
          "agent": "sales",
          "when": {
            "channel": "telegram",
            "chat": "group:-1001234567890",
            "sender": "12345"
          },
          "session_dimensions": ["chat", "sender"]
        },
        {
          "name": "telegram support group",
          "agent": "support",
          "when": {
            "channel": "telegram",
            "chat": "group:-1001234567890"
          },
          "session_dimensions": ["chat"]
        }
      ]
    }
  },
  "session": {
    "dimensions": ["chat"]
  }
}
```

In the example above, the VIP rule must appear before the broader group rule. Because routing is strictly ordered, more specific rules should be placed earlier and broader fallback rules later.
For more complete routing and model-tier examples, see the Routing Guide.
Per-agent tool declarations live in AGENT.md frontmatter, not in config.json.
If tools is omitted from frontmatter, the agent gets the normal globally enabled tool set. If tools is present, PicoClaw registers only the listed runtime tools for that agent.
```markdown
---
name: Research Agent
description: Specialist for web research and in-depth analysis.
tools: [read_file, write_file, web_search, web_fetch, message]
skills: [deep-research]
mcpServers: [web-index]
---
You are the research agent.
```

Notes:
- This is an allowlist, not a preference hint.
- Tool names are matched against the runtime tool name 1:1.
- Use runtime tool names such as `web_search`, `web_fetch`, `spawn`, `subagent`, `send_file`.
- Tool declarations in `AGENT.md` are used by runtime/tooling, but they are not injected into the discovery prompt.
When an agent has spawnable peers and can call spawn, PicoClaw injects a structured agent registry into that agent's system prompt on every turn. No extra list_agents tool call is required.
This registry is intended to make delegation concrete and reliable, especially when using spawn with a target agent_id.
Each entry includes:
| Field | Meaning |
|---|---|
| `id` | Stable agent id |
| `name` | Agent identity name from `AGENT.md` frontmatter |
| `description` | Agent identity description from `AGENT.md` frontmatter |
Important behavior:
- The discovery section appears only when the current agent has the `spawn` tool and includes only peer agents it is permitted to spawn via `subagents.allow_agents`.
- The current agent and non-spawnable peers are omitted, so the model does not plan against unavailable agents.
- Discovery is intentionally lightweight. It gives the model only the identity it needs to choose a peer: `id`, `name`, and `description`.
- `config.json` remains the infrastructure layer: workspace, default agent selection, routing, and subagent permissions. Those permissions also gate discovery visibility.
- `AGENT.md` remains the identity layer. Runtime/tool code can still use its `tools`, `skills`, `mcpServers`, and `model` fields when delegation happens.
Example injected shape:
```json
{
  "agents": [
    {
      "id": "research",
      "name": "Research Agent",
      "description": "Specialist for long-form investigation and web work."
    }
  ]
}
```

In practice, this means a generalist agent can choose a peer based on its role description, then call `spawn` with the peer's `agent_id`. The runtime resolves the rest.
PicoClaw runs in a sandboxed environment by default. The agent can only access files and execute commands within the configured workspace.
```json
{
  "agents": {
    "defaults": {
      "workspace": "~/.picoclaw/workspace",
      "restrict_to_workspace": true
    }
  }
}
```

| Option | Default | Description |
|---|---|---|
| `workspace` | `~/.picoclaw/workspace` | Working directory for the agent |
| `restrict_to_workspace` | `true` | Restrict file/command access to the workspace |
When restrict_to_workspace: true, the following tools are sandboxed:
| Tool | Function | Restriction |
|---|---|---|
| `read_file` | Read files | Only files within workspace |
| `write_file` | Write files | Only files within workspace |
| `list_dir` | List directories | Only directories within workspace |
| `edit_file` | Edit files | Only files within workspace |
| `append_file` | Append to files | Only files within workspace |
| `exec` | Execute commands | Command paths must be within workspace |
Even with restrict_to_workspace: false, the exec tool blocks these dangerous commands:
- `rm -rf`, `del /f`, `rmdir /s` — Bulk deletion
- `format`, `mkfs`, `diskpart` — Disk formatting
- `dd if=` — Disk imaging
- Writing to `/dev/sd[a-z]` — Direct disk writes
- `shutdown`, `reboot`, `poweroff` — System shutdown
- Fork bomb `:(){ :|:& };:`
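Deny patterns behave like regex filters over the launched command line. A minimal shell sketch of that kind of check, with example patterns that are not PicoClaw's actual internals:

```shell
# Illustrative deny-pattern check; the regexes and function are examples,
# not PicoClaw's real guard implementation
deny='rm -rf|mkfs|dd if='
check() {
  if printf '%s' "$1" | grep -Eq "$deny"; then
    echo "blocked: $1"
  else
    echo "allowed: $1"
  fi
}
check 'rm -rf /tmp/scratch'   # matches the deny list
check 'ls -la'                # passes through
```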
| Config Key | Type | Default | Description |
|---|---|---|---|
| `tools.allow_read_paths` | string[] | `[]` | Additional paths allowed for reading outside the workspace |
| `tools.allow_write_paths` | string[] | `[]` | Additional paths allowed for writing outside the workspace |
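A config sketch that widens the sandbox for two extra read locations and one write location (the paths are placeholders):

```json
{
  "tools": {
    "allow_read_paths": ["~/notes", "/var/log/myapp"],
    "allow_write_paths": ["~/exports"]
  }
}
```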
read_file has two mutually exclusive implementations selected by config. PicoClaw registers exactly one of them at startup:
| Config Key | Type | Default | Description |
|---|---|---|---|
| `tools.read_file.enabled` | bool | `true` | Enables the `read_file` tool |
| `tools.read_file.mode` | string | `bytes` | Selects the `read_file` implementation: `bytes` or `lines` |
| `tools.read_file.max_read_file_size` | int | `65536` | Maximum bytes returned by `read_file` |
Optimized for arbitrary files and binary-safe pagination.
Parameters:
- `path` (required): File path
- `offset` (optional): Starting byte offset, default `0`
- `length` (optional): Maximum number of bytes to read, default `max_read_file_size`
Use bytes when:
- You may read binary files
- You want deterministic byte-range pagination
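The offset/length semantics can be illustrated with standard tools. This is only a model of the documented behavior, not the tool's implementation:

```shell
# offset=3, length=4 over the bytes "abcdefghij" should yield "defg"
f=$(mktemp)
printf 'abcdefghij' > "$f"
chunk=$(tail -c +4 "$f" | head -c 4)   # tail -c +4 starts at the 4th byte (0-based offset 3)
echo "$chunk"
```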
Text-oriented behavior, optimized for source files, markdown, logs, and configs. The tool reads sequentially by line and stops when the configured byte budget is reached.
Parameters:
- `path` (required): File path
- `start_line` (optional): Starting line number, 1-indexed and inclusive, default `1`
- `max_lines` (optional): Maximum number of lines to read; default = all remaining lines until EOF or the byte budget
Behavior notes:
- Binary-looking files are rejected with guidance to switch `read_file` to `mode = bytes`
- Extremely long single lines are truncated rather than skipped
Use mode = lines when:
- The agent mostly reads text files
- You want line-based pagination in prompts and tool calls
- You want cleaner chunks for code review, logs, and documentation
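The start_line/max_lines semantics map onto an inclusive, 1-indexed line range. Again, this is a model of the documented behavior, not the implementation:

```shell
# start_line=2, max_lines=3 should return lines 2..4
f=$(mktemp)
printf 'one\ntwo\nthree\nfour\nfive\n' > "$f"
chunk=$(sed -n '2,4p' "$f")
echo "$chunk"
```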
```json
{
  "tools": {
    "read_file": {
      "enabled": true,
      "mode": "lines",
      "max_read_file_size": 65536
    }
  }
}
```

| Config Key | Type | Default | Description |
|---|---|---|---|
| `tools.exec.allow_remote` | bool | `false` | Allow the exec tool from remote channels (Telegram/Discord etc.) |
| `tools.exec.enable_deny_patterns` | bool | `true` | Enable dangerous command interception |
| `tools.exec.custom_deny_patterns` | string[] | `[]` | Custom regex patterns to block |
| `tools.exec.custom_allow_patterns` | string[] | `[]` | Custom regex patterns to allow |
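A sketch combining these settings; the custom deny regex (blocking `curl ... | sh` pipelines) is an illustrative example, not a recommended canonical list:

```json
{
  "tools": {
    "exec": {
      "allow_remote": false,
      "enable_deny_patterns": true,
      "custom_deny_patterns": ["curl\\s+[^|]*\\|\\s*(ba)?sh"],
      "custom_allow_patterns": []
    }
  }
}
```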
> Security Note: Symlink protection is enabled by default — all file paths are resolved through `filepath.EvalSymlinks` before whitelist matching, preventing symlink escape attacks.
The exec safety guard only inspects the command line PicoClaw launches directly. It does not recursively inspect child processes spawned by allowed developer tools such as `make`, `go run`, `cargo`, `npm run`, or custom build scripts.
That means a top-level command can still compile or launch other binaries after it passes the initial guard check. In practice, treat build scripts, Makefiles, package scripts, and generated binaries as executable code that needs the same level of review as a direct shell command.
For higher-risk environments:
- Review build scripts before execution.
- Prefer approval/manual review for compile-and-run workflows.
- Run PicoClaw inside a container or VM if you need stronger isolation than the built-in guard provides.
```text
[ERROR] tool: Tool execution failed
{tool=exec, error=Command blocked by safety guard (path outside working dir)}

[ERROR] tool: Tool execution failed
{tool=exec, error=Command blocked by safety guard (dangerous pattern detected)}
```
If you need the agent to access paths outside the workspace:
Method 1: Config file
```json
{
  "agents": {
    "defaults": {
      "restrict_to_workspace": false
    }
  }
}
```

Method 2: Environment variable

```sh
export PICOCLAW_AGENTS_DEFAULTS_RESTRICT_TO_WORKSPACE=false
```
⚠️ Warning: Disabling this restriction allows the agent to access any path on your system. Use with caution in controlled environments only.
The restrict_to_workspace setting applies consistently across all execution paths:
| Execution Path | Security Boundary |
|---|---|
| Main Agent | restrict_to_workspace ✅ |
| Subagent / Spawn | Inherits same restriction ✅ |
| Heartbeat tasks | Inherits same restriction ✅ |
All paths share the same workspace restriction — there's no way to bypass the security boundary through subagents or scheduled tasks.
PicoClaw can perform periodic tasks automatically. Create a HEARTBEAT.md file in your workspace:
```markdown
# Periodic Tasks

- Check my email for important messages
- Review my calendar for upcoming events
- Check the weather forecast
```

The agent will read this file every 30 minutes (configurable) and execute any tasks using available tools.
For long-running tasks (web search, API calls), use the spawn tool to create a subagent:
```markdown
# Periodic Tasks

## Quick Tasks (respond directly)
- Report current time

## Long Tasks (use spawn for async)
- Search the web for AI news and summarize
- Check email and report important messages
```

Key behaviors:
| Feature | Description |
|---|---|
| spawn | Creates async subagent, doesn't block heartbeat |
| Independent context | Subagent has its own context, no session history |
| message tool | Subagent communicates with user directly via message tool |
| Non-blocking | After spawning, heartbeat continues to next task |
```text
Heartbeat triggers
        ↓
Agent reads HEARTBEAT.md
        ↓
For long task: spawn subagent
        ↓                          ↓
Continue to next task       Subagent works independently
        ↓                          ↓
All tasks done              Subagent uses "message" tool
        ↓                          ↓
Respond HEARTBEAT_OK        User receives result directly
```
The subagent has access to tools (message, web_search, etc.) and can communicate with the user independently without going through the main agent.
Configuration:
```json
{
  "heartbeat": {
    "enabled": true,
    "interval": 30
  }
}
```

| Option | Default | Description |
|---|---|---|
| `enabled` | `true` | Enable/disable heartbeat |
| `interval` | `30` | Check interval in minutes (min: 5) |
Environment variables:
- `PICOCLAW_HEARTBEAT_ENABLED=false` to disable
- `PICOCLAW_HEARTBEAT_INTERVAL=60` to change the interval
> Note: Groq provides free voice transcription via Whisper. If configured, audio messages from any channel will be automatically transcribed at the agent level.
| Provider | Purpose | Get API Key |
|---|---|---|
| `gemini` | LLM (Gemini direct) | aistudio.google.com |
| `zhipu` | LLM (Zhipu direct) | bigmodel.cn |
| `volcengine` | LLM (Volcengine direct) | volcengine.com |
| `openrouter` | LLM (recommended, access to all models) | openrouter.ai |
| `anthropic` | LLM (Claude direct) | console.anthropic.com |
| `openai` | LLM (GPT direct) | platform.openai.com |
| `deepseek` | LLM (DeepSeek direct) | platform.deepseek.com |
| `qwen` | LLM (Qwen direct) | dashscope.console.aliyun.com |
| `groq` | LLM + Voice transcription (Whisper) | console.groq.com |
| `cerebras` | LLM (Cerebras direct) | cerebras.ai |
| `vivgrid` | LLM (Vivgrid direct) | vivgrid.com |
> What's New? PicoClaw now prefers explicit `provider` + native `model` configuration (for example `"provider": "zhipu", "model": "glm-4.7"`). The legacy single-field `provider/model` form remains supported for compatibility when `provider` is omitted.
This design also enables multi-agent support with flexible provider selection:
- Different agents, different providers: Each agent can use its own LLM provider
- Model fallbacks: Configure primary and fallback models for resilience
- Load balancing: Distribute requests across multiple endpoints or keys
- Centralized configuration: Manage all providers in one place
- Model enable/disable: Use the `enabled` field to temporarily disable a model without removing its configuration
PicoClaw supports separating sensitive data (API keys, tokens, secrets) from your main configuration by storing them in a .security.yml file.
Key Benefits:
- Security: Sensitive data is never in your main config file
- Easy sharing: Share config.json without exposing API keys
- Version control: Add `.security.yml` to `.gitignore`
- Flexible deployment: Different environments can use different security files
Quick Setup:
- Create `~/.picoclaw/.security.yml` with your API keys:

```yaml
model_list:
  gpt-5.4:
    api_keys:
      - "sk-proj-your-actual-openai-key"
  claude-sonnet-4.6:
    api_keys:
      - "sk-ant-your-actual-anthropic-key"
channels:
  telegram:
    token: "your-telegram-bot-token"
web:
  brave:
    api_keys:
      - "BSAyour-brave-api-key"
  glm_search:
    api_key: "your-glm-search-api-key"
```

- Set proper permissions:

```sh
chmod 600 ~/.picoclaw/.security.yml
```

- Remove sensitive fields from `config.json` (recommended):
```json
{
  "model_list": [
    {
      "model_name": "gpt-5.4",
      "provider": "openai",
      "model": "gpt-5.4"
      // api_key loaded from .security.yml
    }
  ],
  "channel_list": {
    "telegram": {
      "enabled": true,
      "type": "telegram"
      // token loaded from .security.yml
    }
  }
}
```

How it works:

- Values from `.security.yml` are automatically mapped to config fields
- No special syntax needed — just omit sensitive fields from `config.json`
- If a field exists in both files, the `.security.yml` value takes precedence
- You can mix direct values in `config.json` with security values
For complete documentation, see ../security/security_configuration.md.
| Vendor | `provider` Value | Default API Base | Protocol | API Key |
|---|---|---|---|---|
| OpenAI | `openai` | https://api.openai.com/v1 | OpenAI | Get Key |
| Anthropic | `anthropic` | https://api.anthropic.com/v1 | Anthropic | Get Key |
| 智谱 AI (GLM) | `zhipu` | https://open.bigmodel.cn/api/paas/v4 | OpenAI | Get Key |
| DeepSeek | `deepseek` | https://api.deepseek.com/v1 | OpenAI | Get Key |
| Google Gemini | `gemini` | https://generativelanguage.googleapis.com/v1beta | Gemini | Get Key |
| Groq | `groq` | https://api.groq.com/openai/v1 | OpenAI | Get Key |
| Moonshot | `moonshot` | https://api.moonshot.cn/v1 | OpenAI | Get Key |
| 通义千问 (Qwen) | `qwen` | https://dashscope.aliyuncs.com/compatible-mode/v1 | OpenAI | Get Key |
| NVIDIA | `nvidia` | https://integrate.api.nvidia.com/v1 | OpenAI | Get Key |
| Ollama | `ollama` | http://localhost:11434/v1 | OpenAI | Local (no key needed) |
| LM Studio | `lmstudio` | http://localhost:1234/v1 | OpenAI | Optional (local default: no key) |
| OpenRouter | `openrouter` | https://openrouter.ai/api/v1 | OpenAI | Get Key |
| LiteLLM Proxy | `litellm` | http://localhost:4000/v1 | OpenAI | Your LiteLLM proxy key |
| VLLM | `vllm` | http://localhost:8000/v1 | OpenAI | Local |
| Cerebras | `cerebras` | https://api.cerebras.ai/v1 | OpenAI | Get Key |
| VolcEngine (Doubao) | `volcengine` | https://ark.cn-beijing.volces.com/api/v3 | OpenAI | Get Key |
| 神算云 | `shengsuanyun` | https://router.shengsuanyun.com/api/v1 | OpenAI | — |
| BytePlus | `byteplus` | https://ark.ap-southeast.bytepluses.com/api/v3 | OpenAI | Get Key |
| Vivgrid | `vivgrid` | https://api.vivgrid.com/v1 | OpenAI | Get Key |
| LongCat | `longcat` | https://api.longcat.chat/openai | OpenAI | Get Key |
| ModelScope (魔搭) | `modelscope` | https://api-inference.modelscope.cn/v1 | OpenAI | Get Token |
| Antigravity | `antigravity` | Google Cloud | Custom | OAuth only |
| GitHub Copilot | `github-copilot` | localhost:4321 | gRPC | — |
```json
{
  "model_list": [
    {
      "model_name": "ark-code-latest",
      "provider": "volcengine",
      "model": "ark-code-latest",
      "api_keys": ["sk-your-api-key"]
    },
    {
      "model_name": "gpt-5.4",
      "provider": "openai",
      "model": "gpt-5.4",
      "api_keys": ["sk-your-openai-key"]
    },
    {
      "model_name": "claude-sonnet-4.6",
      "provider": "anthropic",
      "model": "claude-sonnet-4.6",
      "api_keys": ["sk-ant-your-key"]
    },
    {
      "model_name": "glm-4.7",
      "provider": "zhipu",
      "model": "glm-4.7",
      "api_keys": ["your-zhipu-key"]
    }
  ],
  "agents": {
    "defaults": {
      "model": "gpt-5.4"
    }
  }
}
```

> Security Note: You can remove `api_keys` fields from your config and store them in `.security.yml` instead. See Security Configuration above for details.

> Note: The `enabled` field can be set to `false` to disable a model entry without removing it. When omitted, it defaults to `true` during migration for models that have API keys.
Resolution rules:
- Prefer explicit `"provider": "openai", "model": "gpt-5.4"`.
- If `provider` is set, PicoClaw sends `model` unchanged.
- If `provider` is omitted, PicoClaw treats the first `/` segment in `model` as the provider and everything after that first `/` as the runtime model ID.
- This means `"model": "openrouter/openai/gpt-5.4"` still works as a compatibility form and sends `openai/gpt-5.4` to OpenRouter.
> Tip: You can omit `api_key` fields and store them in `.security.yml` for better security. See Security Configuration.
OpenAI

```json
{
  "model_name": "gpt-5.4",
  "provider": "openai",
  "model": "gpt-5.4"
  // api_key: set in .security.yml
}
```

VolcEngine (Doubao)

```json
{
  "model_name": "ark-code-latest",
  "provider": "volcengine",
  "model": "ark-code-latest"
  // api_key: set in .security.yml
}
```

智谱 AI (GLM)

```json
{
  "model_name": "glm-4.7",
  "provider": "zhipu",
  "model": "glm-4.7"
  // api_key: set in .security.yml
}
```

DeepSeek

```json
{
  "model_name": "deepseek-chat",
  "provider": "deepseek",
  "model": "deepseek-chat"
  // api_key: set in .security.yml
}
```

Anthropic

```json
{
  "model_name": "claude-sonnet-4.6",
  "provider": "anthropic",
  "model": "claude-sonnet-4.6"
  // api_key: set in .security.yml
}
```

> Run `picoclaw auth login --provider anthropic` to paste your API token.
For direct Anthropic API access or custom endpoints that only support Anthropic's native message format:
```json
{
  "model_name": "claude-opus-4-6",
  "provider": "anthropic-messages",
  "model": "claude-opus-4-6",
  "api_keys": ["sk-ant-your-key"],
  "api_base": "https://api.anthropic.com"
}
```

> Use `anthropic-messages` when the endpoint requires Anthropic's native `/v1/messages` format instead of the OpenAI-compatible `/v1/chat/completions`.
Ollama (local)

```json
{
  "model_name": "llama3",
  "provider": "ollama",
  "model": "llama3"
}
```

LM Studio (local)

```json
{
  "model_name": "lmstudio-local",
  "provider": "lmstudio",
  "model": "openai/gpt-oss-20b"
}
```

`api_base` defaults to `http://localhost:1234/v1`. An API key is optional unless your LM Studio server enables authentication.
With an explicit `provider`, PicoClaw sends `openai/gpt-oss-20b` unchanged to LM Studio. The legacy compatibility form `"model": "lmstudio/openai/gpt-oss-20b"` still resolves to the same upstream model ID when `provider` is omitted.
Custom Proxy / LiteLLM

```json
{
  "model_name": "my-custom-model",
  "provider": "openai",
  "model": "custom-model",
  "api_base": "https://my-proxy.com/v1"
  // api_key: set in .security.yml
}
```

With an explicit `provider`, PicoClaw sends `model` unchanged. That means `"provider": "litellm", "model": "lite-gpt4"` sends `lite-gpt4`, while `"provider": "litellm", "model": "openai/gpt-4o"` sends `openai/gpt-4o`. The legacy compatibility forms `litellm/lite-gpt4` and `litellm/openai/gpt-4o` still resolve the same way when `provider` is omitted.
Configure multiple endpoints for the same model name — PicoClaw will automatically round-robin between them:
Option 1: Multiple API Keys in `.security.yml` (Recommended)

```yaml
# .security.yml
model_list:
  gpt-5.4:
    api_keys:
      - "sk-proj-key-1"
      - "sk-proj-key-2"
```

```json
// config.json
{
  "model_list": [
    {
      "model_name": "gpt-5.4",
      "provider": "openai",
      "model": "gpt-5.4",
      "api_base": "https://api.openai.com/v1"
      // api_keys loaded from .security.yml
    }
  ]
}
```

Option 2: Multiple Model Entries
```json
{
  "model_list": [
    {
      "model_name": "gpt-5.4",
      "provider": "openai",
      "model": "gpt-5.4",
      "api_base": "https://api1.example.com/v1",
      "api_keys": ["sk-key1"]
    },
    {
      "model_name": "gpt-5.4",
      "provider": "openai",
      "model": "gpt-5.4",
      "api_base": "https://api2.example.com/v1",
      "api_keys": ["sk-key2"]
    }
  ]
}
```

The old `providers` configuration is deprecated and has been removed in V2. Existing V0/V1 configs are auto-migrated. See `docs/migration/model-list-migration.md` for the full guide.
PicoClaw routes providers by protocol family:
- OpenAI-compatible: OpenRouter, Groq, Zhipu, vLLM-style endpoints, and most others.
- Gemini native: Google Gemini via the native `models/*:generateContent` and `models/*:streamGenerateContent` endpoints.
- Anthropic: Claude-native API behavior.
- Codex/OAuth: OpenAI OAuth/token authentication route.
This keeps the runtime lightweight while making new OpenAI-compatible backends mostly a config operation (`api_base` + `api_keys`).
Zhipu (legacy providers format)
```json
{
  "agents": {
    "defaults": {
      "workspace": "~/.picoclaw/workspace",
      "model": "glm-4.7",
      "max_tokens": 8192,
      "temperature": 0.7,
      "max_tool_iterations": 20,
      "max_parallel_turns": 1
    }
  },
  "providers": {
    "zhipu": {
      "api_key": "Your API Key",
      "api_base": "https://open.bigmodel.cn/api/paas/v4"
    }
  }
}
```

> Note: The `providers` format is deprecated. Use the new `model_list` format with `.security.yml` for better security.

> `max_parallel_turns`: Controls concurrent processing of messages from different sessions. `1` (default) = sequential; `>1` = parallel. Messages from the same session are always serialized. See the Steering docs for details.
Full config example
```json
{
  "agents": {
    "defaults": {
      "model_name": "claude-opus-4-5"
    }
  },
  "session": {
    "dm_scope": "per-channel-peer",
    "backlog_limit": 20
  },
  "channel_list": {
    "telegram": {
      "enabled": true,
      "type": "telegram",
      // token: set in .security.yml
      "allow_from": ["123456789"]
    }
  },
  "tools": {
    "web": {
      "duckduckgo": {
        "enabled": true,
        "max_results": 5
      }
    }
  },
  "heartbeat": {
    "enabled": true,
    "interval": 30
  }
}
```

> Note: Sensitive fields (`api_key`, `token`, etc.) can be omitted and stored in `.security.yml` for better security.
PicoClaw supports cron-style scheduled tasks via the cron tool. The agent can set, list, and cancel reminders or recurring jobs that trigger at specified times.
```json
{
  "tools": {
    "cron": {
      "enabled": true,
      "exec_timeout_minutes": 5
    }
  }
}
```

Scheduled tasks persist across restarts and are stored in `~/.picoclaw/workspace/cron/`.
| Topic | Description |
|---|---|
| Security Configuration | Store API keys and secrets in separate .security.yml file |
| Sensitive Data Filtering | Filter API keys and tokens from tool results before sending to LLM |
| Hook System | Event-driven hooks: observers, interceptors, approval hooks |
| Steering | Inject messages into a running agent loop between tool calls |
| SubTurn | Subagent coordination, concurrency control, lifecycle |
| Context Management | Context boundary detection, proactive budget check, compression |