---
title: Model Providers
sidebarTitle: Model Providers
description: Supported AI model providers and how to configure them.
---
Milady supports 19 model providers out of the box. Providers are loaded as plugins and can be auto-enabled by setting the right environment variable, or explicitly configured in `milady.json`.
The table below is the complete list of supported providers, sourced from the auto-enable system in Milady's core.
Providers marked **bundled** ship with every Milady install. Others are installed on demand the first time they are needed — no manual plugin install required.

| Provider | Plugin Package | Env Variable(s) | Bundled | Notes |
|---|---|---|---|---|
| Anthropic | `@elizaos/plugin-anthropic` | `ANTHROPIC_API_KEY` or `CLAUDE_API_KEY` | No | Recommended. Claude models (Opus, Sonnet, Haiku). |
| OpenAI | `@elizaos/plugin-openai` | `OPENAI_API_KEY` | Yes | GPT-4o, o1, o3, GPT-4.1. |
| Google Gemini | `@elizaos/plugin-google-genai` | `GOOGLE_API_KEY` or `GOOGLE_GENERATIVE_AI_API_KEY` | No | Gemini Pro, Flash, Ultra. |
| Google Antigravity | `@elizaos/plugin-google-antigravity` | `GOOGLE_CLOUD_API_KEY` | No | Google Cloud / Vertex AI models. |
| Vercel AI Gateway | `@elizaos/plugin-vercel-ai-gateway` | `AI_GATEWAY_API_KEY` or `AIGATEWAY_API_KEY` | No | Unified gateway to multiple providers. |
| OpenRouter | `@elizaos/plugin-openrouter` | `OPENROUTER_API_KEY` | Yes | 100+ models behind one API key. |
| Groq | `@elizaos/plugin-groq` | `GROQ_API_KEY` | Yes | Ultra-fast inference (LPU). |
| xAI | `@elizaos/plugin-xai` | `XAI_API_KEY` or `GROK_API_KEY` | No | Grok models. |
| DeepSeek | `@elizaos/plugin-deepseek` | `DEEPSEEK_API_KEY` | No | Reasoning and code models. |
| Ollama | `@elizaos/plugin-ollama` | `OLLAMA_BASE_URL` | Yes | Local models. No API key needed. |
| Qwen | `@elizaos/plugin-qwen` | — | No | Alibaba's Qwen models. Configure via plugin entry. |
| MiniMax | `@elizaos/plugin-minimax` | — | No | MiniMax language models. Configure via plugin entry. |
| Together AI | `@elizaos/plugin-together` | `TOGETHER_API_KEY` | No | Open-source model hosting. |
| Mistral | `@elizaos/plugin-mistral` | `MISTRAL_API_KEY` | No | Mistral and Mixtral models. |
| Cohere | `@elizaos/plugin-cohere` | `COHERE_API_KEY` | No | Command R+ and embed models. |
| Perplexity | `@elizaos/plugin-perplexity` | `PERPLEXITY_API_KEY` | No | Search-augmented generation. |
| Zai | `@homunculuslabs/plugin-zai` | `ZAI_API_KEY` | No | Homunculus Labs Zai models. |
| Pi AI | `@elizaos/plugin-pi-ai` | `ELIZA_USE_PI_AI` | No | Inflection Pi conversational models. |
Milady uses an auto-enable system. You do not need to manually register provider plugins — just set the API key and Milady handles the rest.
- On startup, Milady scans your environment variables (from `~/.milady/.env`, the shell environment, or `milady.json`).
- If a recognized API key is found (e.g., `ANTHROPIC_API_KEY`), the corresponding provider plugin is automatically added to the plugin allowlist.
- The plugin is installed on demand if not already present (installed to `~/.milady/plugins/installed/`).
- The provider becomes available for model selection.
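The detection step amounts to a lookup over environment variables. The sketch below is illustrative only, using a small subset of the mapping from the table above; it is not Milady's actual implementation, and the demo key value is fake:

```shell
# Pretend one provider key is present, for demonstration:
export OPENAI_API_KEY="sk-demo"

# Scan a (partial) env-var -> provider mapping and collect matches.
detected=""
for pair in \
  "ANTHROPIC_API_KEY=anthropic" \
  "OPENAI_API_KEY=openai" \
  "GROQ_API_KEY=groq" \
  "OLLAMA_BASE_URL=ollama"
do
  var="${pair%%=*}"
  provider="${pair#*=}"
  eval "value=\${$var:-}"      # indirect lookup: is $var set and non-empty?
  if [ -n "$value" ]; then
    detected="$detected $provider"
  fi
done
echo "auto-enabled providers:$detected"
```

With only `OPENAI_API_KEY` set, this prints `auto-enabled providers: openai`.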
You can also enable providers manually in `~/.milady/milady.json` under the `plugins` key:

```json5
{
  plugins: {
    allow: ["anthropic", "openai", "ollama"],
  },
}
```

To disable an auto-enabled provider, set its entry to `enabled: false`:
```json5
{
  plugins: {
    entries: {
      anthropic: { enabled: false },
    },
  },
}
```

Providers can also be activated through auth profiles in your config:
```json5
{
  auth: {
    profiles: {
      main: {
        provider: "anthropic",
      },
      backup: {
        provider: "openrouter",
      },
    },
  },
}
```

Create or edit `~/.milady/.env`:
```bash
# Primary provider
ANTHROPIC_API_KEY=sk-ant-api03-...

# Additional providers
OPENAI_API_KEY=sk-...
OPENROUTER_API_KEY=sk-or-v1-...
GROQ_API_KEY=gsk_...
```

Add keys directly in `~/.milady/milady.json`:
```json5
{
  env: {
    ANTHROPIC_API_KEY: "sk-ant-api03-...",
    OPENAI_API_KEY: "sk-...",
  },
}
```

Alternatively, run the interactive wizard:

```bash
milady configure
```

This walks you through setting common environment variables, including your preferred model provider.
When using the config file approach, your API keys are stored in plaintext in `milady.json`. The `.env` file approach keeps secrets separate from configuration and is easier to exclude from version control.

```bash
milady models        # list configured model providers and their status
milady models add    # add a new provider interactively
milady models test   # test if your API keys are valid
milady configure     # interactive config wizard (includes provider setup)
```

In `~/.milady/milady.json`, specify the model using `provider/model` format:
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-sonnet-4-20250514",
      },
    },
  },
}
```

Or switch mid-session using the `/model` chat command:

```
/model openai/gpt-4o
```
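A model reference splits at the first slash: the part before it selects the provider, and the remainder is the provider-specific model name. This matters for OpenRouter-style names, which contain slashes themselves. A quick shell illustration, assuming a first-slash split:

```shell
ref="openrouter/anthropic/claude-sonnet-4-20250514"
provider="${ref%%/*}"   # strip from the first "/" to the end
model="${ref#*/}"       # strip up to and including the first "/"
echo "provider=$provider"
echo "model=$model"
```

This prints `provider=openrouter` and `model=anthropic/claude-sonnet-4-20250514`.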
Milady supports an ordered fallback chain. If the primary model fails (rate limit, outage, billing issue), the next model in the list is tried automatically.
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-sonnet-4-20250514",
        fallbacks: [
          "openai/gpt-4o",
          "groq/llama-3.3-70b-versatile",
        ],
      },
    },
  },
}
```

Fallbacks are tried in order. Each provider in the fallback chain must have its API key configured.
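Conceptually, the fallback chain is a first-success loop over the configured models. A hypothetical sketch follows; `call_model` is a stand-in that simulates a primary-provider outage, not a real Milady command:

```shell
# Stand-in for a model call: "fails" for anthropic/* so the
# chain visibly advances to the next entry.
call_model() {
  case "$1" in
    anthropic/*) return 1 ;;  # simulate a rate limit or outage
    *) return 0 ;;
  esac
}

chosen=""
for model in \
  "anthropic/claude-sonnet-4-20250514" \
  "openai/gpt-4o" \
  "groq/llama-3.3-70b-versatile"
do
  if call_model "$model"; then
    chosen="$model"
    break
  fi
  echo "falling back past $model" >&2
done
echo "using model: $chosen"
```

With the simulated outage above, this prints `using model: openai/gpt-4o`.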
You can have multiple providers active simultaneously. Every provider whose API key is detected will be auto-enabled and available for selection.
A common setup:
```bash
# ~/.milady/.env

# Primary — high-quality reasoning
ANTHROPIC_API_KEY=sk-ant-api03-...

# Fast inference for simple tasks
GROQ_API_KEY=gsk_...

# Fallback — wide model selection
OPENROUTER_API_KEY=sk-or-v1-...

# Local — offline / privacy-sensitive work
OLLAMA_BASE_URL=http://127.0.0.1:11434
```

With this setup, all four providers are available. You can set different models for different purposes:
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-sonnet-4-20250514",
        fallbacks: ["openrouter/anthropic/claude-sonnet-4-20250514"],
      },
      imageModel: {
        primary: "openai/gpt-4o",
      },
    },
  },
}
```

Ollama lets you run models locally with no API key and full privacy. It is the only provider that does not require an API key — just a running Ollama server.
Install Ollama:

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

Pull a model:

```bash
ollama pull llama3.3
```

Add to `~/.milady/.env`:

```bash
OLLAMA_BASE_URL=http://127.0.0.1:11434
```

In `~/.milady/milady.json`:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/llama3.3",
      },
    },
  },
}
```

Ollama auto-enables as soon as `OLLAMA_BASE_URL` is set. If you are running Ollama on the default port locally, just set:

```bash
OLLAMA_BASE_URL=http://127.0.0.1:11434
```

For remote Ollama instances, point to your server's address instead.
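Before pointing Milady at the server, you can verify it is reachable by querying Ollama's `/api/tags` endpoint, which lists the models you have pulled (requires `curl`):

```shell
# Probe the configured (or default) Ollama endpoint.
base="${OLLAMA_BASE_URL:-http://127.0.0.1:11434}"
if curl -fsS --max-time 5 "$base/api/tags" >/dev/null 2>&1; then
  echo "Ollama is reachable at $base"
else
  echo "no Ollama server responding at $base"
fi
```

If the probe fails, check that `ollama serve` is running and that the port in `OLLAMA_BASE_URL` matches.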
Every env variable that triggers auto-enable, grouped by provider:
| Env Variable | Provider Activated |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic |
| `CLAUDE_API_KEY` | Anthropic |
| `OPENAI_API_KEY` | OpenAI |
| `GOOGLE_API_KEY` | Google Gemini |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Google Gemini |
| `GOOGLE_CLOUD_API_KEY` | Google Antigravity (Vertex AI) |
| `AI_GATEWAY_API_KEY` | Vercel AI Gateway |
| `AIGATEWAY_API_KEY` | Vercel AI Gateway |
| `GROQ_API_KEY` | Groq |
| `XAI_API_KEY` | xAI |
| `GROK_API_KEY` | xAI |
| `OPENROUTER_API_KEY` | OpenRouter |
| `OLLAMA_BASE_URL` | Ollama |
| `DEEPSEEK_API_KEY` | DeepSeek |
| `TOGETHER_API_KEY` | Together AI |
| `MISTRAL_API_KEY` | Mistral |
| `COHERE_API_KEY` | Cohere |
| `PERPLEXITY_API_KEY` | Perplexity |
| `ZAI_API_KEY` | Zai |
| `ELIZAOS_CLOUD_API_KEY` | elizaOS Cloud |
| `ELIZAOS_CLOUD_ENABLED` | elizaOS Cloud |