Your universal API proxy – one endpoint, 36+ providers, zero downtime.
Chat Completions • Embeddings • Image Generation • Audio • Reranking • 100% TypeScript
Connect any AI-powered IDE or CLI tool through OmniRoute – a free API gateway for unlimited coding.
OpenClaw – 205K | NanoBot – 20.9K | PicoClaw – 14.6K | ZeroClaw – 9.9K | IronClaw – 2.1K
OpenCode – 106K | Codex CLI – 60.8K | Claude Code – 67.3K | Gemini CLI – 94.7K | Kilo Code – 15.5K
💡 All agents connect via http://localhost:20128/v1 or http://cloud.omniroute.online/v1 – one config, unlimited models and quota
Website • Quick Start • Features • Docs • Pricing
Available in: English | Português | Español | Русский | 中文 | Deutsch | Français | Italiano
Stop wasting money and hitting limits:
Subscription quota expires unused every month
Rate limits stop you mid-coding
Expensive APIs ($20-50/month per provider)
Manual switching between providers
OmniRoute solves this:
- ✅ Maximize subscriptions – track quota, use every bit before reset
- ✅ Auto fallback – Subscription → API Key → Cheap → Free, zero downtime
- ✅ Multi-account – round-robin between accounts per provider
- ✅ Universal – works with Claude Code, Codex, Gemini CLI, Cursor, Cline, OpenClaw, any CLI tool
```
┌─────────────┐
│  Your CLI   │  (Claude Code, Codex, Gemini CLI, OpenClaw, Cursor, Cline...)
│    Tool     │
└──────┬──────┘
       │ http://localhost:20128/v1
       ▼
┌──────────────────────────────────────────┐
│        OmniRoute (Smart Router)          │
│  • Format translation (OpenAI ↔ Claude)  │
│  • Quota tracking + Embeddings + Images  │
│  • Auto token refresh                    │
└──────┬───────────────────────────────────┘
       │
       ├── [Tier 1: SUBSCRIPTION] Claude Code, Codex, Gemini CLI
       │       ↓ quota exhausted
       ├── [Tier 2: API KEY] DeepSeek, Groq, xAI, Mistral, NVIDIA NIM, etc.
       │       ↓ budget limit
       ├── [Tier 3: CHEAP] GLM ($0.6/1M), MiniMax ($0.2/1M)
       │       ↓ budget limit
       └── [Tier 4: FREE] iFlow, Qwen, Kiro (unlimited)
```
Result: Never stop coding, minimal cost
1. Install globally:

```shell
npm install -g omniroute
omniroute
```

Dashboard opens at http://localhost:20128
| Command | Description |
|---|---|
| `omniroute` | Start server (default port 20128) |
| `omniroute --port 3000` | Use custom port |
| `omniroute --no-open` | Don't auto-open browser |
| `omniroute --help` | Show help |
2. Connect a FREE provider:
Dashboard → Providers → Connect Claude Code or Antigravity → OAuth login → Done!
3. Use in your CLI tool:
Claude Code/Codex/Gemini CLI/OpenClaw/Cursor/Cline Settings:
Endpoint: http://localhost:20128/v1
API Key: [copy from dashboard]
Model: if/kimi-k2-thinking
That's it! Start coding with FREE AI models.
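Any OpenAI-compatible client can talk to this endpoint; here is a minimal curl sketch (the API key is a placeholder, copy the real one from the dashboard):

```shell
# Endpoint and payload for a basic chat completion through OmniRoute.
BASE_URL="http://localhost:20128/v1"
PAYLOAD='{"model":"if/kimi-k2-thinking","messages":[{"role":"user","content":"Say hello"}]}'

# Needs a running OmniRoute instance; `|| true` keeps the sketch harmless without one.
curl -s "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer sk_example_key" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true
```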
Alternative – run from source:

```shell
cp .env.example .env
npm install
PORT=20128 NEXT_PUBLIC_BASE_URL=http://localhost:20128 npm run dev
```

OmniRoute is available as a public Docker image on Docker Hub.
Quick run:

```shell
docker run -d \
  --name omniroute \
  --restart unless-stopped \
  -p 20128:20128 \
  -v omniroute-data:/app/data \
  diegosouzapw/omniroute:latest
```

With environment file:

```shell
# Copy and edit .env first
cp .env.example .env

docker run -d \
  --name omniroute \
  --restart unless-stopped \
  --env-file .env \
  -p 20128:20128 \
  -v omniroute-data:/app/data \
  diegosouzapw/omniroute:latest
```

Using Docker Compose:

```shell
# Base profile (no CLI tools)
docker compose --profile base up -d

# CLI profile (Claude Code, Codex, OpenClaw built-in)
docker compose --profile cli up -d
```

| Image | Tag | Size | Description |
|---|---|---|---|
| diegosouzapw/omniroute | latest | ~250MB | Latest stable release |
| diegosouzapw/omniroute | 1.0.2 | ~250MB | Current version |
| Tier | Provider | Cost | Quota Reset | Best For |
|---|---|---|---|---|
| SUBSCRIPTION | Claude Code (Pro) | $20/mo | 5h + weekly | Already subscribed |
| | Codex (Plus/Pro) | $20-200/mo | 5h + weekly | OpenAI users |
| | Gemini CLI | FREE | 180K/mo + 1K/day | Everyone! |
| | GitHub Copilot | $10-19/mo | Monthly | GitHub users |
| API KEY | NVIDIA NIM | FREE (1000 credits) | One-time | Free tier testing |
| | DeepSeek | Pay-per-use | None | Best price/quality |
| | Groq | Free tier + paid | Rate limited | Ultra-fast inference |
| | xAI (Grok) | Pay-per-use | None | Grok models |
| | Mistral | Free tier + paid | Rate limited | European AI |
| | OpenRouter | Pay-per-use | None | 100+ models |
| CHEAP | GLM-4.7 | $0.6/1M | Daily 10AM | Budget backup |
| | MiniMax M2.1 | $0.2/1M | 5-hour rolling | Cheapest option |
| | Kimi K2 | $9/mo flat | 10M tokens/mo | Predictable cost |
| FREE | iFlow | $0 | Unlimited | 8 models free |
| | Qwen | $0 | Unlimited | 3 models free |
| | Kiro | $0 | Unlimited | Claude free |
💡 Pro Tip: Start with Gemini CLI (180K free/month) + iFlow (unlimited free) combo = $0 cost!
Problem: Quota expires unused, rate limits during heavy coding
Combo: "maximize-claude"
1. cc/claude-opus-4-6 (use subscription fully)
2. glm/glm-4.7 (cheap backup when quota out)
3. if/kimi-k2-thinking (free emergency fallback)
Monthly cost: $20 (subscription) + ~$5 (backup) = $25 total
vs. $20 + hitting limits = frustration
Problem: Can't afford subscriptions, need reliable AI coding
Combo: "free-forever"
1. gc/gemini-3-flash (180K free/month)
2. if/kimi-k2-thinking (unlimited free)
3. qw/qwen3-coder-plus (unlimited free)
Monthly cost: $0
Quality: Production-ready models
Problem: Deadlines, can't afford downtime
Combo: "always-on"
1. cc/claude-opus-4-6 (best quality)
2. cx/gpt-5.2-codex (second subscription)
3. glm/glm-4.7 (cheap, resets daily)
4. minimax/MiniMax-M2.1 (cheapest, 5h reset)
5. if/kimi-k2-thinking (free unlimited)
Result: 5 layers of fallback = zero downtime
Problem: Need AI assistant in messaging apps, completely free
Combo: "openclaw-free"
1. if/glm-4.7 (unlimited free)
2. if/minimax-m2.1 (unlimited free)
3. if/kimi-k2-thinking (unlimited free)
Monthly cost: $0
Access via: WhatsApp, Telegram, Slack, Discord, iMessage, Signal...
| Feature | What It Does |
|---|---|
| Smart 4-Tier Fallback | Auto-route: Subscription → API Key → Cheap → Free |
| Real-Time Quota Tracking | Live token count + reset countdown per provider |
| Format Translation | OpenAI ↔ Claude ↔ Gemini ↔ Cursor ↔ Kiro seamless + response sanitization |
| Multi-Account Support | Multiple accounts per provider with intelligent selection |
| Auto Token Refresh | OAuth tokens refresh automatically with retry |
| Custom Combos | 6 strategies: fill-first, round-robin, p2c, random, least-used, cost-optimized |
| Custom Models | Add any model ID to any provider |
| Wildcard Router | Route provider/* patterns to any provider dynamically |
| Thinking Budget | Passthrough, auto, custom, and adaptive modes for reasoning models |
| System Prompt Injection | Global system prompt applied across all requests |
| Responses API | Full OpenAI Responses API (/v1/responses) support for Codex |
| Feature | What It Does |
|---|---|
| Image Generation | /v1/images/generations – 4 providers, 9+ models |
| Embeddings | /v1/embeddings – 6 providers, 9+ models |
| Audio Transcription | /v1/audio/transcriptions – Whisper-compatible |
| Text-to-Speech | /v1/audio/speech – Multi-provider audio synthesis |
| Moderations | /v1/moderations – Content safety checks |
| Reranking | /v1/rerank – Document relevance reranking |
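The multi-modal endpoints accept OpenAI-style request bodies. A hedged embeddings sketch (the model id is a placeholder; use one from a provider you have connected):

```shell
BASE_URL="http://localhost:20128/v1"
EMBED_PAYLOAD='{"model":"<your-embedding-model>","input":["OmniRoute routes everything"]}'

# Needs a running instance plus a connected embeddings provider.
curl -s "$BASE_URL/embeddings" \
  -H "Authorization: Bearer sk_example_key" \
  -H "Content-Type: application/json" \
  -d "$EMBED_PAYLOAD" || true
```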
| Feature | What It Does |
|---|---|
| Circuit Breaker | Auto open/close per provider with configurable thresholds |
| Anti-Thundering Herd | Mutex + semaphore rate limiting for API key providers |
| Semantic Cache | Two-tier cache (signature + semantic) reduces cost & latency |
| Request Idempotency | 5s dedup window for duplicate requests |
| TLS Fingerprint Spoofing | Bypass TLS-based bot detection via wreq-js |
| IP Filtering | Allowlist/blocklist for API access control |
| Editable Rate Limits | Configurable RPM, min gap, and max concurrent at system level |
| Feature | What It Does |
|---|---|
| Request Logging | Debug mode with full request/response logs |
| SQLite Proxy Logs | Persistent proxy logs survive server restarts |
| Analytics Dashboard | Recharts-powered: stat cards, model usage chart, provider table |
| Progress Tracking | Opt-in SSE progress events for streaming |
| LLM Evaluations | Golden-set testing with 4 match strategies |
| Request Telemetry | p50/p95/p99 latency aggregation + X-Request-Id tracing |
| Request Logs + Quotas | Dedicated pages for log browsing and limits/quotas tracking |
| Health Dashboard | System uptime, circuit breaker states, lockouts, cache stats |
| Cost Tracking | Budget management + per-model pricing configuration |
| Feature | What It Does |
|---|---|
| Cloud Sync | Sync config across devices via Cloudflare Workers |
| Deploy Anywhere | Localhost, VPS, Docker, Cloudflare Workers |
| API Key Management | Generate, rotate, and scope API keys per provider |
| Onboarding Wizard | 4-step guided setup for first-time users |
| CLI Tools Dashboard | One-click configuration for Claude, Codex, Cline, OpenClaw, Kilo, Antigravity |
| DB Backups | Automatic backup, restore, export & import for all settings |
Feature Details
Create combos with automatic fallback:
Combo: "my-coding-stack"
1. cc/claude-opus-4-6 (your subscription)
2. nvidia/llama-3.3-70b (free NVIDIA API)
3. glm/glm-4.7 (cheap backup, $0.6/1M)
4. if/kimi-k2-thinking (free fallback)
→ Auto-switches when quota runs out or errors occur
- Token consumption per provider
- Reset countdown (5-hour, daily, weekly)
- Cost estimation for paid tiers
- Monthly spending reports
Seamless translation between formats:
- OpenAI ↔ Claude ↔ Gemini ↔ OpenAI Responses
- Your CLI tool sends OpenAI format → OmniRoute translates → provider receives its native format
- Works with any tool that supports custom OpenAI endpoints
- Response sanitization – strips non-standard fields for strict OpenAI SDK compatibility
- Role normalization – `developer` → `system` for non-OpenAI; `system` → `user` for GLM/ERNIE models
- Think tag extraction – `<think>` blocks → `reasoning_content` for thinking models
- Structured output – `json_schema` → Gemini's `responseMimeType`/`responseSchema`
- Add multiple accounts per provider
- Auto round-robin or priority-based routing
- Fallback to next account when one hits quota
- OAuth tokens automatically refresh before expiration
- No manual re-authentication needed
- Seamless experience across all providers
- Create unlimited model combinations
- 6 strategies: fill-first, round-robin, power-of-two-choices, random, least-used, cost-optimized
- Share combos across devices with Cloud Sync
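A combo name is used exactly like a model id. A sketch assuming the `my-coding-stack` combo from the example above exists (key is a placeholder):

```shell
BASE_URL="http://localhost:20128/v1"
COMBO_PAYLOAD='{"model":"my-coding-stack","messages":[{"role":"user","content":"Refactor this function"}]}'

# OmniRoute resolves the combo and falls through its tiers on quota exhaustion or errors.
curl -s "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer sk_example_key" \
  -H "Content-Type: application/json" \
  -d "$COMBO_PAYLOAD" || true
```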
- System status (uptime, version, memory usage)
- Circuit breaker states per provider (Closed/Open/Half-Open)
- Rate limit status and active lockouts
- Signature cache statistics
- Latency telemetry (p50/p95/p99) + prompt cache
- Reset health status with one click
OmniRoute includes a powerful built-in Translator Playground with 4 modes for debugging, testing, and monitoring API translations:
| Mode | Description |
|---|---|
| Playground | Direct format translation – paste any API request body and instantly see how OmniRoute translates it between provider formats (OpenAI ↔ Claude ↔ Gemini ↔ Responses API). Includes example templates and format auto-detection. |
| Chat Tester | Send real chat requests through OmniRoute and see the full round trip: your input, the translated request, the provider response, and the translated response back. Invaluable for validating combo routing. |
| Test Bench | Batch testing mode – define multiple test cases with different inputs and expected outputs, run them all at once, and compare results across providers and models. |
| Live Monitor | Real-time request monitoring – watch incoming requests as they flow through OmniRoute, see format translations happening live, and identify issues instantly. |

Access: Dashboard → Translator (sidebar)
- Sync providers, combos, and settings across devices
- Automatic background sync
- Secure encrypted storage
Subscription Providers

Dashboard → Providers → Connect Claude Code
→ OAuth login → Auto token refresh
→ 5-hour + weekly quota tracking

Models:

```
cc/claude-opus-4-6
cc/claude-sonnet-4-5-20250929
cc/claude-haiku-4-5-20251001
```

Pro Tip: Use Opus for complex tasks, Sonnet for speed. OmniRoute tracks quota per model!
Dashboard → Providers → Connect Codex
→ OAuth login (port 1455)
→ 5-hour + weekly reset

Models:

```
cx/gpt-5.2-codex
cx/gpt-5.1-codex-max
```

Dashboard → Providers → Connect Gemini CLI
→ Google OAuth
→ 180K completions/month + 1K/day

Models:

```
gc/gemini-3-flash-preview
gc/gemini-2.5-pro
```

Best Value: Huge free tier! Use this before paid tiers.
Dashboard → Providers → Connect GitHub
→ OAuth via GitHub
→ Monthly reset (1st of month)

Models:

```
gh/gpt-5
gh/claude-4.5-sonnet
gh/gemini-3-pro
```

API Key Providers
- Sign up: build.nvidia.com
- Get a free API key (1000 inference credits included)
- Dashboard → Add Provider → NVIDIA NIM:
  - API Key: `nvapi-your-key`

Models: `nvidia/llama-3.3-70b-instruct`, `nvidia/mistral-7b-instruct`, and 50+ more

Pro Tip: OpenAI-compatible API – works seamlessly with OmniRoute's format translation!
- Sign up: platform.deepseek.com
- Get API key
- Dashboard → Add Provider → DeepSeek
Models: deepseek/deepseek-chat, deepseek/deepseek-coder
- Sign up: console.groq.com
- Get API key (free tier included)
- Dashboard → Add Provider → Groq
Models: groq/llama-3.3-70b, groq/mixtral-8x7b
Pro Tip: Ultra-fast inference – best for real-time coding!
- Sign up: openrouter.ai
- Get API key
- Dashboard → Add Provider → OpenRouter
Models: Access 100+ models from all major providers through a single API key.
Cheap Providers (Backup)

- Sign up: Zhipu AI
- Get an API key from the Coding Plan
- Dashboard → Add API Key:
  - Provider: `glm`
  - API Key: `your-key`

Use: `glm/glm-4.7`

Pro Tip: The Coding Plan offers 3× the quota at 1/7 the cost! Resets daily at 10:00 AM.
- Sign up: MiniMax
- Get API key
- Dashboard → Add API Key
Use: minimax/MiniMax-M2.1
Pro Tip: Cheapest option for long context (1M tokens)!
- Subscribe: Moonshot AI
- Get API key
- Dashboard → Add API Key
Use: kimi/kimi-latest
Pro Tip: Fixed $9/month for 10M tokens = $0.90/1M effective cost!
FREE Providers (Emergency Backup)

Dashboard → Connect iFlow
→ iFlow OAuth login
→ Unlimited usage

Models:

```
if/kimi-k2-thinking
if/qwen3-coder-plus
if/glm-4.7
if/minimax-m2
if/deepseek-r1
```

Dashboard → Connect Qwen
→ Device code authorization
→ Unlimited usage

Models:

```
qw/qwen3-coder-plus
qw/qwen3-coder-flash
```

Dashboard → Connect Kiro
→ AWS Builder ID or Google/GitHub
→ Unlimited usage

Models:

```
kr/claude-sonnet-4.5
kr/claude-haiku-4.5
```

Create Combos
Dashboard → Combos → Create New
Name: premium-coding
Models:
1. cc/claude-opus-4-6 (Subscription primary)
2. glm/glm-4.7 (Cheap backup, $0.6/1M)
3. minimax/MiniMax-M2.1 (Cheapest fallback, $0.20/1M)
Use in CLI: premium-coding
Name: free-combo
Models:
1. gc/gemini-3-flash-preview (180K free/month)
2. if/kimi-k2-thinking (unlimited)
3. qw/qwen3-coder-plus (unlimited)
Cost: $0 forever!
CLI Integration
Settings → Models → Advanced:
OpenAI API Base URL: http://localhost:20128/v1
OpenAI API Key: [from OmniRoute dashboard]
Model: cc/claude-opus-4-6
Use the CLI Tools page in the dashboard for one-click configuration, or edit ~/.claude/settings.json manually.
```shell
export OPENAI_BASE_URL="http://localhost:20128"
export OPENAI_API_KEY="your-omniroute-api-key"
codex "your prompt"
```

Option 1 – Dashboard (recommended):

Dashboard → CLI Tools → OpenClaw → Select Model → Apply

Option 2 – Manual: edit ~/.openclaw/openclaw.json:
```json
{
  "models": {
    "providers": {
      "omniroute": {
        "baseUrl": "http://127.0.0.1:20128/v1",
        "apiKey": "sk_omniroute",
        "api": "openai-completions"
      }
    }
  }
}
```

Note: OpenClaw only works with a local OmniRoute. Use `127.0.0.1` instead of `localhost` to avoid IPv6 resolution issues.
Settings → API Configuration:
Provider: OpenAI Compatible
Base URL: http://localhost:20128/v1
API Key: [from OmniRoute dashboard]
Model: if/kimi-k2-thinking
View all available models
Claude Code (cc/) – Pro/Max:
`cc/claude-opus-4-6`, `cc/claude-sonnet-4-5-20250929`, `cc/claude-haiku-4-5-20251001`

Codex (cx/) – Plus/Pro:
`cx/gpt-5.2-codex`, `cx/gpt-5.1-codex-max`

Gemini CLI (gc/) – FREE:
`gc/gemini-3-flash-preview`, `gc/gemini-2.5-pro`

GitHub Copilot (gh/):
`gh/gpt-5`, `gh/claude-4.5-sonnet`

NVIDIA NIM (nvidia/) – FREE credits:
`nvidia/llama-3.3-70b-instruct`, `nvidia/mistral-7b-instruct`, plus 50+ more models on build.nvidia.com

GLM (glm/) – $0.6/1M:
`glm/glm-4.7`

MiniMax (minimax/) – $0.2/1M:
`minimax/MiniMax-M2.1`

iFlow (if/) – FREE:
`if/kimi-k2-thinking`, `if/qwen3-coder-plus`, `if/deepseek-r1`, `if/glm-4.7`, `if/minimax-m2`

Qwen (qw/) – FREE:
`qw/qwen3-coder-plus`, `qw/qwen3-coder-flash`

Kiro (kr/) – FREE:
`kr/claude-sonnet-4.5`, `kr/claude-haiku-4.5`

OpenRouter (or/) – 100+ models:
`or/anthropic/claude-4-sonnet`, `or/google/gemini-2.5-pro`, or any model from openrouter.ai/models
OmniRoute includes a built-in evaluation framework to test LLM response quality against a golden set. Access it via Analytics → Evals in the dashboard.
The pre-loaded "OmniRoute Golden Set" contains 10 test cases covering:
- Greetings, math, geography, code generation
- JSON format compliance, translation, markdown
- Safety refusal (harmful content), counting, boolean logic
| Strategy | Description | Example |
|---|---|---|
| `exact` | Output must match exactly | `"4"` |
| `contains` | Output must contain substring (case-insensitive) | `"Paris"` |
| `regex` | Output must match regex pattern | `"1.*2.*3"` |
| `custom` | Custom JS function returns true/false | `(output) => output.length > 10` |
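As an illustration, the `regex` strategy's example pattern `1.*2.*3` accepts any output that mentions 1, 2, and 3 in order; the equivalent check in shell:

```shell
OUTPUT="Steps: 1) install, 2) connect a provider, 3) point your CLI at the proxy"

# grep -E applies the same pattern the eval strategy would.
if echo "$OUTPUT" | grep -Eq '1.*2.*3'; then
  RESULT=pass
else
  RESULT=fail
fi
echo "$RESULT"   # prints "pass" for this output
```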
Click to expand troubleshooting guide
"Language model did not provide messages"
- Provider quota exhausted → check the dashboard quota tracker
- Solution: Use combo fallback or switch to cheaper tier
Rate limiting
- Subscription quota out → fall back to GLM/MiniMax
- Add a combo: `cc/claude-opus-4-6` → `glm/glm-4.7` → `if/kimi-k2-thinking`
OAuth token expired
- Auto-refreshed by OmniRoute
- If issues persist: Dashboard → Provider → Reconnect
High costs
- Check usage stats in Dashboard → Costs
- Switch primary model to GLM/MiniMax
- Use free tier (Gemini CLI, iFlow) for non-critical tasks
Dashboard opens on wrong port
- Set `PORT=20128` and `NEXT_PUBLIC_BASE_URL=http://localhost:20128`
Cloud sync errors
- Verify `BASE_URL` points to your running instance
- Verify `CLOUD_URL` points to your expected cloud endpoint
- Keep `NEXT_PUBLIC_*` values aligned with the server-side values
First login not working
- Check `INITIAL_PASSWORD` in `.env`
- If unset, the fallback password is `123456`
No request logs
- Set `ENABLE_REQUEST_LOGS=true` in `.env`
Connection test shows "Invalid" for OpenAI-compatible providers
- Many providers don't expose a `/models` endpoint
- OmniRoute v1.0.2+ includes fallback validation via chat completions
- Ensure the base URL includes the `/v1` suffix
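A quick sanity check for these base-URL issues (a sketch; adjust the URL to your setup):

```shell
BASE_URL="http://localhost:20128/v1"

# 1. OpenAI-compatible clients expect the /v1 suffix.
case "$BASE_URL" in
  */v1) echo "suffix ok" ;;
  *)    echo "missing /v1 suffix" ;;
esac

# 2. Probe the endpoint (needs a running instance; prints the HTTP status code).
curl -s -o /dev/null -w "%{http_code}\n" "$BASE_URL/models" || true
```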
- Runtime: Node.js 20+
- Language: TypeScript 5.9 – 100% TypeScript across `src/` and `open-sse/` (v1.0.2)
- Framework: Next.js 16 + React 19 + Tailwind CSS 4
- Database: LowDB (JSON) + SQLite (domain state + proxy logs)
- Streaming: Server-Sent Events (SSE)
- Auth: OAuth 2.0 (PKCE) + JWT + API Keys
- Testing: Node.js test runner (368+ unit tests)
- CI/CD: GitHub Actions (auto npm publish + Docker Hub on release)
- Website: omniroute.online
- Package: npmjs.com/package/omniroute
- Docker: hub.docker.com/r/diegosouzapw/omniroute
- Resilience: Circuit breaker, exponential backoff, anti-thundering herd, TLS spoofing
| Document | Description |
|---|---|
| User Guide | Providers, combos, CLI integration, deployment |
| API Reference | All endpoints with examples |
| Troubleshooting | Common problems and solutions |
| Architecture | System architecture and internals |
| Contributing | Development setup and guidelines |
| OpenAPI Spec | OpenAPI 3.0 specification |
| Security Policy | Vulnerability reporting and security practices |
| VM Deployment | Complete guide: VM + nginx + Cloudflare setup |
| Features Gallery | Visual dashboard tour with screenshots |
Click to see dashboard screenshots
Screenshots cover the Providers, Combos, Analytics, Health, Translator, Settings, CLI Tools, Usage Logs, and Endpoint pages (images omitted here; see the Features Gallery in the docs).
OmniRoute has 210+ features planned across multiple development phases. Here are the key areas:
| Category | Planned Features | Highlights |
|---|---|---|
| Routing & Intelligence | 25+ | Lowest-latency routing, tag-based routing, quota preflight, P2C account selection |
| Security & Compliance | 20+ | SSRF hardening, credential cloaking, per-endpoint rate limits, management key scoping |
| Observability | 15+ | OpenTelemetry integration, real-time quota monitoring, cost tracking per model |
| Provider Integrations | 20+ | Dynamic model registry, provider cooldowns, multi-account Codex, Copilot quota parsing |
| Performance | 15+ | Dual cache layer, prompt cache, response cache, streaming keepalive, batch API |
| Ecosystem | 10+ | WebSocket API, config hot-reload, distributed config store, commercial mode |
- OpenCode Integration – Native provider support for the OpenCode AI coding IDE
- TRAE Integration – Full support for the TRAE AI development framework
- Batch API – Asynchronous batch processing for bulk requests
- Tag-Based Routing – Route requests based on custom tags and metadata
- Lowest-Cost Strategy – Automatically select the cheapest available provider
Full feature specifications are available in `docs/new-features/` (217 detailed specs).
- Website: omniroute.online
- GitHub: github.com/diegosouzapw/OmniRoute
- Issues: github.com/diegosouzapw/OmniRoute/issues
- Original Project: 9router by decolua
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
```shell
# Create a release – npm publish happens automatically
gh release create v1.0.2 --title "v1.0.2" --generate-notes
```

Special thanks to 9router by decolua – the original project that inspired this fork. OmniRoute builds upon that incredible foundation with additional features, multi-modal APIs, and a full TypeScript rewrite.
Special thanks to CLIProxyAPI β the original Go implementation that inspired this JavaScript port.
MIT License - see LICENSE for details.
omniroute.online








