feat: Ollama provider — local & cloud LLM runtime (#2735)
Add a full Ollama provider driver integrated with the existing ProviderDriver pattern, enabling T3 Code to use local and cloud Ollama instances as a first-class provider alongside Codex, Claude, OpenCode, and Cursor.

## Backend

- `ollamaRuntime.ts` — HTTP helpers for the Ollama REST API: `/api/chat` (streaming), `/api/tags` (model listing), `/api/version` (health check). Supports Bearer token auth via the `OLLAMA_API_KEY` env variable.
- `OllamaTools.ts` — 7 built-in tools: `read_file`, `write_file`, `edit_file`, `bash`, `list_directory`, `search_files`, `web_fetch`. Each tool has classify + execute phases matching the tool-calling protocol.
- `OllamaAdapter.ts` — full provider adapter with session lifecycle, tool-calling loop, approval flow via `Deferred`, and fork support (`runForkWith`). Streams structured runtime events (`turn.started`, `content.delta`, `item.completed`, `tool.use`, `tool.result`, etc.).
- `OllamaProvider.ts` — health check provider: pings `/api/tags` at startup and on a 5-minute refresh interval, exposes available models directly from the Ollama server.
- `OllamaDriver.ts` — `ProviderDriver<OllamaSettings, OllamaDriverEnv>` bridging adapter + text generation + snapshot management into a single entry point.
- `OllamaTextGeneration.ts` — commit message, PR content, branch name, and thread title generation via Ollama chat completions.
- `OllamaDriver` registered in the `BUILT_IN_DRIVERS` array and the `BuiltInDriversEnv` union type.

## Contracts

- `OllamaSettings` schema: `enabled`, `baseUrl` (default `http://localhost:11434`), `model`, `customModels`. Patch schema for runtime updates.
- Ollama driver kind registered in the driver model contract for model-selection routing.

## Frontend

- `OllamaIcon` SVG (simple-icons Ollama logo).
- Ollama entry in `PROVIDER_CLIENT_DEFINITIONS` with icon, label, and settings schema for the settings panel.
- Ollama added to the `PROVIDER_ICON_BY_PROVIDER` mapping.
- Typecheck fix for the added model field.
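The streaming chat path can be sketched roughly as follows. This is a minimal sketch, not the actual `ollamaRuntime.ts` code; `streamChat` and `splitNdjson` are hypothetical names, and Node 18+ `fetch` with async-iterable response bodies is assumed. Ollama streams one JSON object per line, so the client must buffer partial lines across network chunks:

```typescript
// Split a streaming buffer into complete JSON lines plus the unconsumed tail.
export function splitNdjson(buffer: string): { lines: string[]; rest: string } {
  const parts = buffer.split("\n");
  const rest = parts.pop() ?? ""; // the last element is always the incomplete tail
  return { lines: parts.filter((l) => l.trim().length > 0), rest };
}

interface OllamaChatChunk {
  message?: { role: string; content: string };
  done: boolean;
}

// Hypothetical streaming helper against POST /api/chat.
export async function* streamChat(
  baseUrl: string,
  model: string,
  messages: { role: string; content: string }[],
  apiKey?: string,
): AsyncGenerator<OllamaChatChunk> {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
    },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  if (!res.ok || !res.body) throw new Error(`Ollama chat failed: ${res.status}`);

  const decoder = new TextDecoder();
  let buffer = "";
  // Node 18+ web streams are async-iterable.
  for await (const chunk of res.body as unknown as AsyncIterable<Uint8Array>) {
    buffer += decoder.decode(chunk, { stream: true });
    const { lines, rest } = splitNdjson(buffer);
    buffer = rest;
    for (const line of lines) yield JSON.parse(line) as OllamaChatChunk;
  }
}
```

Keeping the line-splitting step pure makes the chunk-boundary handling testable without a live Ollama server.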
## Usage

Configure Ollama in Settings → Providers:

- Server URL: `http://localhost:11434` (local) or `https://ollama.com`
- API key via the `OLLAMA_API_KEY` environment variable (for Ollama Cloud)
- Models are auto-discovered from `/api/tags`
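Model auto-discovery amounts to a GET against `/api/tags` and reading the `models` array from the response. The sketch below is illustrative only; the helper names are hypothetical, and the response shape follows Ollama's documented `/api/tags` payload:

```typescript
interface OllamaTagsResponse {
  // Note: modified_at is snake_case in the Ollama API.
  models?: { name: string; modified_at?: string }[];
}

// Pure extraction step, separated so it is easy to test.
export function extractModelNames(body: OllamaTagsResponse): string[] {
  return (body.models ?? []).map((m) => m.name);
}

// Hypothetical discovery helper; Node 18+ fetch assumed.
export async function listModels(baseUrl: string, apiKey?: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!res.ok) throw new Error(`Ollama model listing failed: ${res.status}`);
  return extractModelNames((await res.json()) as OllamaTagsResponse);
}
```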
**Approvability verdict:** Needs human review.

This PR introduces a complete new Ollama provider with session management, tool execution (including shell commands and file operations), and LLM integration — substantial new user-facing capability that warrants human review. An unresolved medium-severity comment also identifies inconsistent model selection handling.
_Force-pushed `2d364b8` → `b4ede4e`_
**…le, rollback indices**

## What changed

- **ollamaRuntime.ts**: `Effect.promise` → `Effect.tryPromise` (4 call sites). Network errors are now typed `OllamaRuntimeError` failures instead of unrecoverable defects. `TextDecoder` now flushes on the final chunk to prevent UTF-8 truncation at multi-byte boundaries.
- **OllamaProvider.ts**: `checkOllamaProviderStatus` receives `processEnv` per-instance instead of reading `process.env.OLLAMA_API_KEY` globally.
- **OllamaDriver.ts**: passes `processEnv` from `mergeProviderInstanceEnvironment` to `checkOllamaProviderStatus` (was missing before — the provider status check was broken when `OLLAMA_API_KEY` came from instance env rather than global env).
- **OllamaAdapter.ts**: tracks the `Fiber` from `runFork` in `context.activeFiber`. The finalizer and `stopSession` now interrupt the fiber — no more leaks on session shutdown. `rollbackThread` uses per-turn message indices instead of `numTurns * 2`, correctly handling tool-call turns.
- **OllamaTextGeneration.ts**: default fallback model aligned to `llama3.2` (matches `contracts/DEFAULT_MODEL_BY_PROVIDER`).

## Why

The Ollama provider runtime was built on `Effect.promise`, which converts errors to `die()` defects (unrecoverable). Network flakiness, timeouts, or DNS failures would crash the Effect fiber instead of surfacing as catchable `fail()` errors. Each call site has been migrated to `Effect.tryPromise` so all network-layer errors flow through the standard error channel.

The session lifecycle had two leaks: (1) fibers were not interrupted on `stopSession` or in the finalizer, so they would keep running after session teardown, and (2) `rollbackThread` used `numTurns * 2`, which breaks when turns contain tool-call exchanges (an odd number of messages). Both are fixed.

The `processEnv` isolation bug meant `checkOllamaProviderStatus` read the global environment, so instances configured via workspace env (not global) would fail provider health checks.
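The UTF-8 truncation fix is easy to demonstrate in isolation: without a final flushing `decode()` call, a multi-byte code point split across chunks is silently dropped. A minimal standalone sketch (not the actual runtime code; `decodeChunks` is a hypothetical name):

```typescript
// Decode a sequence of byte chunks as streamed UTF-8.
export function decodeChunks(chunks: Uint8Array[]): string {
  const decoder = new TextDecoder();
  let out = "";
  for (const c of chunks) {
    out += decoder.decode(c, { stream: true }); // buffers trailing partial code points
  }
  out += decoder.decode(); // final flush; omitting this drops a trailing partial character
  return out;
}
```

For example, `"é"` encodes as the two bytes `0xC3 0xA9`; with a chunk boundary between those bytes, the streaming decode still yields `"é"`, whereas decoding each chunk independently would emit replacement characters.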
## Checklist

- [x] This PR is small and focused
- [x] I explained what changed and why
- [N/A] I included before/after screenshots for any UI changes
- [N/A] I included a video for animation/interaction changes
_Force-pushed `b4ede4e` → `21acf51`_
- `interruptTurn` no longer permanently closes the session: `sendTurn` now clears the stopped flag instead of rejecting with `SessionClosedError`
- `rollbackThread` works: `turnMessageIndices` is populated at each turn start, so whole turns (incl. tool messages) splice correctly
- default model unified to `qwen2.5:7b` across contracts, adapter, and text generation (was `llama3.2` in contracts only)
- `ollamaVersion` probe wrapped in `Effect.exit` so an unreachable server no longer crashes the whole status check
- `ollamaListModels` reads `modified_at` (snake_case) from `/api/tags`
- `OllamaTextGeneration` catches `OllamaRuntimeError` by tag instead of a blanket `mapError`, avoiding double-wrapped `TextGenerationError`

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
```ts
if (!text || text.length === 0) {
  return yield* new ProviderAdapterValidationError({ provider: PROVIDER, operation: "sendTurn", issue: "Ollama turns require text input." });
}
const model = input.modelSelection?.model ?? context.activeModel;
```
**Empty model string not handled in sendTurn** (Medium Severity)

The model resolution in `sendTurn` uses nullish coalescing (`??`), which won't fall back to `context.activeModel` when `input.modelSelection?.model` is an empty string `""`. By contrast, `startSession` correctly checks `trim().length > 0` before using the provided model. If the client sends a `modelSelection` with an empty `model` field, `sendTurn` will pass `""` as the model name to `ollamaChat`, causing the API call to fail.
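A defensive resolution helper along these lines would close the gap. This is a hypothetical sketch mirroring `startSession`'s `trim().length > 0` check, not the actual patch:

```typescript
// Treat undefined, empty, and whitespace-only model names as "not provided",
// falling back to the session's active model.
export function resolveModel(requested: string | undefined, activeModel: string): string {
  return requested !== undefined && requested.trim().length > 0
    ? requested
    : activeModel;
}
```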
Reviewed by Cursor Bugbot for commit 6c74fed.
```ts
session: ProviderSession;
readonly threadId: ThreadId;
readonly messages: OllamaChatMessage[];
readonly runtimeEvents: Queue.Queue<ProviderRuntimeEvent>;
```
**Unused runtimeEvents field stored in session context** (Low Severity)

The `OllamaSessionContext` interface declares a `runtimeEvents` field (line 44), and each session stores a reference to the shared queue (line 125), but this field is never accessed through the session context. The `emit` function at line 98 captures the outer `runtimeEvents` variable directly via closure. This is dead storage that adds confusion about whether events are per-session or per-adapter.
Reviewed by Cursor Bugbot for commit 6c74fed.
Ollama Cloud's `/api/version` returns `{"version":"0.0.0"}` as a placeholder. Treat it as no version so the provider list shows no version chip instead of a meaningless "v0.0.0".
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
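The placeholder handling reduces to a small normalization step. The helper name below is hypothetical; the `0.0.0` value is what Ollama Cloud's `/api/version` returns per the commit above:

```typescript
// Map Ollama Cloud's "0.0.0" placeholder (and empty values) to "no version",
// so the provider list can simply hide the version chip.
export function normalizeVersion(raw: string | undefined): string | undefined {
  const v = raw?.trim();
  return v && v !== "0.0.0" ? v : undefined;
}
```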
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 4 total unresolved issues (including 3 from previous reviews).
Reviewed by Cursor Bugbot for commit 6217bd6.
`interruptTurn` only flipped the stopped flag; if the turn fiber was mid-`ollamaChat` HTTP call, it would not observe the flag before the next `sendTurn` reset it to false and forked a second fiber, so two fibers mutated `context.messages` concurrently.

- `sendTurn` now interrupts and awaits any prior `activeFiber` before resetting the flag and forking, guaranteeing a single live fiber.
- `interruptTurn` interrupts the fiber instead of only setting the flag.
- `turn.completed` is emitted from a single `Effect.onExit` handler, so it fires exactly once whether the turn completes, fails, or is interrupted (previously skipped when the fiber was interrupted).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
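The interrupt-then-fork ordering can be illustrated with a plain-TypeScript analogy. This is a sketch only; `TurnRunner` is a hypothetical stand-in that uses `AbortController` and promises where the real adapter uses Effect fibers, `Fiber.interrupt`, and `Effect.onExit`:

```typescript
// Simplified analogy of the single-live-fiber fix: sendTurn cancels and awaits
// any in-flight turn before starting the next, so at most one turn runs at a time.
export class TurnRunner {
  private active?: { done: Promise<void>; abort: AbortController };
  public completions = 0; // stands in for the turn.completed event

  async sendTurn(work: (signal: AbortSignal) => Promise<void>): Promise<void> {
    if (this.active) {
      this.active.abort.abort(); // interrupt any prior in-flight turn,
      await this.active.done;    // then await it, so only one turn is ever live
    }
    const abort = new AbortController();
    const done = work(abort.signal)
      .catch(() => {}) // interruption is expected here, not an error
      .finally(() => {
        this.completions += 1; // fires exactly once: complete, fail, or interrupt
      });
    this.active = { done, abort };
    await done;
  }
}
```

Firing a second `sendTurn` while the first is blocked aborts the first turn and still records exactly one completion per turn.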


## What changed
Adds Ollama as a first-class provider driver, alongside Codex, Claude, OpenCode and Cursor — backed by a local or cloud Ollama runtime.
### Server (`apps/server`)

- `ollamaRuntime.ts` — HTTP client for the Ollama REST API: `/api/chat` (streaming + non-streaming), `/api/tags`, `/api/version`. Bearer auth via `OLLAMA_API_KEY`.
- `OllamaTools.ts` — 7 built-in tools the model can call: `read_file`, `write_file`, `edit_file`, `bash`, `list_directory`, `search_files`, `web_fetch`.
- `OllamaAdapter.ts` — session lifecycle + tool-calling loop, reusing the shared approval flow (`request.opened` → `Deferred` → `respondToRequest`).
- `OllamaProvider.ts` — status probe (version + model list) on a 5-minute refresh.
- `OllamaDriver.ts` — wires adapter + text generation + snapshot into one driver; registered in `builtInDrivers.ts`.
- `OllamaTextGeneration.ts` — commit message / PR content / branch name / thread title generation.

### Contracts (`packages/contracts`)

- `OllamaSettings` schema (`enabled`, `baseUrl`, `model`, `customModels`) + patch schema.
- `ollama` driver kind and default models registered for model-selection routing.

### Frontend (`apps/web`)

- `OllamaIcon` SVG, wired into the settings panel and the provider icon map.

### Bot-review fixes

Issues raised by Cursor Bugbot / Macroscope, addressed across follow-up commits:

- `interruptTurn` no longer permanently closes the session.
- `turn.completed` is emitted exactly once per turn via `Effect.onExit`.
- `rollbackThread` works — per-turn message offsets are tracked, so whole turns (incl. tool messages) splice correctly.
- Default model unified to `qwen2.5:7b` across contracts, adapter, and text generation.
- Status probe wrapped in `Effect.exit` — an unreachable server no longer crashes the whole status check; Ollama Cloud's placeholder `0.0.0` version is hidden.
- `/api/tags` parsing reads snake_case `modified_at`.
- Text generation catches `OllamaRuntimeError` by tag instead of a blanket `mapError`, avoiding double-wrapped `TextGenerationError`.

## Why
T3 Code only supported cloud agent providers. Ollama adds a local-first option (and Ollama Cloud) so users can run a coding agent against models on their own hardware — with the same approval-gated tool flow as the other providers.
## Screenshots
Provider list — Ollama with its icon, version chip and discovered models:

## Checklist
**Note: High Risk**
Adds a new provider runtime that can execute host-side commands and file writes via model tool-calls (approval-gated but still security-sensitive) and introduces new network/streaming code paths for chat, model listing, and status probing.
Overview
Adds Ollama as a first-class provider across server, contracts, and web UI.
On the server, introduces a full Ollama driver stack: an HTTP runtime for `/api/chat` (plus streaming parser), `/api/tags`, and `/api/version`; a session adapter that runs turns, emits runtime events, and supports approval-gated tool calls (including `bash`/file operations) with interruption/rollback handling; a provider status probe + model discovery snapshot refreshed on an interval; and a text-generation implementation for commit/PR/branch/thread outputs.

In contracts and web, adds `OllamaSettings` (and patch support), registers default model routing for the `ollama` driver kind, and updates the UI to surface Ollama in provider pickers/settings plus a dedicated `OllamaIcon`.

Reviewed by Cursor Bugbot for commit 6f18cef.
Note
Add Ollama as a local and cloud LLM provider runtime
- Adds an `ollama` provider driver (`OllamaDriver.ts`) that manages provider lifecycle and status refresh, and wires into the existing built-in driver registry.
- Adds `OllamaSettings` to server config contracts and an Ollama icon and settings definition to the web UI.
- Tool calls include shell execution (`bash`) and filesystem write access; these are approval-gated but run as the server process with no sandbox.

Macroscope summarized 6f18cef.