diff --git a/.gitignore b/.gitignore index 9abc662..0d34827 100644 --- a/.gitignore +++ b/.gitignore @@ -28,6 +28,7 @@ builds/typescript/.your-memory.root-owned.backup/ # Dev notes / server-specific docs (contain IPs, internal ports) builds/typescript/status.md +docs/Security/ # Local MCP server overrides builds/typescript/mcp/servers.local.json diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 563d9d1..6edac31 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -38,7 +38,7 @@ This builds and starts everything in Docker. See the [README](README.md) for pre ## Build on It -BrainDrive is built on the [Personal AI Architecture](https://github.com/BrainDriveAI/personal-ai-architecture) and is MIT-licensed. You can use it, extend it, and build on it without waiting for permission. +BrainDrive is built on the [Personal AI Architecture](https://github.com/Personal-AI-Architecture/the-architecture) and is MIT-licensed. You can use it, extend it, and build on it without waiting for permission. ## License diff --git a/README.md b/README.md index b25fd76..1a220d2 100644 --- a/README.md +++ b/README.md @@ -25,6 +25,7 @@ Other AI tools chat. BrainDrive partners with you to get things done. - **A structured path to your goals** — interview → spec → action plan → ongoing partnership - **Life areas built in** — Career, Relationships, Fitness, Finance, plus create your own projects - **Your data stays yours** — conversations, memory, and files live on your machine +- **Memory backup modes** — push memory snapshots to your own Git repo (manual or scheduled) - **Any AI model** — cloud models via API, local models via Ollama, or both - **One install** — runs in Docker on Linux, macOS, and WSL - **MIT licensed** — fork it, extend it, make it yours @@ -68,9 +69,17 @@ irm https://raw.githubusercontent.com/BrainDriveAI/BrainDrive/main/installer/boo 5. **Plan** — the spec becomes an action plan with concrete steps, phases, and milestones. 6. 
**Partner** — come back anytime. Your AI remembers everything and helps you stay on track, adjust plans, and make progress.

-## Architecture
+## For Developers
+
+BrainDrive is built on the [Personal AI Architecture](https://github.com/Personal-AI-Architecture/the-architecture) (PAA) — an open, MIT-licensed standard for user-owned AI systems. Think of PAA as the spec and BrainDrive as the implementation. Anyone can build on the architecture; BrainDrive is our take on it.
+
+| I want to... | Start here |
+|--------------|------------|
+| **Understand the architecture** | [Personal AI Architecture](https://github.com/Personal-AI-Architecture/the-architecture) — foundation spec, component contracts, conformance tests, zero lock-in by design |
+| **Build with AI assistance** | [Architecture Primer](https://github.com/Personal-AI-Architecture/the-architecture/tree/main/docs/ai) — token-optimized reference files designed to hand directly to your AI agent. Compliance matrix, component primers, audit playbooks, canonical examples. |
+| **Hack on BrainDrive** | [CONTRIBUTING.md](CONTRIBUTING.md) — fork, build, run tests, submit a PR |

-BrainDrive implements the [Personal AI Architecture](https://github.com/BrainDriveAI/personal-ai-architecture) (PAA) — an open spec for user-owned AI systems. Every component is swappable. Your Memory is the foundation; everything else can be replaced.
+## Architecture

 ```mermaid
 flowchart LR
@@ -105,6 +114,28 @@ The system runs as two Docker containers: an app server (Gateway + tools) and an

 See [`installer/docker/README.md`](installer/docker/README.md) for production deployment, Windows equivalents, and advanced operations.

+## Memory Backup (MVP)
+
+BrainDrive includes a local-only **Memory Backup** settings tab for backing up memory snapshots to your own HTTPS Git repository.
+
+What it supports:
+
+1. Configure the repository URL, token, and backup frequency in **Settings → Memory Backup**
+2. Run an immediate backup with **Save Now**
+3. 
Run scheduled backups in `after_changes`, `hourly`, or `daily` modes +4. Restore memory from backup branch snapshots + +Important safety behavior: + +1. Restore is **memory-only**. Secrets are not restored from git backup. +2. Backup repository URL must be `https://` (SSH URLs are rejected). +3. Token is stored as a vault secret reference, not plaintext preferences. + +Setup and validation instructions: + +1. Operator notes: [`installer/docker/README.md`](installer/docker/README.md) +2. Step-by-step local test flow: [`docs/onboarding/getting-started-testing-openrouter-docker.md`](docs/onboarding/getting-started-testing-openrouter-docker.md) + ## Operator Quick Usage Support bundle script: @@ -133,7 +164,7 @@ braindrive/ ## Built With -- [Personal AI Architecture](https://github.com/BrainDriveAI/personal-ai-architecture) — the open foundation spec +- [Personal AI Architecture](https://github.com/Personal-AI-Architecture/the-architecture) — the open foundation spec - TypeScript, Fastify, React, Tailwind CSS - Docker and Caddy for deployment - [MCP](https://modelcontextprotocol.io/) for tool integration diff --git a/ROADMAP.md b/ROADMAP.md index 7941b83..acd35e1 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -4,7 +4,7 @@ BrainDrive is built in five phases. Each phase makes your AI system more capable. At scale, something bigger emerges — when millions of owner-controlled AI systems connect, the sum becomes greater than the parts. -Built on the [Personal AI Architecture](https://github.com/BrainDriveAI/personal-ai-architecture) — an open, MIT-licensed foundation for building personal AI systems with zero lock-in. +Built on the [Personal AI Architecture](https://github.com/Personal-AI-Architecture/the-architecture) — an open, MIT-licensed foundation for building personal AI systems with zero lock-in. --- @@ -126,7 +126,7 @@ Phase 2 builds the partnership. Phase 3 gets you off the keyboard. BrainDrive is MIT-licensed and open source. 
We welcome contributions at every level: - **Use it** — install BrainDrive, try the interview flow, report what works and what doesn't -- **Build on it** — the [Personal AI Architecture](https://github.com/BrainDriveAI/personal-ai-architecture) is designed for anyone to build on +- **Build on it** — the [Personal AI Architecture](https://github.com/Personal-AI-Architecture/the-architecture) is designed for anyone to build on - **Contribute code** — check [open issues](https://github.com/BrainDriveAI/braindrive/issues) or pick something from the roadmap above - **Join the community** — [community.braindrive.ai](https://community.braindrive.ai) diff --git a/builds/typescript/adapters/openai-compatible.json b/builds/typescript/adapters/openai-compatible.json index e75dda2..cc7dc3a 100644 --- a/builds/typescript/adapters/openai-compatible.json +++ b/builds/typescript/adapters/openai-compatible.json @@ -1,25 +1,25 @@ { "base_url": "https://my.braindrive.ai/credits/v1", - "model": "claude-sonnet-4-6", + "model": "claude-haiku-4-5", "api_key_env": "AI_GATEWAY_API_KEY", "provider_id": "braindrive-models", "default_provider_profile": "braindrive-models", "provider_profiles": { "braindrive-models": { "base_url": "https://my.braindrive.ai/credits/v1", - "model": "claude-sonnet-4-6", + "model": "claude-haiku-4-5", "api_key_env": "AI_GATEWAY_API_KEY", "provider_id": "braindrive-models" }, "openrouter": { "base_url": "https://openrouter.ai/api/v1", - "model": "anthropic/claude-sonnet-4.6", + "model": "anthropic/claude-haiku-4.5", "api_key_env": "OPENROUTER_API_KEY", "provider_id": "openrouter" }, "ollama": { - "base_url": "http://127.0.0.1:11434/v1", - "model": "llama3.1", + "base_url": "http://host.docker.internal:11434/v1", + "model": "", "api_key_env": "OLLAMA_API_KEY", "provider_id": "ollama" } diff --git a/builds/typescript/client_web/package-lock.json b/builds/typescript/client_web/package-lock.json index 77a4cb5..cddc5e6 100644 --- 
a/builds/typescript/client_web/package-lock.json
+++ b/builds/typescript/client_web/package-lock.json
@@ -1,12 +1,12 @@
 {
   "name": "braindrive-client",
-  "version": "0.1.6",
+  "version": "26.4.8",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "braindrive-client",
-      "version": "0.1.6",
+      "version": "26.4.8",
       "license": "MIT",
       "dependencies": {
         "@ai-sdk/react": "^3.0.118",
diff --git a/builds/typescript/client_web/package.json b/builds/typescript/client_web/package.json
index 463d277..9033fbc 100644
--- a/builds/typescript/client_web/package.json
+++ b/builds/typescript/client_web/package.json
@@ -2,7 +2,7 @@
   "name": "braindrive-client",
   "description": "BrainDrive web client",
   "private": true,
-  "version": "0.1.6",
+  "version": "26.4.8",
   "type": "module",
   "license": "MIT",
   "repository": {
diff --git a/builds/typescript/client_web/src/api/gateway-adapter.test.ts b/builds/typescript/client_web/src/api/gateway-adapter.test.ts
index 4cf1b7a..06602d6 100644
--- a/builds/typescript/client_web/src/api/gateway-adapter.test.ts
+++ b/builds/typescript/client_web/src/api/gateway-adapter.test.ts
@@ -6,19 +6,23 @@ import {
   getOnboardingStatus,
   getProviderModels,
   importLibraryArchive,
+  restoreMemoryBackup,
+  runMemoryBackupNow,
   sendMessage,
+  updateMemoryBackupSettings,
   updateProviderCredential,
   type ChatEvent,
 } from "./gateway-adapter";

-function sseResponse(frames: string): Response {
-  return new Response(frames, {
-    status: 200,
-    headers: {
-      "content-type": "text/event-stream",
-    },
-  });
-}
+function sseResponse(frames: string, headers?: Record<string, string>): Response {
+  return new Response(frames, {
+    status: 200,
+    headers: {
+      "content-type": "text/event-stream",
+      ...(headers ?? 
{}),
+    },
+  });
+}

 async function collectEvents(stream: AsyncIterable<ChatEvent>): Promise<ChatEvent[]> {
   const events: ChatEvent[] = [];
@@ -64,7 +68,7 @@ describe("gateway-adapter SSE parsing", () => {
     ]);
   });

-  it("maps legacy text-delta content field to delta", async () => {
+  it("maps legacy text-delta content field to delta", async () => {
     vi.stubGlobal(
       "fetch",
       vi.fn(async () =>
@@ -82,12 +86,54 @@ describe("gateway-adapter SSE parsing", () => {
     );

     const events = await collectEvents(sendMessage(null, "hi"));
-    expect(events[0]).toMatchObject({
-      type: "text-delta",
-      delta: "Legacy format",
-    });
-  });
-});
+    expect(events[0]).toMatchObject({
+      type: "text-delta",
+      delta: "Legacy format",
+    });
+  });
+
+  it("exposes context-window warnings from response headers", async () => {
+    const onContextWarning = vi.fn();
+    vi.stubGlobal(
+      "fetch",
+      vi.fn(async () =>
+        sseResponse(
+          [
+            "event: done",
+            'data: {"finish_reason":"stop","conversation_id":"conv_3"}',
+            "",
+          ].join("\n"),
+          {
+            "x-context-window-warning": "1",
+            "x-context-window-estimated-tokens": "90000",
+            "x-context-window-budget-tokens": "100000",
+            "x-context-window-ratio": "0.9",
+            "x-context-window-threshold": "0.8",
+            "x-context-window-managed": "1",
+            "x-context-window-message": "This session is getting long.",
+          }
+        )
+      )
+    );
+
+    const events = await collectEvents(sendMessage(null, "hi", { onContextWarning }));
+    expect(events).toEqual([
+      {
+        type: "done",
+        finish_reason: "stop",
+        conversation_id: "conv_3",
+      },
+    ]);
+    expect(onContextWarning).toHaveBeenCalledWith({
+      estimated_tokens: 90000,
+      budget_tokens: 100000,
+      ratio: 0.9,
+      threshold: 0.8,
+      managed: true,
+      message: "This session is getting long.",
+    });
+  });
+});

 describe("gateway-adapter settings models", () => {
   beforeEach(() => {
@@ -166,14 +212,15 @@ describe("gateway-adapter onboarding settings", () => {
     const fetchMock = vi.fn(async () =>
       new Response(
         JSON.stringify({
-          settings: {
-            default_model: "openai/gpt-4o-mini",
-            approval_mode: 
"ask-on-write", - active_provider_profile: "openrouter", - default_provider_profile: "openrouter", - available_models: ["openai/gpt-4o-mini"], - provider_profiles: [], - }, + settings: { + default_model: "openai/gpt-4o-mini", + approval_mode: "ask-on-write", + active_provider_profile: "openrouter", + default_provider_profile: "openrouter", + available_models: ["openai/gpt-4o-mini"], + provider_profiles: [], + memory_backup: null, + }, onboarding: { onboarding_required: false, active_provider_profile: "openrouter", @@ -221,6 +268,7 @@ describe("gateway-adapter onboarding settings", () => { default_provider_profile: "openrouter", available_models: ["openai/gpt-4o-mini"], provider_profiles: [], + memory_backup: null, }, }), { @@ -241,4 +289,136 @@ describe("gateway-adapter onboarding settings", () => { }) ); }); + + it("updates memory backup settings through the dedicated endpoint", async () => { + const fetchMock = vi.fn(async () => + new Response( + JSON.stringify({ + default_model: "openai/gpt-4o-mini", + approval_mode: "ask-on-write", + active_provider_profile: "openrouter", + default_provider_profile: "openrouter", + available_models: ["openai/gpt-4o-mini"], + provider_profiles: [], + memory_backup: { + repository_url: "https://github.com/BrainDriveAI/braindrive-memory.git", + frequency: "manual", + token_configured: true, + last_result: "never", + last_error: null, + }, + }), + { + status: 200, + headers: { "content-type": "application/json" }, + } + ) + ); + vi.stubGlobal("fetch", fetchMock); + + await updateMemoryBackupSettings({ + repository_url: "https://github.com/BrainDriveAI/braindrive-memory.git", + frequency: "manual", + git_token: "ghp_test", + }); + + expect(fetchMock).toHaveBeenCalledWith( + "/api/settings/memory-backup", + expect.objectContaining({ + method: "PUT", + headers: expect.objectContaining({ "Content-Type": "application/json" }), + }) + ); + }); + + it("triggers manual memory backup save", async () => { + const fetchMock = 
vi.fn(async () => + new Response( + JSON.stringify({ + result: { + attempted_at: "2026-04-07T12:00:00.000Z", + saved_at: "2026-04-07T12:00:01.000Z", + result: "success", + }, + settings: { + default_model: "openai/gpt-4o-mini", + approval_mode: "ask-on-write", + active_provider_profile: "openrouter", + default_provider_profile: "openrouter", + available_models: ["openai/gpt-4o-mini"], + provider_profiles: [], + memory_backup: { + repository_url: "https://github.com/BrainDriveAI/braindrive-memory.git", + frequency: "manual", + token_configured: true, + last_result: "success", + last_error: null, + last_save_at: "2026-04-07T12:00:01.000Z", + }, + }, + }), + { + status: 200, + headers: { "content-type": "application/json" }, + } + ) + ); + vi.stubGlobal("fetch", fetchMock); + + const payload = await runMemoryBackupNow(); + expect(payload.result.result).toBe("success"); + expect(fetchMock).toHaveBeenCalledWith( + "/api/settings/memory-backup/save", + expect.objectContaining({ + method: "POST", + headers: expect.objectContaining({ "Content-Type": "application/json" }), + }) + ); + }); + + it("triggers memory backup restore", async () => { + const fetchMock = vi.fn(async () => + new Response( + JSON.stringify({ + result: { + attempted_at: "2026-04-07T12:10:00.000Z", + restored_at: "2026-04-07T12:10:03.000Z", + commit: "abc123def456", + source_branch: "braindrive-memory-backup", + warnings: [], + }, + settings: { + default_model: "openai/gpt-4o-mini", + approval_mode: "ask-on-write", + active_provider_profile: "openrouter", + default_provider_profile: "openrouter", + available_models: ["openai/gpt-4o-mini"], + provider_profiles: [], + memory_backup: { + repository_url: "https://github.com/BrainDriveAI/braindrive-memory.git", + frequency: "manual", + token_configured: true, + last_result: "success", + last_error: null, + }, + }, + }), + { + status: 200, + headers: { "content-type": "application/json" }, + } + ) + ); + vi.stubGlobal("fetch", fetchMock); + + const 
payload = await restoreMemoryBackup({ target_commit: "abc123def456" }); + expect(payload.result.commit).toBe("abc123def456"); + expect(fetchMock).toHaveBeenCalledWith( + "/api/settings/memory-backup/restore", + expect.objectContaining({ + method: "POST", + headers: expect.objectContaining({ "Content-Type": "application/json" }), + }) + ); + }); }); diff --git a/builds/typescript/client_web/src/api/gateway-adapter.ts b/builds/typescript/client_web/src/api/gateway-adapter.ts index 46cbbfd..a188438 100644 --- a/builds/typescript/client_web/src/api/gateway-adapter.ts +++ b/builds/typescript/client_web/src/api/gateway-adapter.ts @@ -5,15 +5,20 @@ import { parseSSE } from "./sse-parser"; import { GatewayError, GatewayNotFoundError, - type ApprovalDecision, - type ChatEvent, - type Conversation, - type ConversationDetail, - type GatewayCredentialUpdateRequest, - type GatewayCredentialUpdateResponse, - type GatewayMigrationImportResult, - type GatewayModelCatalog, - type GatewayOnboardingStatus, + type ApprovalDecision, + type ChatEvent, + type Conversation, + type ConversationDetail, + type ContextWindowWarning, + type GatewayCredentialUpdateRequest, + type GatewayCredentialUpdateResponse, + type GatewayMemoryBackupRestoreRequest, + type GatewayMemoryBackupRestoreResponse, + type GatewayMemoryBackupRunResponse, + type GatewayMemoryBackupSettingsUpdateRequest, + type GatewayMigrationImportResult, + type GatewayModelCatalog, + type GatewayOnboardingStatus, type GatewaySkillBinding, type GatewaySkillSummary, type GatewaySettings, @@ -32,10 +37,11 @@ type ConversationListResponse = { offset: number; }; -type SendMessageOptions = { - signal?: AbortSignal; - metadata?: Record; -}; +type SendMessageOptions = { + signal?: AbortSignal; + metadata?: Record; + onContextWarning?: (warning: ContextWindowWarning) => void; +}; type ErrorPayload = { code?: string; @@ -184,7 +190,7 @@ function toChatEvent(eventName: string, data: string): ChatEvent { return normalized; } -function 
normalizeChatEventPayload(eventName: string, parsed: unknown): unknown { +function normalizeChatEventPayload(eventName: string, parsed: unknown): unknown { if (!isRecord(parsed)) { return parsed; } @@ -212,8 +218,34 @@ function normalizeChatEventPayload(eventName: string, parsed: unknown): unknown }; } - return withType; -} + return withType; +} + +function parseContextWindowWarning(headers: Headers): ContextWindowWarning | null { + if (headers.get("x-context-window-warning") !== "1") { + return null; + } + + const estimatedTokens = Number.parseInt(headers.get("x-context-window-estimated-tokens") ?? "", 10); + const budgetTokens = Number.parseInt(headers.get("x-context-window-budget-tokens") ?? "", 10); + const ratio = Number.parseFloat(headers.get("x-context-window-ratio") ?? ""); + const threshold = Number.parseFloat(headers.get("x-context-window-threshold") ?? ""); + const managed = headers.get("x-context-window-managed") === "1"; + const message = headers.get("x-context-window-message") ?? "This session is getting long."; + + if (!Number.isFinite(estimatedTokens) || !Number.isFinite(budgetTokens) || !Number.isFinite(ratio)) { + return null; + } + + return { + estimated_tokens: estimatedTokens, + budget_tokens: budgetTokens, + ratio, + threshold: Number.isFinite(threshold) ? 
threshold : 0.8,
+    managed,
+    message,
+  };
+}

 export async function* sendMessage(
   conversationId: string | null,
@@ -237,14 +269,19 @@
     signal: options.signal
   });

-  if (!response.ok) {
-    throw await toGatewayError(response);
-  }
-
-  for await (const event of parseSSE(response)) {
-    yield toChatEvent(event.event, event.data);
-  }
-}
+  if (!response.ok) {
+    throw await toGatewayError(response);
+  }
+
+  const contextWarning = parseContextWindowWarning(response.headers);
+  if (contextWarning) {
+    options.onContextWarning?.(contextWarning);
+  }
+
+  for await (const event of parseSSE(response)) {
+    yield toChatEvent(event.event, event.data);
+  }
+}

 export async function listConversations(): Promise<ConversationListResponse> {
   const response = await authenticatedFetch(`${GATEWAY_BASE_URL}/conversations`, {
@@ -526,9 +563,9 @@
   return (await response.json()) as GatewaySettings;
 }

-export async function updateSettings(
-  patch: Partial> & {
-    provider_base_url?: { provider_profile: string; base_url: string };
+export async function updateSettings(
+  patch: Partial> & {
+    provider_base_url?: { provider_profile: string; base_url: string };
   }
 ): Promise<GatewaySettings> {
   const response = await authenticatedFetch(`${GATEWAY_BASE_URL}/settings`, {
@@ -540,9 +577,55 @@
   if (!response.ok) {
     throw await toGatewayError(response);
   }
-
-  return (await response.json()) as GatewaySettings;
-}
+
+  return (await response.json()) as GatewaySettings;
+}
+
+export async function updateMemoryBackupSettings(
+  payload: GatewayMemoryBackupSettingsUpdateRequest
+): Promise<GatewaySettings> {
+  const response = await authenticatedFetch(`${GATEWAY_BASE_URL}/settings/memory-backup`, {
+    method: "PUT",
+    headers: withLocalOwnerHeaders({ "Content-Type": "application/json" }),
+    body: JSON.stringify(payload),
+  });
+
+  if (!response.ok) {
+    throw await toGatewayError(response);
+  }
+
+  return (await response.json()) as GatewaySettings;
+}
+
+export async function runMemoryBackupNow(): Promise<GatewayMemoryBackupRunResponse> {
+  const response = await authenticatedFetch(`${GATEWAY_BASE_URL}/settings/memory-backup/save`, {
+    method: "POST",
+    headers: withLocalOwnerHeaders({ "Content-Type": "application/json" }),
+    body: JSON.stringify({}),
+  });
+
+  if (!response.ok) {
+    throw await toGatewayError(response);
+  }
+
+  return (await response.json()) as GatewayMemoryBackupRunResponse;
+}
+
+export async function restoreMemoryBackup(
+  payload: GatewayMemoryBackupRestoreRequest = {}
+): Promise<GatewayMemoryBackupRestoreResponse> {
+  const response = await authenticatedFetch(`${GATEWAY_BASE_URL}/settings/memory-backup/restore`, {
+    method: "POST",
+    headers: withLocalOwnerHeaders({ "Content-Type": "application/json" }),
+    body: JSON.stringify(payload),
+  });
+
+  if (!response.ok) {
+    throw await toGatewayError(response);
+  }
+
+  return (await response.json()) as GatewayMemoryBackupRestoreResponse;
+}

 export async function getOwnerProfile(): Promise {
   const response = await authenticatedFetch(`${GATEWAY_BASE_URL}/profile`, {
diff --git a/builds/typescript/client_web/src/api/types.ts b/builds/typescript/client_web/src/api/types.ts
index 985362f..47eae1c 100644
--- a/builds/typescript/client_web/src/api/types.ts
+++ b/builds/typescript/client_web/src/api/types.ts
@@ -94,7 +94,17 @@ export type ChatEvent =
   | ChatErrorEvent
   | DoneEvent;

+export type ContextWindowWarning = {
+  estimated_tokens: number;
+  budget_tokens: number;
+  ratio: number;
+  threshold: number;
+  managed: boolean;
+  message: string;
+};
+
 export type ApprovalDecision = "approved" | "denied";
+export type ApprovalMode = "ask-on-write" | "auto-approve";

 export type PendingApproval = {
   requestId: string;
@@ -131,13 +141,62 @@ export type GatewayProviderProfile = {
   credential_ref: string | null;
 };

+export type GatewayMemoryBackupFrequency = "manual" | "after_changes" | "hourly" | "daily";
+
+export type GatewayMemoryBackupSettings = {
+  repository_url: string;
+  frequency: GatewayMemoryBackupFrequency;
+  
token_configured: boolean; + last_save_at?: string; + last_attempt_at?: string; + last_result: "never" | "success" | "failed"; + last_error: string | null; +}; + +export type GatewayMemoryBackupSettingsUpdateRequest = { + repository_url: string; + frequency: GatewayMemoryBackupFrequency; + git_token?: string; + token_secret_ref?: string; +}; + +export type GatewayMemoryBackupRunResult = { + attempted_at: string; + saved_at?: string; + result: "success" | "failed" | "noop"; + message?: string; +}; + +export type GatewayMemoryBackupRestoreRequest = { + target_commit?: string; +}; + +export type GatewayMemoryBackupRestoreResult = { + attempted_at: string; + restored_at: string; + commit: string; + source_branch: string; + warnings: string[]; +}; + +export type GatewayMemoryBackupRunResponse = { + result: GatewayMemoryBackupRunResult; + settings: GatewaySettings; +}; + +export type GatewayMemoryBackupRestoreResponse = { + result: GatewayMemoryBackupRestoreResult; + settings: GatewaySettings; +}; + export type GatewaySettings = { default_model: string; - approval_mode: "ask-on-write"; + approval_mode: ApprovalMode; active_provider_profile: string | null; default_provider_profile: string | null; available_models: string[]; provider_profiles: GatewayProviderProfile[]; + memory_backup: GatewayMemoryBackupSettings | null; }; export type GatewayOnboardingProvider = { diff --git a/builds/typescript/client_web/src/api/useGatewayChat.test.tsx b/builds/typescript/client_web/src/api/useGatewayChat.test.tsx index 49081af..a97adfe 100644 --- a/builds/typescript/client_web/src/api/useGatewayChat.test.tsx +++ b/builds/typescript/client_web/src/api/useGatewayChat.test.tsx @@ -7,7 +7,18 @@ const sendMessageMock = vi.fn< ( conversationId: string | null, content: string, - options?: { signal?: AbortSignal; metadata?: Record } + options?: { + signal?: AbortSignal; + metadata?: Record; + onContextWarning?: (warning: { + estimated_tokens: number; + budget_tokens: number; + ratio: number; + 
threshold: number; + managed: boolean; + message: string; + }) => void; + } ) => AsyncIterable >(); @@ -41,7 +52,18 @@ vi.mock("./gateway-adapter", () => ({ sendMessage: ( conversationId: string | null, content: string, - options?: { signal?: AbortSignal; metadata?: Record } + options?: { + signal?: AbortSignal; + metadata?: Record; + onContextWarning?: (warning: { + estimated_tokens: number; + budget_tokens: number; + ratio: number; + threshold: number; + managed: boolean; + message: string; + }) => void; + } ) => sendMessageMock(conversationId, content, options), submitApprovalDecision: (requestId: string, decision: "approved" | "denied") => submitApprovalDecisionMock(requestId, decision), @@ -160,12 +182,76 @@ describe("useGatewayChat", () => { }); expect(result.current.error?.message).toBe("Provider unavailable"); + expect(result.current.errorCode).toBe("provider_error"); expect(result.current.messages).toEqual([ { id: "message-1", role: "user", content: "Hello" } ]); }); - it("tracks pending approvals and resolves decisions", async () => { + it("stores context overflow error code for overflow-specific UI actions", async () => { + sendMessageMock.mockImplementation(() => + streamEvents([ + { + type: "error", + code: "context_overflow", + message: "This session has gotten long. 
Start a new conversation to continue - all your work is saved.", + }, + ]) + ); + + const { result } = renderHook(() => useGatewayChat()); + + act(() => { + result.current.append("Hello"); + }); + + await waitFor(() => { + expect(result.current.isLoading).toBe(false); + }); + + expect(result.current.errorCode).toBe("context_overflow"); + }); + + it("stores context warning metadata passed from the gateway adapter", async () => { + sendMessageMock.mockImplementation((_conversationId, _content, options) => + (async function* contextWarningStream() { + options?.onContextWarning?.({ + estimated_tokens: 90_000, + budget_tokens: 100_000, + ratio: 0.9, + threshold: 0.8, + managed: true, + message: "This session is getting long. Earlier turns were compacted so you can keep chatting.", + }); + yield { + type: "done", + finish_reason: "stop", + conversation_id: "conv-warning", + } as ChatEvent; + })() + ); + + const { result } = renderHook(() => useGatewayChat()); + + act(() => { + result.current.append("Hello"); + }); + + await waitFor(() => { + expect(result.current.isLoading).toBe(false); + }); + + expect(result.current.contextWindowWarning).toEqual({ + estimated_tokens: 90_000, + budget_tokens: 100_000, + ratio: 0.9, + threshold: 0.8, + managed: true, + message: "This session is getting long. 
Earlier turns were compacted so you can keep chatting.", + }); + }); + + it("auto-approves approval requests during streaming", async () => { sendMessageMock.mockImplementation(() => streamEvents([ { @@ -174,6 +260,11 @@ describe("useGatewayChat", () => { tool_name: "memory_write", summary: "Write documents/plan.md", }, + { + type: "approval-result", + request_id: "apr-1", + decision: "approved", + }, { type: "done", finish_reason: "stop", @@ -196,19 +287,6 @@ describe("useGatewayChat", () => { expect(result.current.isLoading).toBe(false); }); - expect(result.current.pendingApprovals).toEqual([ - { - requestId: "apr-1", - toolName: "memory_write", - summary: "Write documents/plan.md", - createdAt: expect.any(String), - }, - ]); - - await act(async () => { - await result.current.resolveApproval("apr-1", "approved"); - }); - expect(submitApprovalDecisionMock).toHaveBeenCalledWith("apr-1", "approved"); expect(result.current.pendingApprovals).toEqual([]); expect(result.current.activity.some((item) => item.type === "approval-request")).toBe(true); diff --git a/builds/typescript/client_web/src/api/useGatewayChat.ts b/builds/typescript/client_web/src/api/useGatewayChat.ts index 0487a6e..e262bc1 100644 --- a/builds/typescript/client_web/src/api/useGatewayChat.ts +++ b/builds/typescript/client_web/src/api/useGatewayChat.ts @@ -11,7 +11,7 @@ import { updateConversationSkills, updateProjectSkills, } from "./gateway-adapter"; -import type { ActivityEvent, ApprovalDecision, PendingApproval } from "./types"; +import type { ActivityEvent, ApprovalDecision, ContextWindowWarning, PendingApproval } from "./types"; const EMPTY_MESSAGES: Message[] = []; const EMPTY_ACTIVITY: ActivityEvent[] = []; @@ -35,9 +35,11 @@ type ConversationState = { messages: Message[]; isLoading: boolean; error: Error | null; + errorCode: string | null; toolStatus: string | null; pendingApprovals: PendingApproval[]; activity: ActivityEvent[]; + contextWindowWarning: ContextWindowWarning | null; 
conversationId: string | null; abortController: AbortController | null; requestToken: number; @@ -75,13 +77,16 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { messages: Message[]; isLoading: boolean; error: Error | null; + errorCode: string | null; conversationId: string | null; toolStatus: string | null; pendingApprovals: PendingApproval[]; activity: ActivityEvent[]; + contextWindowWarning: ContextWindowWarning | null; append: (content: string, options?: { metadata?: Record }) => void; resolveApproval: (requestId: string, decision: ApprovalDecision) => Promise; stop: () => void; + startNewConversation: () => void; } { const externalConversationId = options.conversationId ?? null; const externalProjectId = options.projectId ?? null; @@ -97,6 +102,7 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { const [messages, setMessages] = useState(cached?.messages ?? externalMessages); const [isLoading, setIsLoading] = useState(cached?.isLoading ?? false); const [error, setError] = useState(cached?.error ?? null); + const [errorCode, setErrorCode] = useState(cached?.errorCode ?? null); const [conversationId, setConversationId] = useState( cached?.conversationId ?? externalConversationId ); @@ -105,6 +111,9 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { cached?.pendingApprovals ?? EMPTY_APPROVALS ); const [activity, setActivity] = useState(cached?.activity ?? EMPTY_ACTIVITY); + const [contextWindowWarning, setContextWindowWarning] = useState( + cached?.contextWindowWarning ?? null + ); const abortControllerRef = useRef(cached?.abortController ?? null); const requestTokenRef = useRef(cached?.requestToken ?? 
0); @@ -130,9 +139,11 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { setIsLoading(false); setError(null); + setErrorCode(null); setToolStatus(null); setPendingApprovals([]); setActivity([]); + setContextWindowWarning(null); } window.addEventListener(GATEWAY_CHAT_RUNTIME_RESET_EVENT, handleRuntimeReset); @@ -151,9 +162,11 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { messages: messages, isLoading, error, + errorCode, toolStatus, pendingApprovals, activity, + contextWindowWarning, conversationId, abortController: abortControllerRef.current, requestToken: requestTokenRef.current, @@ -170,9 +183,11 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { setMessages(restored.messages); setIsLoading(restored.isLoading); setError(restored.error); + setErrorCode(restored.errorCode); setToolStatus(restored.toolStatus); setPendingApprovals(restored.pendingApprovals); setActivity(restored.activity); + setContextWindowWarning(restored.contextWindowWarning); setConversationId(restored.conversationId); abortControllerRef.current = restored.abortController; requestTokenRef.current = restored.requestToken; @@ -184,12 +199,14 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { // Start empty when switching conversations; externalMessages can be stale // from the previous project's history that has not cleared yet. A later // effect repopulates the correct history once async fetch completes. 
- setMessages(EMPTY_MESSAGES); + setMessages(EMPTY_MESSAGES); setIsLoading(false); setError(null); + setErrorCode(null); setToolStatus(null); setPendingApprovals([]); setActivity([]); + setContextWindowWarning(null); setConversationId(externalConversationId); abortControllerRef.current = null; requestTokenRef.current = 0; @@ -236,6 +253,28 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { setIsLoading(false); } + function startNewConversation() { + requestTokenRef.current += 1; + abortControllerRef.current?.abort(); + abortControllerRef.current = null; + backgroundStreams.delete(cacheKeyRef.current); + backgroundStates.delete(cacheKeyRef.current); + + setMessages(EMPTY_MESSAGES); + setIsLoading(false); + setError(null); + setErrorCode(null); + setToolStatus(null); + setPendingApprovals([]); + setActivity([]); + setContextWindowWarning(null); + setConversationId(null); + + conversationIdRef.current = null; + messageCounterRef.current = 0; + activityCounterRef.current = 0; + } + async function resolveApproval(requestId: string, decision: ApprovalDecision): Promise { // Capture the tool name before removing the approval so we can show // a user-friendly status ("Writing to your library...") instead of "Approval approved" @@ -269,6 +308,7 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { }; setError(null); + setErrorCode(null); setIsLoading(true); setToolStatus("Running slash command..."); setMessages((current) => [...current, userMessage]); @@ -289,6 +329,7 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { setToolStatus(null); } catch (error) { setError(toError(error)); + setErrorCode(null); setToolStatus(null); } finally { setIsLoading(false); @@ -315,7 +356,9 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { const assistantMessageId = nextMessageId(); setError(null); + setErrorCode(null); setIsLoading(true); + setContextWindowWarning(null); setMessages((current) => 
[...current, userMessage]); // Track this as a background stream so state updates route correctly @@ -392,7 +435,17 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { try { for await (const event of sendMessage(conversationIdRef.current, trimmed, { signal: controller.signal, - metadata: options?.metadata + metadata: options?.metadata, + onContextWarning: (warning) => { + if (isActive()) { + setContextWindowWarning(warning); + return; + } + + updateBackground(() => ({ + contextWindowWarning: warning, + })); + }, })) { if (requestToken !== requestTokenRef.current && isActive()) { return; @@ -461,11 +514,13 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { case "done": if (isActive()) { setToolStatus(null); + setErrorCode(null); updateConversationId(event.conversation_id); } else { updateBackground(() => ({ toolStatus: null, isLoading: false, + errorCode: null, conversationId: event.conversation_id ?? null, })); } @@ -475,11 +530,13 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { if (isActive()) { setToolStatus(null); setError(new Error(event.message)); + setErrorCode(event.code); } else { updateBackground(() => ({ toolStatus: null, isLoading: false, error: new Error(event.message), + errorCode: event.code, })); } backgroundStreams.delete(activeCacheKey); @@ -512,13 +569,32 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { }); recordActivity({ type: "approval-request", - message: `Approval required for ${humanizeToolName(event.tool_name)}`, + message: `Auto-approving ${humanizeToolName(event.tool_name)}`, }); if (isActive()) { setToolStatus(event.tool_name); } else { updateBackground(() => ({ toolStatus: event.tool_name })); } + try { + await submitApprovalDecision(event.request_id, "approved"); + } catch (approvalError) { + controller.abort(); + if (isActive()) { + setToolStatus(null); + setError(toError(approvalError)); + setErrorCode(null); + } else { + 
updateBackground(() => ({ + toolStatus: null, + isLoading: false, + error: toError(approvalError), + errorCode: null, + })); + } + backgroundStreams.delete(activeCacheKey); + return; + } break; case "approval-result": removePendingApproval(event.request_id); @@ -540,6 +616,7 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { updateBackground(() => ({ isLoading: false, error: toError(caughtError), + errorCode: null, })); backgroundStreams.delete(activeCacheKey); return; @@ -549,6 +626,7 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { } setError(toError(caughtError)); + setErrorCode(null); } finally { backgroundStreams.delete(activeCacheKey); if (isActive() && requestToken === requestTokenRef.current) { @@ -566,13 +644,16 @@ export function useGatewayChat(options: UseGatewayChatOptions = {}): { messages, isLoading, error, + errorCode, conversationId, toolStatus, pendingApprovals, activity, + contextWindowWarning, append, resolveApproval, - stop + stop, + startNewConversation }; } diff --git a/builds/typescript/client_web/src/components/chat/ChatPanel.test.tsx b/builds/typescript/client_web/src/components/chat/ChatPanel.test.tsx index 69f9ace..df38cd6 100644 --- a/builds/typescript/client_web/src/components/chat/ChatPanel.test.tsx +++ b/builds/typescript/client_web/src/components/chat/ChatPanel.test.tsx @@ -13,19 +13,32 @@ vi.mock("@/api/useGatewayChat", () => ({ function makeHookState(overrides: Partial<{ messages: Message[]; isLoading: boolean; + error: Error | null; + errorCode: string | null; toolStatus: string | null; + contextWindowWarning: { + estimated_tokens: number; + budget_tokens: number; + ratio: number; + threshold: number; + managed: boolean; + message: string; + } | null; }> = {}) { return { messages: overrides.messages ?? [], isLoading: overrides.isLoading ?? false, - error: null, + error: overrides.error ?? null, + errorCode: overrides.errorCode ?? 
null, conversationId: null, toolStatus: overrides.toolStatus ?? null, pendingApprovals: [], activity: [], + contextWindowWarning: overrides.contextWindowWarning ?? null, append: vi.fn(), resolveApproval: vi.fn(async () => undefined), stop: vi.fn(), + startNewConversation: vi.fn(), }; } @@ -62,4 +75,41 @@ describe("ChatPanel typing indicator behavior", () => { expect(screen.queryByText("Thinking...")).not.toBeInTheDocument(); }); + + it("shows context warning banner when near limit", () => { + useGatewayChatMock.mockReturnValue( + makeHookState({ + contextWindowWarning: { + estimated_tokens: 80_000, + budget_tokens: 100_000, + ratio: 0.8, + threshold: 0.8, + managed: false, + message: "This session is getting long.", + }, + }) + ); + + render(); + + expect(screen.getByText("This session is getting long.")).toBeInTheDocument(); + expect(screen.getByRole("button", { name: "Start New Conversation" })).toBeInTheDocument(); + }); + + it("shows overflow-specific recovery actions", () => { + useGatewayChatMock.mockReturnValue( + makeHookState({ + messages: [{ id: "u-1", role: "user", content: "Continue from this prompt" }], + error: new Error("This session has gotten long."), + errorCode: "context_overflow", + }) + ); + + render(); + + expect(screen.getByRole("button", { name: "Start New Conversation" })).toBeInTheDocument(); + expect(screen.getByRole("button", { name: "Continue in New Conversation" })).toBeInTheDocument(); + expect(screen.queryByRole("button", { name: "Open Settings" })).not.toBeInTheDocument(); + expect(screen.queryByRole("button", { name: "Try Again" })).not.toBeInTheDocument(); + }); }); diff --git a/builds/typescript/client_web/src/components/chat/ChatPanel.tsx b/builds/typescript/client_web/src/components/chat/ChatPanel.tsx index 352600b..fc03bf4 100644 --- a/builds/typescript/client_web/src/components/chat/ChatPanel.tsx +++ b/builds/typescript/client_web/src/components/chat/ChatPanel.tsx @@ -1,5 +1,5 @@ import { useEffect, useRef, useState, type 
CSSProperties, type DragEvent, type ReactNode } from "react"; -import { CheckCircle2, FileText, LoaderCircle, ShieldAlert, XCircle } from "lucide-react"; +import { FileText } from "lucide-react"; import { createPortal } from "react-dom"; import { @@ -82,8 +82,6 @@ export default function ChatPanel({ const [historyMessages, setHistoryMessages] = useState([]); const [historyError, setHistoryError] = useState(null); const [dismissedError, setDismissedError] = useState(null); - const [approvalError, setApprovalError] = useState(null); - const [resolvingApprovalId, setResolvingApprovalId] = useState(null); const wasLoadingRef = useRef(false); const completedConversationIdRef = useRef(null); @@ -91,12 +89,14 @@ export default function ChatPanel({ messages, isLoading, error, + errorCode, conversationId, toolStatus, pendingApprovals, + contextWindowWarning, append, - resolveApproval, - stop + stop, + startNewConversation, } = useGatewayChat({ conversationId: activeConversationId, projectId: activeProjectId ?? null, @@ -152,11 +152,6 @@ export default function ChatPanel({ setDismissedError(null); }, [error, historyError]); - useEffect(() => { - setApprovalError(null); - setResolvingApprovalId(null); - }, [activeConversationId]); - useEffect(() => { if (wasLoadingRef.current && !isLoading && !error && conversationId) { if (completedConversationIdRef.current !== conversationId) { @@ -175,7 +170,7 @@ export default function ChatPanel({ const lastMessage = messages.length > 0 ? messages[messages.length - 1] : null; const hasStartedAssistantReply = isLoading && lastMessage?.role === "assistant"; const isWaitingForReply = isLoading && !hasStartedAssistantReply; - const showTypingFeedback = isLoading && pendingApprovals.length === 0; + const showTypingFeedback = isWaitingForReply && pendingApprovals.length === 0; const typingStatus = isLoading ? toolStatus ? formatToolStatus(toolStatus) @@ -184,15 +179,38 @@ export default function ChatPanel({ const chatError = historyError ?? 
error?.message ?? null; const visibleChatError = chatError && chatError !== dismissedError ? chatError : null; + const isContextOverflowError = errorCode === "context_overflow"; const isProviderError = visibleChatError != null && ( visibleChatError.includes("credentials") || visibleChatError.includes("could not be reached") || visibleChatError.includes("provider") || visibleChatError.includes("model") - ); + ) && !isContextOverflowError; + const lastUserMessage = [...messages].reverse().find((message) => message.role === "user") ?? null; const shouldShowEmptyState = isEmpty && messages.length === 0 && !isLoading; const shouldShowConversation = contentOverride === undefined; + function resetErrorPresentation() { + setHistoryError(null); + if (visibleChatError) { + setDismissedError(visibleChatError); + } + setConnectionStatus("connected"); + } + + function handleStartNewConversation() { + resetErrorPresentation(); + startNewConversation(); + } + + function handleContinueInNewConversation() { + const replayContent = lastUserMessage?.content?.trim(); + handleStartNewConversation(); + if (replayContent && replayContent.length > 0) { + append(replayContent, { metadata: messageMetadata }); + } + } + function handleDragOver(e: DragEvent) { e.preventDefault(); setIsDragOver(true); @@ -235,25 +253,6 @@ export default function ChatPanel({ setAttachment(attached); } - async function handleApprovalDecision( - requestId: string, - decision: "approved" | "denied" - ): Promise { - setApprovalError(null); - setResolvingApprovalId(requestId); - try { - await resolveApproval(requestId, decision); - } catch (decisionError) { - setApprovalError( - decisionError instanceof Error - ? decisionError.message - : "Failed to submit approval decision" - ); - } finally { - setResolvingApprovalId((current) => (current === requestId ? 
null : current)); - } - } - const composerProps = { onSend: (message: string, file?: File) => { if (file) { @@ -350,80 +349,44 @@ export default function ChatPanel({ isTyping={showTypingFeedback} typingStatus={typingStatus} > + {contextWindowWarning && !visibleChatError && ( +
+
+

+ {contextWindowWarning.message}{" "} + + ({Math.round(contextWindowWarning.ratio * 100)}% of the current prompt budget) +

+ +
+
+ )} {visibleChatError && ( { - setHistoryError(null); - setDismissedError(visibleChatError); - setConnectionStatus("connected"); - }} + onRetry={isContextOverflowError ? undefined : () => resetErrorPresentation()} + primaryActionLabel={isContextOverflowError ? "Start New Conversation" : undefined} + onPrimaryAction={isContextOverflowError ? handleStartNewConversation : undefined} + secondaryActionLabel={ + isContextOverflowError && lastUserMessage ? "Continue in New Conversation" : undefined + } + onSecondaryAction={ + isContextOverflowError && lastUserMessage ? handleContinueInNewConversation : undefined + } onDismiss={() => { setHistoryError(null); setDismissedError(visibleChatError); }} /> )} - {approvalError && ( - setApprovalError(null)} - onDismiss={() => setApprovalError(null)} - /> - )} - {pendingApprovals.length > 0 && ( -
-
- - Approval Required -
-
- {pendingApprovals.map((approval) => { - const isResolving = resolvingApprovalId === approval.requestId; - return ( -
-
- {approval.toolName.replace(/_/g, " ")} -
-

{approval.summary}

-
- - -
-
- ); - })} -
-
- )} )) : contentOverride} diff --git a/builds/typescript/client_web/src/components/chat/ErrorMessage.tsx b/builds/typescript/client_web/src/components/chat/ErrorMessage.tsx index 828939f..7552fab 100644 --- a/builds/typescript/client_web/src/components/chat/ErrorMessage.tsx +++ b/builds/typescript/client_web/src/components/chat/ErrorMessage.tsx @@ -5,13 +5,21 @@ type ErrorMessageProps = { onRetry?: () => void; onDismiss?: () => void; onOpenSettings?: () => void; + primaryActionLabel?: string; + onPrimaryAction?: () => void; + secondaryActionLabel?: string; + onSecondaryAction?: () => void; }; export default function ErrorMessage({ message, onRetry, onDismiss, - onOpenSettings + onOpenSettings, + primaryActionLabel, + onPrimaryAction, + secondaryActionLabel, + onSecondaryAction, }: ErrorMessageProps) { return (
@@ -24,6 +32,24 @@ export default function ErrorMessage({

{message}

+ {onPrimaryAction && primaryActionLabel && ( + + )} + {onSecondaryAction && secondaryActionLabel && ( + + )} {onOpenSettings && (
@@ -390,6 +429,9 @@ export default function SettingsModal({ onSaveSettings={saveSettings} onRefreshCatalog={() => setCatalogRefreshKey((k) => k + 1)} onSaveCredential={saveCredential} + onSaveMemoryBackupSettings={saveMemoryBackupSettings} + onRunMemoryBackupNow={triggerMemoryBackupNow} + onRestoreMemoryBackup={triggerMemoryBackupRestore} onDownloadExport={handleDownloadExport} isExporting={isExporting} exportError={exportError} @@ -474,6 +516,9 @@ function TabContent({ modelCatalogError, onSaveSettings, onSaveCredential, + onSaveMemoryBackupSettings, + onRunMemoryBackupNow, + onRestoreMemoryBackup, onDownloadExport, isExporting, exportError, @@ -498,6 +543,13 @@ function TabContent({ patch: SettingsPatch ) => Promise; onSaveCredential: (patch: GatewayCredentialUpdateRequest) => Promise; + onSaveMemoryBackupSettings: ( + payload: GatewayMemoryBackupSettingsUpdateRequest + ) => Promise; + onRunMemoryBackupNow: () => Promise; + onRestoreMemoryBackup: ( + payload?: GatewayMemoryBackupRestoreRequest + ) => Promise; onDownloadExport: () => Promise; isExporting: boolean; exportError: string | null; @@ -544,6 +596,18 @@ function TabContent({ onNavigateToTab={onNavigateToTab} /> ); + case "memory-backup": + return ( + + ); case "profile": return ; case "account": @@ -566,6 +630,289 @@ function TabContent({ } } +function MemoryBackupSection({ + mode, + settings, + isLoadingSettings, + settingsError, + onSaveMemoryBackupSettings, + onRunMemoryBackupNow, + onRestoreMemoryBackup, +}: { + mode: "local" | "managed"; + settings: GatewaySettings | null; + isLoadingSettings: boolean; + settingsError: string | null; + onSaveMemoryBackupSettings: ( + payload: GatewayMemoryBackupSettingsUpdateRequest + ) => Promise; + onRunMemoryBackupNow: () => Promise; + onRestoreMemoryBackup: ( + payload?: GatewayMemoryBackupRestoreRequest + ) => Promise; +}) { + const [repositoryUrl, setRepositoryUrl] = useState(""); + const [frequency, setFrequency] = useState("manual"); + const [token, setToken] 
= useState(""); + const [isSavingSettings, setIsSavingSettings] = useState(false); + const [settingsActionError, setSettingsActionError] = useState(null); + const [settingsActionSuccess, setSettingsActionSuccess] = useState(null); + const [isSavingNow, setIsSavingNow] = useState(false); + const [saveNowMessage, setSaveNowMessage] = useState(null); + const [saveNowError, setSaveNowError] = useState(null); + const [isRestoring, setIsRestoring] = useState(false); + const [restoreMessage, setRestoreMessage] = useState(null); + const [restoreError, setRestoreError] = useState(null); + + const backupSettings = settings?.memory_backup ?? null; + + useEffect(() => { + setRepositoryUrl(backupSettings?.repository_url ?? ""); + setFrequency(backupSettings?.frequency ?? "manual"); + }, [backupSettings?.repository_url, backupSettings?.frequency]); + + if (mode !== "local") { + return null; + } + + if (isLoadingSettings) { + return ( +
+

Memory Backup

+

Loading backup settings...

+
+ ); + } + + if (settingsError) { + return ( +
+

Memory Backup

+
+ {settingsError} +
+
+ ); + } + + const lastSave = backupSettings?.last_save_at + ? new Date(backupSettings.last_save_at).toLocaleString() + : "Never"; + const lastResult = backupSettings?.last_result ?? "never"; + const statusText = + lastResult === "success" + ? "Success" + : lastResult === "failed" + ? "Failed" + : "Never run"; + const frequencyOptions: Array<{ value: GatewayMemoryBackupFrequency; label: string }> = [ + { value: "manual", label: "Manual" }, + { value: "after_changes", label: "After changes" }, + { value: "hourly", label: "Every hour" }, + { value: "daily", label: "Every day" }, + ]; + + return ( +
+
+

Memory Backup

+

+ Configure a Git repository and access token to back up memory snapshots. +

+
+ +
+ + { + setRepositoryUrl(event.target.value); + setSettingsActionError(null); + setSettingsActionSuccess(null); + }} + placeholder="https://github.com/your-org/your-memory-backup.git" + className="h-10 w-full rounded-lg border border-bd-border bg-bd-bg-tertiary px-3 text-sm text-bd-text-primary outline-none focus:border-bd-amber" + /> +
+ +
+ + { + setToken(event.target.value); + setSettingsActionError(null); + setSettingsActionSuccess(null); + }} + placeholder={backupSettings?.token_configured ? "Leave blank to keep current token" : "Paste token"} + className="h-10 w-full rounded-lg border border-bd-border bg-bd-bg-tertiary px-3 text-sm text-bd-text-primary outline-none focus:border-bd-amber" + /> +
+ +
+ + +
+ +
+
+ Last save + {lastSave} +
+
+ Status + {statusText} +
+ {backupSettings?.last_error && ( +
{backupSettings.last_error}
+ )} +
+ +
+ + + +
+ + {settingsActionError && ( +
+ {settingsActionError} +
+ )} + {settingsActionSuccess && ( +
+ {settingsActionSuccess} +
+ )} + {saveNowError && ( +
+ {saveNowError} +
+ )} + {saveNowMessage && ( +
+ {saveNowMessage} +
+ )} + {restoreError && ( +
+ {restoreError} +
+ )} + {restoreMessage && ( +
+ {restoreMessage} +
+ )} +
+ ); +} + function BrainDriveDefaultSection({ settings, isLoadingSettings, @@ -648,7 +995,7 @@ function BrainDriveDefaultSection({ BrainDrive

- Currently powered by Claude Sonnet 4.6 + Currently powered by Claude Haiku 4.5

@@ -755,7 +1102,7 @@ function BrainDriveDefaultSection({ BrainDrive

- Currently powered by Claude Sonnet 4.6 + Currently powered by Claude Haiku 4.5

@@ -1021,7 +1368,7 @@ function ProviderSection({
{isBrainDriveModels - ? <>Currently powered by Claude Sonnet 4.6 + ? <>Currently powered by Claude Haiku 4.5 : isOllama ? <>Runs on your computer, free — e.stopPropagation()}>ollama.com : <>Cloud-based, requires API key — e.stopPropagation()}>openrouter.ai/keys} @@ -1046,12 +1393,33 @@ function ProviderSection({ <> {isOllama && (
- +
+ +
+ +
+ If BrainDrive is running in Docker and Ollama is installed on this computer, use + {" "} + + http://host.docker.internal:11434/v1 + +
+
+