🤖 feat: persist per-workspace model + thinking #1203
Merged
Persist per-workspace model and thinking/reasoning level on the backend so that opening the same workspace from another browser/device restores the same AI configuration.
Changes
- `WorkspaceAISettings` (`model`, `thinkingLevel`) added to the workspace config and metadata schemas.
- New `workspace.updateAISettings` endpoint to explicitly persist settings changes.
- `sendMessage` and `resumeStream` also persist "last used" settings for older/CLI clients.
- Workspace-scoped thinking key (`thinkingLevel:{workspaceId}`) with migration from legacy per-model keys.

Testing
- `ThinkingContext.test.tsx` for workspace-scoped thinking + migration.
- `WorkspaceContext.test.tsx` test for backend metadata seeding localStorage.
- `tests/ipc/workspaceAISettings.test.ts` verifying persistence + list/getInfo.

📋 Implementation Plan
Persist per-workspace model + reasoning (thinking) on the backend
Goal
Make model selection and thinking/reasoning level persist per workspace on the backend, so that any browser or device that opens the workspace restores the same AI configuration.
Non-goals (for this change): persisting provider options (e.g. truncation), global default model, or draft input.
Recommended approach (net +250–400 LoC product)
1) Define a single workspace AI settings shape (shared types)
Why: avoid ad-hoc keys and keep the IPC/API boundary strongly typed.
Add a reusable schema/type (common):
- `WorkspaceAISettings`:
  - `model: string` (canonical `provider:model`, not `mux-gateway:provider/model`)
  - `thinkingLevel: ThinkingLevel` (`off | low | medium | high | xhigh`)
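A minimal sketch of the shared type, assuming the schemas under `src/common/orpc/schemas` are zod-based (the `WorkspaceMetadataSchema.extend(...)` mention below suggests they are); the file location is illustrative:

```ts
// src/common/orpc/schemas/workspaceAISettings.ts (illustrative location)
import { z } from "zod";

// The thinking levels named in this plan.
export const ThinkingLevelSchema = z.enum(["off", "low", "medium", "high", "xhigh"]);
export type ThinkingLevel = z.infer<typeof ThinkingLevelSchema>;

export const WorkspaceAISettingsSchema = z.object({
  // Canonical "provider:model" (e.g. "openai:gpt-5.2"),
  // never the "mux-gateway:provider/model" gateway form.
  model: z.string(),
  thinkingLevel: ThinkingLevelSchema,
});
export type WorkspaceAISettings = z.infer<typeof WorkspaceAISettingsSchema>;
```

Config and metadata schemas can then attach the field as `aiSettings: WorkspaceAISettingsSchema.optional()`.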
Extend persisted config shape:

- `WorkspaceConfigSchema` (in `src/common/orpc/schemas/project.ts`): add optional `aiSettings?: WorkspaceAISettings`
Extend workspace metadata returned to clients:

- `WorkspaceMetadataSchema` (in `src/common/orpc/schemas/workspace.ts`): add optional `aiSettings?: WorkspaceAISettings`
- `FrontendWorkspaceMetadataSchema` picks the field up via `WorkspaceMetadataSchema.extend(...)`

2) Backend: persist + serve workspace AI settings
Persistence location:
- `~/.mux/config.json`, under each workspace entry (alongside `runtimeConfig`, `mcp`, etc.)

2.1 Add an API to update settings explicitly
Add a new ORPC endpoint:
- `workspace.updateAISettings` (name bikesheddable)
  - Input: `{ workspaceId: string, aiSettings: WorkspaceAISettings }`
  - Output: `Result<void, string>`
Node implementation (`WorkspaceService`), sketched below:

- Look up the workspace via `config.findWorkspace(workspaceId)`.
- Use `config.editConfig(...)` to locate the workspace entry and set `workspaceEntry.aiSettings = normalizedSettings`.
- Normalize: `model = normalizeGatewayModel(model)` (from `src/common/utils/ai/models.ts`)
- Clamp: `thinkingLevel = enforceThinkingPolicy(model, thinkingLevel)` (single source of truth)
- Recompute `config.getAllWorkspaceMetadata()` and emit an `onMetadata` update only if the value changed (avoid spam).
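A sketch of that flow. The `ConfigLike` surface, the `editConfig` mutator style, and the `Result` shape are assumptions for illustration; `normalizeGatewayModel` and `enforceThinkingPolicy` are the helpers named in this plan (the latter's import path is a guess):

```ts
import { normalizeGatewayModel } from "../../common/utils/ai/models";
import { enforceThinkingPolicy } from "../../common/utils/ai/thinking"; // path assumed
import type { WorkspaceAISettings } from "../../common/orpc/schemas/workspaceAISettings";

type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

interface WorkspaceEntry {
  aiSettings?: WorkspaceAISettings;
}

// Minimal config-service surface this sketch relies on (assumed shape).
interface ConfigLike {
  findWorkspace(workspaceId: string): WorkspaceEntry | undefined;
  editConfig(workspaceId: string, mutate: (entry: WorkspaceEntry) => void): Promise<void>;
}

export async function updateAISettings(
  config: ConfigLike,
  emitMetadataChanged: () => void, // stand-in for the onMetadata emit
  workspaceId: string,
  aiSettings: WorkspaceAISettings
): Promise<Result<void, string>> {
  const entry = config.findWorkspace(workspaceId);
  if (!entry) return { ok: false, error: `unknown workspace: ${workspaceId}` };

  // Normalize before persisting so config.json only ever holds canonical values.
  const model = normalizeGatewayModel(aiSettings.model);
  const thinkingLevel = enforceThinkingPolicy(model, aiSettings.thinkingLevel);

  // No-op writes should not emit metadata updates (avoids spam).
  const prev = entry.aiSettings;
  if (prev?.model === model && prev?.thinkingLevel === thinkingLevel) {
    return { ok: true, value: undefined };
  }

  await config.editConfig(workspaceId, (e) => {
    e.aiSettings = { model, thinkingLevel };
  });
  emitMetadataChanged();
  return { ok: true, value: undefined };
}
```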
2.2 Also persist "last used" settings on message send (safety net)

Even if a client forgets to call `updateAISettings` (CLI/extension/old client), the backend should learn the last used values. In `WorkspaceService.sendMessage(...)` and `resumeStream(...)`:

- Read `options.model` + `options.thinkingLevel` and write them to `workspaceEntry.aiSettings` (same normalization as above).
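A small sketch of the safety net, wrapped as a standalone helper so the send path cannot fail on it; `SendOptionsLike` is a stand-in for the real options type:

```ts
import type { WorkspaceAISettings, ThinkingLevel } from "../../common/orpc/schemas/workspaceAISettings";

// Stand-in for the real send options; only the two fields used here matter.
interface SendOptionsLike {
  model?: string;
  thinkingLevel?: ThinkingLevel;
}

// Called from sendMessage/resumeStream. Best effort by design: a failed
// settings write must never fail the message send itself.
export function persistLastUsedSettings(
  persist: (settings: WorkspaceAISettings) => Promise<unknown>,
  options: SendOptionsLike
): void {
  if (options.model === undefined || options.thinkingLevel === undefined) return;
  void persist({ model: options.model, thinkingLevel: options.thinkingLevel }).catch(() => {
    // Swallow: the persisted copy converges on the next successful write.
  });
}
```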
3) Frontend: treat backend as source of truth, but keep localStorage as a fast cache

3.1 Change thinking persistence to be workspace-scoped
Add a new storage helper:
- `getThinkingLevelKey(scopeId: string): string` → `thinkingLevel:${scopeId}`
- Extend `PERSISTENT_WORKSPACE_KEY_FUNCTIONS` to include it (so fork copies it, delete removes it).
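A sketch of the helper and its registration, assuming `PERSISTENT_WORKSPACE_KEY_FUNCTIONS` is a list of `workspaceId => key` functions consumed by the existing fork/delete housekeeping:

```ts
// Workspace-scoped thinking key, mirroring the existing model:{workspaceId} key.
export function getThinkingLevelKey(scopeId: string): string {
  return `thinkingLevel:${scopeId}`;
}

// Registering the key function lets fork copy the value and delete clean it
// up (array shape assumed).
export const PERSISTENT_WORKSPACE_KEY_FUNCTIONS: Array<(workspaceId: string) => string> = [
  // ...existing key functions (e.g. the model key helper),
  getThinkingLevelKey,
];
```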
Update `ThinkingProvider` to use scope-based keying (as `ModeProvider` does):

- Workspace: `thinkingLevel:{workspaceId}`
- Project: `thinkingLevel:__project__/{projectPath}`

Update the non-React send option reader:
- `getSendOptionsFromStorage(...)` should read thinking via `getThinkingLevelKey(scopeId)`.

Update UI copy to reflect workspace-scoped (rather than per-model) thinking.
3.2 Seed localStorage from backend workspace metadata
Where:
`WorkspaceContext.loadWorkspaceMetadata()` + the `workspace.onMetadata` subscription handler:

- If `metadata.aiSettings?.model` exists → write it to the localStorage key `model:{workspaceId}`.
- If `metadata.aiSettings?.thinkingLevel` exists → write it to `thinkingLevel:{workspaceId}`.
- Write through `updatePersistedState(...)` so existing `usePersistedState` consumers see the change.

This ensures a fresh browser/device starts from the backend values, and a stale localStorage cache on an existing device converges (see the sketch below).
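A possible shape for that seeding step, shared by both call sites; `updatePersistedState` is the existing helper named above, declared here with an assumed signature:

```ts
import type { WorkspaceAISettings } from "../../common/orpc/schemas/workspaceAISettings";
import { getThinkingLevelKey } from "../utils/storageKeys"; // sketch helper from 3.1

// Existing helper named in this plan; signature assumed here.
declare function updatePersistedState<T>(key: string, value: T): void;

// Shared by loadWorkspaceMetadata() and the workspace.onMetadata handler.
export function seedAISettingsCache(workspaceId: string, aiSettings?: WorkspaceAISettings): void {
  if (!aiSettings) return; // workspace predates backend settings; keep local values
  updatePersistedState(`model:${workspaceId}`, aiSettings.model);
  updatePersistedState(getThinkingLevelKey(workspaceId), aiSettings.thinkingLevel);
}
```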
3.3 Persist changes back to the backend when the user changes the UI
Model changes:
- In `ChatInput`'s `setPreferredModel(...)` (workspace variant only): call `api.workspace.updateAISettings({ workspaceId, aiSettings: { model, thinkingLevel: currentThinking } })`

Thinking changes:
- In `ThinkingProvider`'s `setThinkingLevel` (workspace variant only): call `api.workspace.updateAISettings(...)` with `{ model: currentModel, thinkingLevel: newLevel }`
Notes:

- Apply `enforceThinkingPolicy` client-side too (keeps UI/BE consistent), but the backend remains the final authority.
- If `api` is unavailable, keep the localStorage update (offline-friendly) and rely on sendMessage persistence later.
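Putting 3.3 together, a sketch of the workspace-variant `setThinkingLevel` as a plain function (the real code lives inside `ThinkingProvider` and would write through its `usePersistedState` setter); the `MuxApi` surface and the `enforceThinkingPolicy` import path are assumptions:

```ts
import { enforceThinkingPolicy } from "../../common/utils/ai/thinking"; // path assumed
import { getThinkingLevelKey } from "../utils/storageKeys";
import type { ThinkingLevel } from "../../common/orpc/schemas/workspaceAISettings";

interface MuxApi {
  workspace: {
    updateAISettings(input: {
      workspaceId: string;
      aiSettings: { model: string; thinkingLevel: ThinkingLevel };
    }): Promise<unknown>;
  };
}

export async function setThinkingLevel(
  api: MuxApi | undefined,
  workspaceId: string,
  currentModel: string,
  newLevel: ThinkingLevel
): Promise<void> {
  // Clamp client-side too, so the UI shows exactly what the backend will store.
  const clamped = enforceThinkingPolicy(currentModel, newLevel);
  localStorage.setItem(getThinkingLevelKey(workspaceId), clamped);

  // Offline-friendly: keep the local value and let the sendMessage safety
  // net (2.2) converge the backend copy later.
  if (!api) return;
  try {
    await api.workspace.updateAISettings({
      workspaceId,
      aiSettings: { model: currentModel, thinkingLevel: clamped },
    });
  } catch {
    // Ignore; localStorage is already updated.
  }
}
```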
3.4 Creation flow

- Extend `syncCreationPreferences` to include thinking.
- Call `workspace.updateAISettings(...)` right after creation (best effort) so the workspace is immediately portable even before the first message sends.
4) Migration + compatibility

- The new `aiSettings` fields are optional → old configs load fine; old mux versions should ignore unknown fields.
- When `thinkingLevel:{workspaceId}` is missing, migrate from the legacy per-model key:
  - Read the current model from `model:{workspaceId}` (or the default).
  - Read the legacy `thinkingLevel:model:{model}` value.
  - Seed `thinkingLevel:{workspaceId}` to that value.

Put this migration in a single place (e.g., the same "seed from metadata" helper) so it runs once; see the sketch below.
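A sketch of that single-place migration, assuming the legacy key format shown above:

```ts
import { getThinkingLevelKey } from "../utils/storageKeys";

// One-shot localStorage migration from the legacy per-model thinking key to
// the workspace-scoped key; key formats follow the bullets above.
export function migrateLegacyThinkingLevel(workspaceId: string, defaultModel: string): void {
  const workspaceKey = getThinkingLevelKey(workspaceId);
  if (localStorage.getItem(workspaceKey) !== null) return; // already migrated

  const model = localStorage.getItem(`model:${workspaceId}`) ?? defaultModel;
  const legacy = localStorage.getItem(`thinkingLevel:model:${model}`);
  if (legacy !== null) localStorage.setItem(workspaceKey, legacy);
}
```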
Validation / tests
- `src/browser/contexts/ThinkingContext.test.tsx` should instead verify thinking is stable across model changes (except clamping).
- New IPC test for `workspace.updateAISettings`: verify `workspace.list`/`getInfo` return the persisted `aiSettings`.
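A rough shape for the new IPC test, assuming a vitest-style runner; `createTestApi` and the `workspace.create` call are hypothetical stand-ins for the real test harness:

```ts
// tests/ipc/workspaceAISettings.test.ts -- shape sketch only.
import { describe, expect, it } from "vitest";
import { createTestApi } from "./harness"; // hypothetical fixture

describe("workspace.updateAISettings", () => {
  it("persists settings and surfaces them via getInfo", async () => {
    const api = await createTestApi();
    const { workspaceId } = await api.workspace.create({ projectPath: "/tmp/demo" });

    await api.workspace.updateAISettings({
      workspaceId,
      aiSettings: { model: "openai:gpt-5.2", thinkingLevel: "high" },
    });

    const info = await api.workspace.getInfo({ workspaceId });
    expect(info.aiSettings).toEqual({ model: "openai:gpt-5.2", thinkingLevel: "high" });
  });
});
```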
Alternatives considered

A) Persist only on sendMessage (no new endpoint) (net +120–200 LoC)
- Persist `aiSettings` from `sendMessage`/`resumeStream` options only.

Pros: less surface area.
Cons: doesn’t sync if user changes model/thinking but hasn’t sent yet; stale localStorage on an existing device may never converge.
B) Remove localStorage for model/thinking entirely (net +500–900 LoC)
- Replace `usePersistedState(getModelKey(...))` usage with a workspace settings store sourced from the backend.

Pros: true single source of truth.
Cons: much bigger refactor; riskier.
Generated with mux • Model: `openai:gpt-5.2` • Thinking: `high`