feat(config): runtime RPC URL configuration bootstrap (Issue #933) — PR #948
📝 Walkthrough

Adds runtime-configurable Core JSON‑RPC URL and API key: UI to test and persist a user-provided core RPC URL, client RPC to return safe config fields, persistence/validation utilities, coreRpcClient preference for stored URL with cache clearing, and threading of api_key through provider factories and config persistence.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant Welcome as Welcome UI
    participant Utils as Config Utils (normalize/isValid)
    participant Storage as localStorage
    participant RPC as Core RPC (openhuman.ping)
    participant Provider as Provider Factory
    User->>Welcome: Open Advanced Config
    Welcome->>Storage: getStoredRpcUrl()
    Storage-->>Welcome: stored or default URL
    User->>Welcome: Enter URL -> Test
    Welcome->>Utils: normalizeRpcUrl()
    Utils-->>Welcome: normalized URL
    Welcome->>Utils: isValidRpcUrl()
    Utils-->>Welcome: valid/invalid
    alt valid
        Welcome->>RPC: POST openhuman.ping (to endpoint)
        RPC-->>Welcome: 200-299 or 405 -> success
        Welcome->>Storage: storeRpcUrl()
        Storage-->>Storage: persist URL
        Welcome->>Provider: clearCoreRpcUrlCache() -> provider will use stored URL
        Provider-->>Welcome: provider ready (uses API key if present)
    else invalid
        Welcome-->>User: show error
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/openhuman/config/schemas.rs (1)
558-572: 🛠️ Refactor suggestion | 🟠 Major

Use the existing `deserialize_params` helper for a consistent error shape.

Every other `handle_*` in this file routes params through `deserialize_params::<T>()`, which (1) avoids cloning the `Map` via `json!(params)` and (2) produces the conventional `"invalid params: …"` prefix that tests pin (`deserialize_params_rejects_wrong_types_with_invalid_params_prefix`). The new inline path returns a bare `e.to_string()`, which silently breaks that contract for `update_model_settings` only.

```diff
 fn handle_update_model_settings(params: Map<String, Value>) -> ControllerFuture {
     Box::pin(async move {
-        let update: ModelSettingsUpdate = match serde_json::from_value(serde_json::json!(params)) {
-            Ok(u) => u,
-            Err(e) => return Err(e.to_string()),
-        };
+        let update = deserialize_params::<ModelSettingsUpdate>(params)?;
         let patch = config_rpc::ModelSettingsPatch {
             api_url: update.api_url,
             api_key: update.api_key,
             default_model: update.default_model,
             default_temperature: update.default_temperature,
         };
         to_json(config_rpc::load_and_apply_model_settings(patch).await?)
     })
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/config/schemas.rs` around lines 558 - 572, The handler handle_update_model_settings bypasses the project's deserialize_params helper and instead parses params via serde_json::json!(params), which clones the Map and returns raw error strings; replace that inline parsing with deserialize_params::<ModelSettingsUpdate>(params) to get the standard "invalid params: …" error shape and avoid cloning, then construct the config_rpc::ModelSettingsPatch from the deserialized ModelSettingsUpdate and call config_rpc::load_and_apply_model_settings(patch).await? followed by to_json(...) as before so the function's return type (ControllerFuture) and behavior remain unchanged.

app/src/services/coreRpcClient.ts (1)
92-105: ⚠️ Potential issue | 🟠 Major

Cached RPC URL can become stale after user reconfiguration.

On Line 92, `getCoreRpcUrl()` returns `resolvedCoreRpcUrl` without re-checking persisted config. After a user changes the RPC URL at runtime, calls may keep hitting the old endpoint until a full app reload.

💡 Proposed fix (cache invalidation + refresh path)

```diff
 let resolvedCoreRpcUrl: string | null = null;
 let resolvingCoreRpcUrl: Promise<string> | null = null;
+
+export function resetCoreRpcUrlCache(): void {
+  resolvedCoreRpcUrl = null;
+  resolvingCoreRpcUrl = null;
+}

 export async function getCoreRpcUrl(): Promise<string> {
+  const storedUrl = getStoredRpcUrl();
+  if (storedUrl && storedUrl !== resolvedCoreRpcUrl) {
+    resolvedCoreRpcUrl = storedUrl;
+    return storedUrl;
+  }
+
   if (resolvedCoreRpcUrl) {
     return resolvedCoreRpcUrl;
   }
```

Then call `resetCoreRpcUrlCache()` right after saving a new RPC URL in the settings flow.

Also applies to: 111-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/services/coreRpcClient.ts` around lines 92 - 105, getCoreRpcUrl currently returns the cached resolvedCoreRpcUrl without re-reading persisted config, so after a user updates the RPC URL the app keeps using the stale value; ensure cache invalidation by (A) adding a resetCoreRpcUrlCache() call immediately after any settings flow that saves a new RPC URL, and (B) optionally making getCoreRpcUrl re-check persisted value if resolvedCoreRpcUrl exists but differs from getStoredRpcUrl()/CORE_RPC_URL (use getStoredRpcUrl, CORE_RPC_URL, coreIsTauri, and resolvedCoreRpcUrl to compare); implement resetCoreRpcUrlCache() to clear resolvedCoreRpcUrl and call it from the settings save handler so subsequent getCoreRpcUrl calls reflect the updated URL.
🧹 Nitpick comments (8)
app/src/components/settings/panels/local-model/CustomModelSection.tsx (1)
38-54: Clear the success-banner `setTimeout` on unmount / re-save.

If the user navigates away (or saves again) within the 3s window, `setSaveSuccess(false)` runs against an unmounted component (or stomps a fresh save). Track the timer in a ref and clear it.

♻️ Proposed fix

```diff
-import { useEffect, useState } from 'react';
+import { useEffect, useRef, useState } from 'react';
 ...
   const [saveSuccess, setSaveSuccess] = useState(false);
+  const successTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null);
+
+  useEffect(() => {
+    return () => {
+      if (successTimerRef.current) clearTimeout(successTimerRef.current);
+    };
+  }, []);
 ...
   const handleSave = async () => {
     setIsSaving(true);
     setError('');
     setSaveSuccess(false);
+    if (successTimerRef.current) clearTimeout(successTimerRef.current);
     try {
       await openhumanUpdateModelSettings({
         api_url: apiUrl.trim() || null,
         api_key: apiKey.trim() || null,
       });
       setSaveSuccess(true);
-      setTimeout(() => setSaveSuccess(false), 3000);
+      successTimerRef.current = setTimeout(() => setSaveSuccess(false), 3000);
     } catch (err) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/settings/panels/local-model/CustomModelSection.tsx` around lines 38 - 54, The setTimeout in handleSave that calls setSaveSuccess(false) can fire after unmount or overwrite a subsequent save; track the timer id in a ref (e.g., saveSuccessTimerRef), clear any existing timer before creating a new setTimeout, store the new id in the ref, and add a useEffect cleanup that clears saveSuccessTimerRef.current on unmount; update handleSave (and the path where you setSaveSuccess(true)) to clear the ref first and use the ref when clearing the timer instead of leaving timeouts unmanaged (references: handleSave, openhumanUpdateModelSettings, setSaveSuccess).

src/openhuman/config/schema/local_ai.rs (1)
12-15: Document `api_key` as a secret (and clarify what `base_url` overrides).

Both fields are sparsely typed (`Option<String>`); a one-liner doc comment helps callers avoid logging the API key and clarifies that `base_url` is the OpenAI-compatible endpoint override (vs. the `provider` selector). Same secret-handling concern as `Config::api_key`.

```diff
-    #[serde(default)]
-    pub base_url: Option<String>,
-    #[serde(default)]
-    pub api_key: Option<String>,
+    /// Override base URL for an OpenAI-compatible local backend (e.g.
+    /// vLLM/LiteLLM). Ignored when the configured `provider` does not
+    /// understand a base URL.
+    #[serde(default)]
+    pub base_url: Option<String>,
+    /// Optional API key for the configured local-AI backend.
+    /// **Secret** — never log this value.
+    #[serde(default)]
+    pub api_key: Option<String>,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/config/schema/local_ai.rs` around lines 12 - 15, Add one-line Rust doc comments above the LocalAI config fields to document intent: mark api_key as a secret (do not log/print it and treat it like Config::api_key) and clarify that base_url is an OpenAI-compatible endpoint override (used instead of provider selection). Update the comments located on the struct fields named base_url and api_key in local_ai.rs to state those behaviors so callers know to avoid logging the API key and that base_url overrides the endpoint.

app/src/utils/__tests__/configPersistence.test.ts (1)
13-13: Duplicated magic-string storage key.

`STORAGE_KEY` here mirrors the private `RPC_URL_STORAGE_KEY` in `app/src/utils/configPersistence.ts`. If the production constant ever changes, these tests will silently keep passing while testing the wrong key. Consider exporting the key from the module (e.g., `RPC_URL_STORAGE_KEY` or via a `__test__` re-export) and importing it here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/utils/__tests__/configPersistence.test.ts` at line 13, The test defines a duplicated magic string STORAGE_KEY that mirrors the private RPC_URL_STORAGE_KEY in configPersistence.ts; instead export the canonical constant (e.g., RPC_URL_STORAGE_KEY or a __test__ re-export) from the module and import that constant into the test instead of defining STORAGE_KEY locally so the test always uses the real production key (update the test to import RPC_URL_STORAGE_KEY and remove the local STORAGE_KEY).

app/src/pages/Welcome.tsx (1)
80-106: No timeout on the test fetch — UI can hang indefinitely.

If the user enters an unreachable host that just stalls (DNS blackhole, dropped SYN), this `fetch` has no `AbortController` and will sit in "Testing" until the OS times out (often 60–120s). Wrap the request with an `AbortController` + `setTimeout` (e.g., 5s) and surface a clean "Connection timed out" error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/Welcome.tsx` around lines 80 - 106, The fetch in the connection test (the block that calls fetch(normalized, { method: 'POST', ... })) has no timeout; wrap it with an AbortController and a setTimeout (e.g., 5000ms) so the request is aborted if it takes too long, clear the timeout on success, and call controller.abort() in cleanup; catch an abort error and call setRpcUrlError('Connection timed out') (or similar) and ensure setIsTestingConnection(false) still runs in the finally block; update the logic around the existing response.ok / status checks and the existing handlers setConnectionTested and storeRpcUrl to run only if not aborted.app/src/utils/configPersistence.ts (2)
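For reference, a minimal TypeScript sketch of the suggested timeout wrapper. `withTimeout` and the injected `run` callback are hypothetical names, not code from the PR; `run` stands in for the `fetch` call so the abort logic is testable offline:

```typescript
// Sketch: run an abortable operation, aborting it after `ms` milliseconds.
// In Welcome.tsx the `run` callback would be something like:
//   (signal) => fetch(normalized, { method: 'POST', signal, ... })
async function withTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  ms: number,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await run(controller.signal);
  } finally {
    clearTimeout(timer); // never let the timer fire after completion
  }
}
```

The caller catches the abort rejection, surfaces "Connection timed out", and still resets the testing flag in its own `finally` block.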
76-88: `isValidRpcUrl` accepts URLs that the rest of the flow rejects.

`new URL('http://')` throws (invalid), but `new URL('http://x')` is accepted as valid even though it has no path and may still 404 against `/rpc`. More importantly, `isValidRpcUrl` doesn't enforce that the URL has a host, which means strings like `http:///rpc` (parsed by some engines) can slip through. Consider also requiring `parsed.hostname.length > 0`.

```diff
   try {
     const parsed = new URL(url);
-    // Must be http or https
-    return parsed.protocol === 'http:' || parsed.protocol === 'https:';
+    return (
+      (parsed.protocol === 'http:' || parsed.protocol === 'https:') &&
+      parsed.hostname.length > 0
+    );
   } catch {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/utils/configPersistence.ts` around lines 76 - 88, The isValidRpcUrl function currently only checks protocol and allows URLs without a host or path; update it to validate that the parsed URL has a non-empty hostname (parsed.hostname.length > 0) and also ensure it has a non-empty pathname (parsed.pathname !== '' or parsed.pathname !== '/') so endpoints like "http://x" or "http:///" are rejected; modify isValidRpcUrl accordingly to return true only when protocol is http/https and parsed.hostname and parsed.pathname meet these checks.
47-60: Consider Redux-persist for app state instead of ad-hoc localStorage.

The repository's coding guideline prefers Redux + persist for app state over ad-hoc `localStorage`. The RPC URL is technically bootstrap config (read before the store is ready), which is a defensible reason to stay on raw `localStorage` here, but consider mirroring it into the Redux slice once the store hydrates so consumers (e.g., a future settings page) read from a single source of truth.

As per coding guidelines: "Prefer Redux and persist where configured for app state over ad hoc `localStorage` for app state".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/utils/configPersistence.ts` around lines 47 - 60, The storeRpcUrl function currently writes directly to localStorage using RPC_URL_STORAGE_KEY; update the flow so storeRpcUrl still writes to localStorage for bootstrap use but after the Redux store hydrates also dispatches an action to mirror the value into the Redux settings slice (e.g., dispatch an action like setRpcUrl) so consumers read from store; ensure clearing logic (localStorage.removeItem) also dispatches the corresponding clear/reset action, and guard the dispatch path so it only runs if the store/dispatch is available (to preserve bootstrap behavior when store is not yet initialized).src/openhuman/routing/factory.rs (1)
43-43: Line length / readability nit.

This single-line `matches!` chain runs ~140 columns and mixes the env-var override boolean with the provider-kind list. Splitting it is easier to read and also matches existing rustfmt defaults in this repo.

```diff
-    let use_openai_compat_local = override_base.is_some() || matches!(provider_kind.as_str(), "llamacpp" | "llama-server" | "custom_openai");
+    let use_openai_compat_local = override_base.is_some()
+        || matches!(
+            provider_kind.as_str(),
+            "llamacpp" | "llama-server" | "custom_openai"
+        );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/routing/factory.rs` at line 43, The current single-line assignment to use_openai_compat_local mixes override_base and a long matches! check which hurts readability; refactor by first computing a boolean like has_override = override_base.is_some() (or similar) and a second boolean like is_local_provider = matches!(provider_kind.as_str(), "llamacpp" | "llama-server" | "custom_openai"), then set use_openai_compat_local = has_override || is_local_provider so the logic is identical but the expression is split for clarity and line-length compliance.

CHANGELOG.md (1)
12-12: Phrasing nit on the Unreleased entry.

"Bootstrap from config.toml RPC URL with runtime derivation" reads ambiguously — the actual feature is the opposite direction (allow runtime override of the RPC URL without editing `config.toml`). Consider rewording, e.g., "Runtime-configurable Core RPC URL (Issue `#933`) — set the URL from the Welcome screen without editing `config.toml`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` at line 12, Update the Unreleased changelog entry that currently reads "Bootstrap from config.toml RPC URL with runtime derivation" to a clearer phrase that reflects the actual feature; replace that line (the "Bootstrap from config.toml RPC URL with runtime derivation" text) with something like "Runtime-configurable Core RPC URL (Issue `#933`) — set the URL from the Welcome screen without editing `config.toml`" so it clearly states runtime override rather than implying bootstrapping from config.toml.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/components/settings/panels/local-model/CustomModelSection.tsx`:
- Around line 19-23: When loading the saved config in CustomModelSection (where
openhumanGetConfig() -> setApiUrl/setApiKey is called), validate and normalize
the API URL using isValidRpcUrl and normalizeRpcUrl from
app/src/utils/configPersistence.ts (same pattern as Welcome.tsx) before
persisting into state: import those helpers, call
normalizeRpcUrl(apiUrlCandidate) and only setApiUrl(normalized) if
isValidRpcUrl(normalized) passes, otherwise setApiUrl('') (or a safe fallback)
so malformed/whitespace/wrapped URLs are not persisted.
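As a reference for the point above, a minimal sketch of the load-time guard. `safeLoadApiUrl` is a hypothetical name, and `normalize`/`isValid` are injected stand-ins for the real helpers in `configPersistence.ts`:

```typescript
// Sketch: normalize a stored candidate and only accept it if it validates,
// otherwise fall back to an empty string so bad values never reach state.
function safeLoadApiUrl(
  candidate: unknown,
  normalize: (u: string) => string,
  isValid: (u: string) => boolean,
): string {
  const normalized = normalize(String(candidate ?? ''));
  return isValid(normalized) ? normalized : ''; // safe fallback on bad input
}
```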
In `@app/src/pages/Welcome.tsx`:
- Around line 93-99: The current reachability check incorrectly treats HTTP 400
as success; update the logic in the block that calls setConnectionTested and
storeRpcUrl so it only accepts true 200 or a JSON-RPC envelope: remove
response.status === 400 from the success conditions and instead, when
response.status !== 200 and !== 405, attempt to parse await response.json() and
verify the top-level jsonrpc === "2.0" before marking success; ensure you handle
JSON parse errors (fall back to setRpcUrlError with the status/statusText) and
only call setConnectionTested(true) and storeRpcUrl(normalized) when either
status === 200, status === 405, or the parsed body contains jsonrpc === "2.0".
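A sketch of the tightened success predicate described above. The names (`isRpcEndpointReachable`, the minimal `PingResponse` shape) are hypothetical stand-ins for the real fetch response handling:

```typescript
// Sketch: accept 200, 405, or a JSON-RPC 2.0 envelope, but not a bare 400.
interface PingResponse {
  status: number;
  json(): Promise<unknown>;
}

async function isRpcEndpointReachable(resp: PingResponse): Promise<boolean> {
  if (resp.status === 200 || resp.status === 405) return true;
  try {
    const body = (await resp.json()) as { jsonrpc?: string };
    // A JSON-RPC error envelope still proves a JSON-RPC endpoint is there.
    return body?.jsonrpc === '2.0';
  } catch {
    return false; // non-JSON body: treat as unreachable
  }
}
```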
- Around line 44-58: handleSaveRpcUrl currently validates and stores the RPC URL
but then calls setConnectionTested(true), which incorrectly displays a
"Connection successful" message without probing the endpoint; to fix, either (A)
introduce a new state like saveSuccess (replace usages of connectionTested in
handleSaveRpcUrl with setSaveSuccess(true) and update the UI message to "URL
saved." where saveSuccess is used, keeping connectionTested reserved for actual
connectivity tests), or (B) perform an actual connectivity check after
storeRpcUrl by fetching/pinging the normalized URL and only call
setConnectionTested(true) if the probe succeeds (and set an error/state on
failure); update references to connectionTested and the UI message accordingly
so save vs test use distinct states/messages.
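For option (A), a minimal sketch of keeping the two states distinct. The state shape and transition helpers are hypothetical names, not code from the PR:

```typescript
// Sketch: saving a URL and proving connectivity are independent flags,
// so the UI can show "URL saved." vs "Connection successful" separately.
interface RpcUrlUiState {
  saveSuccess: boolean;      // set by the save handler after persisting
  connectionTested: boolean; // set only after a real probe succeeds
}

function afterSave(state: RpcUrlUiState): RpcUrlUiState {
  return { ...state, saveSuccess: true }; // does not imply connectivity
}

function afterProbe(state: RpcUrlUiState, ok: boolean): RpcUrlUiState {
  return { ...state, connectionTested: ok };
}
```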
- Around line 37-106: The module-level cache resolvedCoreRpcUrl in coreRpcClient
causes saved RPC changes to be ignored; add a function resetCoreRpcUrlCache() in
coreRpcClient that clears the memoized resolvedCoreRpcUrl (so getCoreRpcUrl()
will re-read localStorage), then call resetCoreRpcUrlCache() from
handleSaveRpcUrl and after successful storeRpcUrl() in handleTestConnection so
the new URL takes effect immediately; reference coreRpcClient.getCoreRpcUrl /
resolvedCoreRpcUrl when implementing the reset and call
coreRpcClient.resetCoreRpcUrlCache() in handleSaveRpcUrl and
handleTestConnection after storing the URL.
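A minimal sketch of the memoized getter plus the reset hook described above. `readPersisted` is a hypothetical injected reader standing in for the localStorage lookup in `coreRpcClient.ts`:

```typescript
// Sketch: memoize the resolved URL, and expose a reset so a newly saved
// URL takes effect without a full app reload.
let resolvedCoreRpcUrl: string | null = null;

function resetCoreRpcUrlCache(): void {
  resolvedCoreRpcUrl = null; // next getCoreRpcUrl() re-reads persisted config
}

function getCoreRpcUrl(readPersisted: () => string): string {
  if (resolvedCoreRpcUrl === null) {
    resolvedCoreRpcUrl = readPersisted();
  }
  return resolvedCoreRpcUrl;
}
```

The save and test handlers would call `resetCoreRpcUrlCache()` right after persisting a new URL.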
In `@app/src/utils/__tests__/configPersistence.test.ts`:
- Line 1: The file contains an invalid Python/shell comment (`# Test
configuration for configPersistence module`) at the top which breaks TypeScript
parsing; edit the test file to replace that leading `#` comment with a
TypeScript/JS single-line comment (`// Test configuration for configPersistence
module`) and scan the rest of the file for any other `#`-prefixed comments to
convert them as well so the file parses and Vitest can load the tests.
In `@app/src/utils/configPersistence.ts`:
- Around line 13-14: DEFAULT_RPC_URL duplicates the canonical default from
app/src/utils/config.ts causing drift; update configPersistence to stop defining
DEFAULT_RPC_URL and instead import/reuse CORE_RPC_URL from config.ts, then make
getStoredRpcUrl() compare and return CORE_RPC_URL (or use CORE_RPC_URL as the
fallback) so the same canonical value is used everywhere; adjust any references
to DEFAULT_RPC_URL in this file to reference CORE_RPC_URL (symbols:
DEFAULT_RPC_URL, getStoredRpcUrl, CORE_RPC_URL).
- Around line 1-22: The module header overstates functionality and exports dead
code; remove the unused isTauriEnvironment() export and the local
DEFAULT_RPC_URL constant, import CORE_RPC_URL from the config module
(app/src/utils/config.ts) and re-export it (e.g., export const DEFAULT_RPC_URL =
CORE_RPC_URL), update the top docstring to state this file is localStorage-only
for web (no Tauri store branch), and keep RPC_URL_STORAGE_KEY and any
localStorage get/set usage unchanged so callers using RPC_URL_STORAGE_KEY
continue to work.
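A minimal sketch of the alias-and-fallback shape described above. The literal URL value, the `Map`-based storage stand-in, and the storage key are hypothetical placeholders:

```typescript
// Sketch: one canonical default (CORE_RPC_URL in config.ts), re-exported
// as DEFAULT_RPC_URL from configPersistence.ts so the two can never drift.
const CORE_RPC_URL = 'http://127.0.0.1:8080/rpc'; // placeholder value
const DEFAULT_RPC_URL = CORE_RPC_URL; // re-exported alias, same object

function getStoredRpcUrl(storage: Map<string, string>): string {
  return storage.get('rpc_url') ?? DEFAULT_RPC_URL; // canonical fallback
}
```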
In `@scratch.js`:
- Around line 1-2: Delete the temporary scratch file (scratch.js) from the PR
and remove its unused import of callCoreRpc to avoid committing
experimental/test code; ensure no references to scratch.js remain in the
repository or CI. Additionally, add/verify pre-merge checks to run ESLint and
tsc --noEmit in the app/ package so unused imports and TypeScript import errors
are caught before merging.
In `@src/openhuman/config/ops.rs`:
- Around line 228-234: The code assigns update.api_key directly to
config.api_key without trimming, so keys with leading/trailing whitespace may be
saved; change the block handling update.api_key to trim the value first (e.g.,
compute a trimmed string from update.api_key), check if the trimmed string is
empty and set config.api_key = None when empty, otherwise store
Some(trimmed_string); update any references to api_key in this block to use the
trimmed value (identifiers: update.api_key, config.api_key).
In `@src/openhuman/config/schema/types.rs`:
- Line 32: Add a doc comment above the api_key field (the field named api_key:
Option<String>) marking it as sensitive/secret and advising not to log it or
include it in snapshots returned to remote callers; use a triple-slash doc
comment (///) immediately above the api_key declaration, briefly stating it
contains a secret API key and must not be logged, printed, or exposed in remote
snapshots.
In `@src/openhuman/config/schemas.rs`:
- Around line 12-18: The ModelSettingsUpdate struct derives Debug but contains
an api_key which must not be logged; replace the auto-derived Debug with a
custom impl for ModelSettingsUpdate that redacts api_key (e.g., show
"<redacted>" or "REDACTED" instead of the actual string) or change api_key to a
Secret<String>-style newtype and implement/redact Debug there, and apply the
same treatment to config_rpc::ModelSettingsPatch if it also derives Debug so no
secret values are ever printed via {:?}.
In `@src/openhuman/providers/ops.rs`:
- Around line 151-168: The function create_backend_inference_provider silently
falls back to OpenHumanBackendProvider when only one of api_url or api_key is
provided and emits no log about which branch was chosen; change it to detect the
partial-config case and emit a tracing::warn! (or return an error) indicating
the misconfiguration, and add a stable tracing::debug! (or info!) with a
grep-friendly prefix like "[providers] " before returning either the
OpenAiCompatibleProvider or OpenHumanBackendProvider so callers see which branch
was selected; update references in this function
(create_backend_inference_provider) and the provider constructors
(OpenAiCompatibleProvider::new and OpenHumanBackendProvider::new) usage to avoid
logging secrets (log presence of key/url only as booleans or masked) and keep
behavior otherwise unchanged.
In `@src/openhuman/routing/factory.rs`:
- Around line 69-73: The conditional assigning auth_style is dead because both
branches return AuthStyle::Bearer; replace the if expression with a direct
assignment let auth_style = AuthStyle::Bearer; or, if the intended behavior is
to permit no-auth when local_ai_config.api_key is None, add an appropriate enum
variant (e.g., AuthStyle::None or AuthStyle::NoAuth) to AuthStyle and set
auth_style = if local_ai_config.api_key.is_some() { AuthStyle::Bearer } else {
AuthStyle::NoAuth }, updating any code that consumes auth_style (in the
routing/factory logic that constructs requests) to handle the new variant.
- Around line 75-80: The Ollama branch now inadvertently forwards
local_ai_config.api_key into OpenAiCompatibleProvider::new causing Authorization
headers to be sent for Ollama; update the factory so that when constructing the
provider for the Ollama path you only pass local_ai_config.api_key.as_deref() if
use_openai_compat_local is true, otherwise pass None (keep the existing behavior
for the OpenAI-compat/local path by continuing to forward the key); locate the
call to OpenAiCompatibleProvider::new where local_base/local_ai_config.api_key
are used and add the conditional gate around the api_key argument.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c4b0e3dc-17a0-481d-aff5-03053da3191d
📒 Files selected for processing (23)
CHANGELOG.md
app/src/components/settings/panels/LocalModelDebugPanel.tsx
app/src/components/settings/panels/local-model/CustomModelSection.tsx
app/src/pages/Welcome.tsx
app/src/services/coreRpcClient.ts
app/src/utils/__tests__/configPersistence.test.ts
app/src/utils/configPersistence.ts
app/src/utils/tauriCommands/config.ts
scratch.js
src/openhuman/agent/harness/session/builder.rs
src/openhuman/agent/triage/routing.rs
src/openhuman/channels/routes.rs
src/openhuman/channels/runtime/startup.rs
src/openhuman/config/ops.rs
src/openhuman/config/schema/local_ai.rs
src/openhuman/config/schema/types.rs
src/openhuman/config/schemas.rs
src/openhuman/learning/linkedin_enrichment.rs
src/openhuman/local_ai/ops.rs
src/openhuman/providers/ops.rs
src/openhuman/routing/factory.rs
src/openhuman/threads/ops.rs
src/openhuman/tools/impl/agent/delegate.rs
| const { result } = await openhumanGetConfig(); | ||
| if (mounted) { | ||
| setApiUrl(String(result.config.api_url || '')); | ||
| setApiKey(String(result.config.api_key || '')); | ||
| } |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm a URL validator/normalizer is exported from configPersistence and
# can be imported here.
rg -nP '\bexport\s+(const|function)\s+(validate|normalize|buildRpcEndpoint)\w*' app/src/utils/configPersistence.ts
Repository: tinyhumansai/openhuman
Length of output: 188
🏁 Script executed:
# First, let's see the implementations of the exported functions
head -130 app/src/utils/configPersistence.ts | tail -50
Repository: tinyhumansai/openhuman
Length of output: 1236
🏁 Script executed:
# Check if normalizeRpcUrl or buildRpcEndpoint are used in Welcome screen files
rg -n "(normalizeRpcUrl|buildRpcEndpoint)" app/src/components/welcome/ --type ts --type tsx
Repository: tinyhumansai/openhuman
Length of output: 94
🏁 Script executed:
# Examine the full context of CustomModelSection.tsx around lines 19-23
head -40 app/src/components/settings/panels/local-model/CustomModelSection.tsx
Repository: tinyhumansai/openhuman
Length of output: 1235
🏁 Script executed:
# Check what openhumanGetConfig returns and how apiUrl is handled
rg -n "openhumanGetConfig|setApiUrl|apiUrl" app/src/components/settings/panels/local-model/CustomModelSection.tsx -A 3 -B 1
Repository: tinyhumansai/openhuman
Length of output: 1270
🏁 Script executed:
# Check Welcome screen components for validation function usage
rg -n "isValidRpcUrl|normalizeRpcUrl|buildRpcEndpoint" app/src/components/welcome/ -A 2
Repository: tinyhumansai/openhuman
Length of output: 134
🏁 Script executed:
# Get a broader view of Welcome screen config handling
rg -n "configPersistence|import.*config" app/src/components/welcome/ --type ts
Repository: tinyhumansai/openhuman
Length of output: 134
🏁 Script executed:
# Find welcome-related files
fd -i "welcome" app/src/ --type f
Repository: tinyhumansai/openhuman
Length of output: 242
🏁 Script executed:
# Search for usage of validation functions across the codebase
rg -n "isValidRpcUrl|normalizeRpcUrl" app/src/ --type ts --type tsx
Repository: tinyhumansai/openhuman
Length of output: 94
🏁 Script executed:
# Check the full handleSave method to understand the complete context
sed -n '40,60p' app/src/components/settings/panels/local-model/CustomModelSection.tsx
Repository: tinyhumansai/openhuman
Length of output: 783
🏁 Script executed:
# Check the onboarding WelcomeStep for URL config handling
grep -n "isValidRpcUrl\|normalizeRpcUrl\|configPersistence" app/src/pages/onboarding/steps/WelcomeStep.tsx
Repository: tinyhumansai/openhuman
Length of output: 48
🏁 Script executed:
# Look at Welcome.tsx for URL handling
grep -n "isValidRpcUrl\|normalizeRpcUrl\|configPersistence\|api_url" app/src/pages/Welcome.tsx | head -20
Repository: tinyhumansai/openhuman
Length of output: 332
🏁 Script executed:
# Get full context of WelcomeStep to see how it handles RPC URL input
cat app/src/pages/onboarding/steps/WelcomeStep.tsx | head -100
Repository: tinyhumansai/openhuman
Length of output: 4200
🏁 Script executed:
# See the full context of Welcome.tsx to understand how it uses these validators
sed -n '40,80p' app/src/pages/Welcome.tsx
Repository: tinyhumansai/openhuman
Length of output: 1027
🏁 Script executed:
# Check if there's a form or input field for RPC URL in Welcome.tsx
rg -n "input|rpcUrl|api_url" app/src/pages/Welcome.tsx -B 2 -A 2 | head -40
Repository: tinyhumansai/openhuman
Length of output: 1785
Validate the API URL before persisting, matching the Welcome screen pattern.
apiUrl.trim() is forwarded without validation, so invalid URLs (a malformed protocol, embedded whitespace, wrapper characters, typos) will persist into config.toml and break provider calls. Import isValidRpcUrl and normalizeRpcUrl from app/src/utils/configPersistence.ts (Welcome.tsx already uses this pattern) and apply validation before saving.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/components/settings/panels/local-model/CustomModelSection.tsx` around
lines 19 - 23, When loading the saved config in CustomModelSection (where
openhumanGetConfig() -> setApiUrl/setApiKey is called), validate and normalize
the API URL using isValidRpcUrl and normalizeRpcUrl from
app/src/utils/configPersistence.ts (same pattern as Welcome.tsx) before
persisting into state: import those helpers, call
normalizeRpcUrl(apiUrlCandidate) and only setApiUrl(normalized) if
isValidRpcUrl(normalized) passes, otherwise setApiUrl('') (or a safe fallback)
so malformed/whitespace/wrapped URLs are not persisted.
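As a rough illustration of the validate-then-persist pattern the prompt describes, the flow might look like the sketch below. These minimal helpers are hypothetical stand-ins, not the actual implementations in app/src/utils/configPersistence.ts:

```typescript
// Hypothetical sketch: trim and canonicalize the candidate URL, then only
// keep it if it parses as an http(s) URL; otherwise fall back to "".
function normalizeRpcUrl(raw: string): string {
  // Trim whitespace and drop trailing slashes so stored URLs are canonical.
  return raw.trim().replace(/\/+$/, "");
}

function isValidRpcUrl(url: string): boolean {
  try {
    const parsed = new URL(url); // throws on malformed input
    return parsed.protocol === "http:" || parsed.protocol === "https:";
  } catch {
    return false;
  }
}

function safeApiUrl(candidate: string): string {
  const normalized = normalizeRpcUrl(candidate);
  // Never persist a malformed URL; an empty string is the safe fallback.
  return isValidRpcUrl(normalized) ? normalized : "";
}
```

Used at load time, this keeps a wrapped or whitespace-padded value out of state entirely rather than letting it round-trip into config.toml.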
| if (response.ok || response.status === 400 || response.status === 405) { | ||
| // 400/405 means the endpoint exists but method not found - still valid | ||
| setConnectionTested(true); | ||
| storeRpcUrl(normalized); | ||
| } else { | ||
| setRpcUrlError(`Connection failed: ${response.status} ${response.statusText}`); | ||
| } |
Treating HTTP 400 as a successful reachability signal is too permissive.
response.status === 400 is broad — many proxies, captive portals, or unrelated services return 400 for a POST with a JSON body, which would falsely confirm a non-OpenHuman endpoint as valid. A JSON-RPC server should respond 200 (with a JSON-RPC error envelope) to an unknown method like openhuman.ping; 405 is reasonable (method-not-allowed for GET-only servers). Consider dropping 400 from the success set, or instead parsing the response body and accepting it only when it looks like a JSON-RPC envelope (jsonrpc: "2.0").
♻️ Suggested fix
- if (response.ok || response.status === 400 || response.status === 405) {
- // 400/405 means the endpoint exists but method not found - still valid
+ if (response.ok || response.status === 405) {
+ // 405 means the endpoint exists but rejects POST shape - still likely valid
setConnectionTested(true);
storeRpcUrl(normalized);
} else {
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if (response.ok || response.status === 400 || response.status === 405) { | |
| // 400/405 means the endpoint exists but method not found - still valid | |
| setConnectionTested(true); | |
| storeRpcUrl(normalized); | |
| } else { | |
| setRpcUrlError(`Connection failed: ${response.status} ${response.statusText}`); | |
| } | |
| if (response.ok || response.status === 405) { | |
| // 405 means the endpoint exists but rejects POST shape - still likely valid | |
| setConnectionTested(true); | |
| storeRpcUrl(normalized); | |
| } else { | |
| setRpcUrlError(`Connection failed: ${response.status} ${response.statusText}`); | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/pages/Welcome.tsx` around lines 93 - 99, The current reachability
check incorrectly treats HTTP 400 as success; update the logic in the block that
calls setConnectionTested and storeRpcUrl so it only accepts true 200 or a
JSON-RPC envelope: remove response.status === 400 from the success conditions
and instead, when response.status !== 200 and !== 405, attempt to parse await
response.json() and verify the top-level jsonrpc === "2.0" before marking
success; ensure you handle JSON parse errors (fall back to setRpcUrlError with
the status/statusText) and only call setConnectionTested(true) and
storeRpcUrl(normalized) when either status === 200, status === 405, or the
parsed body contains jsonrpc === "2.0".
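The stricter acceptance rule described above, accept 2xx/405 and otherwise require a JSON-RPC 2.0 envelope, can be sketched as follows. This is a hypothetical helper for illustration, not code from the PR:

```typescript
// Accepts an endpoint as "reachable" only when the status is 2xx, 405, or
// the response body looks like a JSON-RPC 2.0 envelope (jsonrpc: "2.0").
function looksLikeJsonRpcEnvelope(body: unknown): boolean {
  return (
    typeof body === "object" &&
    body !== null &&
    (body as { jsonrpc?: unknown }).jsonrpc === "2.0"
  );
}

function isReachableRpcEndpoint(status: number, body: unknown): boolean {
  if (status >= 200 && status < 300) return true;
  if (status === 405) return true; // GET-only servers reject the POST shape
  // A bare 400 from a proxy or captive portal fails here, but a JSON-RPC
  // error envelope for an unknown method still passes.
  return looksLikeJsonRpcEnvelope(body);
}
```

This narrows the false-positive window: a proxy's generic 400 no longer confirms the endpoint, while a real JSON-RPC server answering `openhuman.ping` with an error envelope still does.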
| #[serde(skip)] | ||
| pub config_path: PathBuf, | ||
| pub api_url: Option<String>, | ||
| pub api_key: Option<String>, |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm whether OPENHUMAN_API_KEY env override (and any redaction logic
# for api_key in snapshots / logs) already exists alongside the existing
# OPENHUMAN_API_URL plumbing.
fd -e rs . src/openhuman/config | xargs rg -nP '\bOPENHUMAN_API_(URL|KEY)\b|\bapi_key\b' -C2
Repository: tinyhumansai/openhuman
Length of output: 4142
🏁 Script executed:
#!/bin/bash
# Search for config load.rs or similar and env var override handling
fd -e rs . src/openhuman/config | head -20
Repository: tinyhumansai/openhuman
Length of output: 839
🏁 Script executed:
#!/bin/bash
# Search for OPENHUMAN_ env vars more broadly (without word boundaries)
rg 'OPENHUMAN_' src/openhuman/config/ --type rs -C2
Repository: tinyhumansai/openhuman
Length of output: 93
🏁 Script executed:
#!/bin/bash
# Look for where OPENHUMAN_API_URL is being loaded/set
rg 'api_url|API_URL' src/openhuman/config/ --type rs -C2 | head -100
Repository: tinyhumansai/openhuman
Length of output: 93
🏁 Script executed:
#!/bin/bash
# Examine load.rs where config loading likely happens
cat -n src/openhuman/config/schema/load.rs
Repository: tinyhumansai/openhuman
Length of output: 50378
🏁 Script executed:
#!/bin/bash
# Search for env var handling in config files - broader search
rg 'env::var|std::env|OPENHUMAN|api_key|api_url' src/openhuman/config/
Repository: tinyhumansai/openhuman
Length of output: 26871
🏁 Script executed:
#!/bin/bash
# Specifically search for OPENHUMAN_API_URL (not just api_url field)
rg 'OPENHUMAN_API_URL|OPENHUMAN_API_KEY' src/
Repository: tinyhumansai/openhuman
Length of output: 48
🏁 Script executed:
#!/bin/bash
# Check the exact location where api_url field is in types.rs and verify no doc comments
cat -n src/openhuman/config/schema/types.rs | grep -A 5 -B 5 'pub api_url\|pub api_key'
Repository: tinyhumansai/openhuman
Length of output: 476
Add a doc comment to api_key documenting it as a secret field.
The api_key field lacks documentation. Add a comment marking it as sensitive and advising against logging or including it in snapshots returned to remote callers:
pub api_url: Option<String>,
+ /// Optional API key for the upstream backend / OpenAI-compatible provider.
+ /// **Secret** — never log this value or include it in snapshots returned to
+ /// remote callers.
pub api_key: Option<String>,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/config/schema/types.rs` at line 32, Add a doc comment above the
api_key field (the field named api_key: Option<String>) marking it as
sensitive/secret and advising not to log it or include it in snapshots returned
to remote callers; use a triple-slash doc comment (///) immediately above the
api_key declaration, briefly stating it contains a secret API key and must not
be logged, printed, or exposed in remote snapshots.
| pub fn create_backend_inference_provider( | ||
| api_url: Option<&str>, | ||
| api_key: Option<&str>, | ||
| options: &ProviderRuntimeOptions, | ||
| ) -> anyhow::Result<Box<dyn Provider>> { | ||
| Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new( | ||
| api_url, options, | ||
| ))) | ||
| if let (Some(url), Some(key)) = (api_url, api_key) { | ||
| Ok(Box::new(crate::openhuman::providers::compatible::OpenAiCompatibleProvider::new( | ||
| "custom_openai", | ||
| url, | ||
| Some(key), | ||
| crate::openhuman::providers::compatible::AuthStyle::Bearer, | ||
| ))) | ||
| } else { | ||
| Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new( | ||
| api_url, options, | ||
| ))) | ||
| } | ||
| } |
Silent ignore when only one of api_url / api_key is provided + missing branch logging.
Two related concerns:
- If a caller passes api_key=Some(...) but api_url=None, the key is silently dropped and the function returns the default backend provider. That's likely a misconfiguration the operator wants to know about. Consider warning/erroring instead of silently falling through.
- Per the repo's logging conventions, this branch (custom OpenAI-compatible vs. backend) should emit a tracing::debug! (or info!) with a stable prefix so it's grep-able.
♻️ Suggested fix
pub fn create_backend_inference_provider(
api_url: Option<&str>,
api_key: Option<&str>,
options: &ProviderRuntimeOptions,
) -> anyhow::Result<Box<dyn Provider>> {
- if let (Some(url), Some(key)) = (api_url, api_key) {
+ if api_key.is_some() && api_url.is_none() {
+ tracing::warn!(
+ "[providers] api_key configured without api_url; falling back to OpenHuman backend and ignoring key"
+ );
+ }
+ if let (Some(url), Some(key)) = (api_url, api_key) {
+ tracing::debug!(target: "providers", "[providers] using OpenAI-compatible custom backend");
Ok(Box::new(crate::openhuman::providers::compatible::OpenAiCompatibleProvider::new(
"custom_openai",
url,
Some(key),
crate::openhuman::providers::compatible::AuthStyle::Bearer,
)))
} else {
+ tracing::debug!(target: "providers", "[providers] using OpenHuman backend provider");
Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new(
api_url, options,
)))
}
}
As per coding guidelines: "Log entry/exit, branches, external calls, retries/timeouts, state transitions, errors with stable grep-friendly prefixes ([domain], [rpc], [ui-flow]) and correlation fields. Never log secrets or full PII."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| pub fn create_backend_inference_provider( | |
| api_url: Option<&str>, | |
| api_key: Option<&str>, | |
| options: &ProviderRuntimeOptions, | |
| ) -> anyhow::Result<Box<dyn Provider>> { | |
| Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new( | |
| api_url, options, | |
| ))) | |
| if let (Some(url), Some(key)) = (api_url, api_key) { | |
| Ok(Box::new(crate::openhuman::providers::compatible::OpenAiCompatibleProvider::new( | |
| "custom_openai", | |
| url, | |
| Some(key), | |
| crate::openhuman::providers::compatible::AuthStyle::Bearer, | |
| ))) | |
| } else { | |
| Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new( | |
| api_url, options, | |
| ))) | |
| } | |
| } | |
| pub fn create_backend_inference_provider( | |
| api_url: Option<&str>, | |
| api_key: Option<&str>, | |
| options: &ProviderRuntimeOptions, | |
| ) -> anyhow::Result<Box<dyn Provider>> { | |
| if api_key.is_some() && api_url.is_none() { | |
| tracing::warn!( | |
| "[providers] api_key configured without api_url; falling back to OpenHuman backend and ignoring key" | |
| ); | |
| } | |
| if let (Some(url), Some(key)) = (api_url, api_key) { | |
| tracing::debug!(target: "providers", "[providers] using OpenAI-compatible custom backend"); | |
| Ok(Box::new(crate::openhuman::providers::compatible::OpenAiCompatibleProvider::new( | |
| "custom_openai", | |
| url, | |
| Some(key), | |
| crate::openhuman::providers::compatible::AuthStyle::Bearer, | |
| ))) | |
| } else { | |
| tracing::debug!(target: "providers", "[providers] using OpenHuman backend provider"); | |
| Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new( | |
| api_url, options, | |
| ))) | |
| } | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/providers/ops.rs` around lines 151 - 168, The function
create_backend_inference_provider silently falls back to
OpenHumanBackendProvider when only one of api_url or api_key is provided and
emits no log about which branch was chosen; change it to detect the
partial-config case and emit a tracing::warn! (or return an error) indicating
the misconfiguration, and add a stable tracing::debug! (or info!) with a
grep-friendly prefix like "[providers] " before returning either the
OpenAiCompatibleProvider or OpenHumanBackendProvider so callers see which branch
was selected; update references in this function
(create_backend_inference_provider) and the provider constructors
(OpenAiCompatibleProvider::new and OpenHumanBackendProvider::new) usage to avoid
logging secrets (log presence of key/url only as booleans or masked) and keep
behavior otherwise unchanged.
| let auth_style = if local_ai_config.api_key.is_some() { | ||
| AuthStyle::Bearer | ||
| } else { | ||
| AuthStyle::Bearer | ||
| }; |
🛠️ Refactor suggestion | 🟠 Major
Dead conditional — both branches return AuthStyle::Bearer.
let auth_style = if local_ai_config.api_key.is_some() {
AuthStyle::Bearer
} else {
AuthStyle::Bearer
};
This is equivalent to `let auth_style = AuthStyle::Bearer;`. Either remove the conditional or, if the intent is to support a no-auth style when api_key is None, introduce that variant explicitly.
♻️ Suggested simplification
- let auth_style = if local_ai_config.api_key.is_some() {
- AuthStyle::Bearer
- } else {
- AuthStyle::Bearer
- };
-
let local: Box<dyn Provider> = Box::new(OpenAiCompatibleProvider::new(
provider_label,
&local_base,
local_ai_config.api_key.as_deref(), // local servers do not require authentication, but custom endpoints might
- auth_style,
+ AuthStyle::Bearer,
));🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/routing/factory.rs` around lines 69 - 73, The conditional
assigning auth_style is dead because both branches return AuthStyle::Bearer;
replace the if expression with a direct assignment let auth_style =
AuthStyle::Bearer; or, if the intended behavior is to permit no-auth when
local_ai_config.api_key is None, add an appropriate enum variant (e.g.,
AuthStyle::None or AuthStyle::NoAuth) to AuthStyle and set auth_style = if
local_ai_config.api_key.is_some() { AuthStyle::Bearer } else { AuthStyle::NoAuth
}, updating any code that consumes auth_style (in the routing/factory logic that
constructs requests) to handle the new variant.
| let local: Box<dyn Provider> = Box::new(OpenAiCompatibleProvider::new( | ||
| provider_label, | ||
| &local_base, | ||
| None, // local servers do not require authentication | ||
| AuthStyle::Bearer, | ||
| local_ai_config.api_key.as_deref(), // local servers do not require authentication, but custom endpoints might | ||
| auth_style, | ||
| )); |
Ollama branch now also forwards local_ai_config.api_key — verify intent.
The provider construction at lines 75–80 is shared between the OpenAI-compat branch and the Ollama branch. Previously the Ollama path always passed None; now it forwards local_ai_config.api_key.as_deref(), which means an Authorization: Bearer … header will be attached to Ollama requests when the user configured a key. If the key is intended only for custom_openai/llama-server, gate it on use_openai_compat_local.
♻️ Suggested gating
+ let local_api_key = if use_openai_compat_local {
+ local_ai_config.api_key.as_deref()
+ } else {
+ None
+ };
let local: Box<dyn Provider> = Box::new(OpenAiCompatibleProvider::new(
provider_label,
&local_base,
- local_ai_config.api_key.as_deref(),
+ local_api_key,
AuthStyle::Bearer,
));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/routing/factory.rs` around lines 75 - 80, The Ollama branch now
inadvertently forwards local_ai_config.api_key into
OpenAiCompatibleProvider::new causing Authorization headers to be sent for
Ollama; update the factory so that when constructing the provider for the Ollama
path you only pass local_ai_config.api_key.as_deref() if use_openai_compat_local
is true, otherwise pass None (keep the existing behavior for the
OpenAI-compat/local path by continuing to forward the key); locate the call to
OpenAiCompatibleProvider::new where local_base/local_ai_config.api_key are used
and add the conditional gate around the api_key argument.
… conflict resolution
Removed the step that uploads Rust debug symbols to Sentry from the build workflow.
- Fix typo: update.hotoken → update.hotkey in config/schemas.rs
- Remove unused vi import from configPersistence.test.ts
- Delete scratch.js (leftover debug file)
- Remove dead auth_style if/else branch in routing/factory.rs
- Add clearCoreRpcUrlCache() to coreRpcClient.ts and wire into Welcome.tsx

All checks pass: typecheck, lint, format, unit tests (985 passed), cargo check (tauri shell clean; core errors are pre-existing).
- configPersistence.ts: import CORE_RPC_URL from config.ts instead of duplicating the hardcoded default to prevent drift
- Welcome.tsx: rename connectionTested -> saveSuccess to distinguish save-only from tested-connection feedback; narrow HTTP 400 acceptance to 405 only (more precise JSON-RPC reachability signal)
- config/ops.rs: trim api_key before persisting so pasted keys with leading/trailing whitespace don't cause auth failures
- providers/ops.rs: log warning when api_key is provided without api_url so the silent drop is visible in diagnostics
♻️ Duplicate comments (1)
app/src/pages/Welcome.tsx (1)
71-105: ⚠️ Potential issue | 🟡 Minor: Use separate feedback for save vs. test. `handleTestConnection` still flips `saveSuccess`, so a failed retry can leave the stale "URL saved successfully." banner visible from the previous save. Split the save/test outcomes into distinct state, or clear the save banner before testing.
handleTestConnectionstill flipssaveSuccess, so a failed retry can leave the stale “URL saved successfully.” banner visible from the previous save. Split the save/test outcomes into distinct state, or clear the save banner before testing.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/Welcome.tsx` around lines 71 - 105, handleTestConnection currently sets setSaveSuccess(true) when the endpoint responds, which lets a previous successful save banner persist after a failed retry; change the behavior so testing and saving are distinct: either clear the saved-success banner at the start of testing (call setSaveSuccess(false) before network call) or introduce a separate test state (e.g., setTestSuccess) and only call setSaveSuccess(true) from the save flow (storeRpcUrl), while handleTestConnection only updates test-specific state (and still uses setRpcUrlError/setIsTestingConnection as now).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@app/src/pages/Welcome.tsx`:
- Around line 71-105: handleTestConnection currently sets setSaveSuccess(true)
when the endpoint responds, which lets a previous successful save banner persist
after a failed retry; change the behavior so testing and saving are distinct:
either clear the saved-success banner at the start of testing (call
setSaveSuccess(false) before network call) or introduce a separate test state
(e.g., setTestSuccess) and only call setSaveSuccess(true) from the save flow
(storeRpcUrl), while handleTestConnection only updates test-specific state (and
still uses setRpcUrlError/setIsTestingConnection as now).
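The split this prompt describes can be sketched as a tiny state machine. The names below (startTest, finishTest, finishSave) are invented for illustration; plain transition functions stand in for React setState calls, and this is not the actual Welcome.tsx implementation:

```typescript
// Test feedback and save feedback live in separate fields, so a failed
// test can never leave a stale "URL saved" banner visible.
type ConnectionFeedback = {
  testSuccess: boolean;
  saveSuccess: boolean;
  error: string | null;
};

// Clear all stale banners (including a previous save's) before a new test.
function startTest(prev: ConnectionFeedback): ConnectionFeedback {
  return { ...prev, testSuccess: false, saveSuccess: false, error: null };
}

// Only touch test-specific state when the test resolves.
function finishTest(
  prev: ConnectionFeedback,
  ok: boolean,
  detail: string
): ConnectionFeedback {
  return ok
    ? { ...prev, testSuccess: true, error: null }
    : { ...prev, testSuccess: false, error: detail };
}

// Only the explicit save flow may raise the "URL saved" banner.
function finishSave(prev: ConnectionFeedback): ConnectionFeedback {
  return { ...prev, saveSuccess: true };
}
```

With this shape, the UI can render the test result and the save banner independently, and a retry that fails simply shows the error without resurrecting the old success message.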
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7ece002a-3414-4986-b212-3be59e85ce75
📒 Files selected for processing (12)
- .github/workflows/build.yml
- CHANGELOG.md
- app/src/components/settings/panels/LocalModelDebugPanel.tsx
- app/src/components/settings/panels/local-model/CustomModelSection.tsx
- app/src/pages/Welcome.tsx
- app/src/services/coreRpcClient.ts
- app/src/utils/__tests__/configPersistence.test.ts
- app/src/utils/configPersistence.ts
- scripts/upload_sentry_symbols.sh
- src/openhuman/agent/triage/routing.rs
- src/openhuman/channels/routes.rs
- src/openhuman/channels/runtime/startup.rs
✅ Files skipped from review due to trivial changes (3)
- .github/workflows/build.yml
- app/src/components/settings/panels/LocalModelDebugPanel.tsx
- scripts/upload_sentry_symbols.sh
🚧 Files skipped from review as they are similar to previous changes (7)
- src/openhuman/channels/routes.rs
- src/openhuman/agent/triage/routing.rs
- src/openhuman/channels/runtime/startup.rs
- CHANGELOG.md
- app/src/components/settings/panels/local-model/CustomModelSection.tsx
- app/src/utils/configPersistence.ts
- app/src/services/coreRpcClient.ts
Actionable comments posted: 3
🧹 Nitpick comments (2)
src/openhuman/config/schemas.rs (2)
590-591: Rename local variable to camelCase for repo consistency.Line 590 uses
app_version; this repo rule requires camelCase variable names.As per coding guidelines: `src/openhuman/**/*.rs`: “Use camelCase for variable names”.Suggested rename
- let app_version = + let appVersion = std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string()); @@ - "app_version": app_version, + "app_version": appVersion,🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/config/schemas.rs` around lines 590 - 591, Rename the local variable app_version to camelCase (e.g., appVersion) to follow repo naming conventions: change the let binding that calls std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string()) to use appVersion and update all subsequent references in the same scope (functions/struct initializers) to the new name so compilation succeeds and style is consistent.
587-601: Add [rpc] debug breadcrumbs for the new handler path. This new RPC flow has no entry/exit diagnostics, which makes tracing failures harder.
As per coding guidelines: `src/openhuman/**/*.rs` must log entry/exit and key flow points with stable prefixes (e.g., `[rpc]`) and use `log`/`tracing` at `debug`/`trace` level.
Suggested logging additions
 fn handle_get_client_config(_params: Map<String, Value>) -> ControllerFuture {
     Box::pin(async move {
+        log::debug!("[rpc][config.get_client_config] start");
         let config = config_rpc::load_config_with_timeout().await?;
         let app_version =
             std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string());
-        to_json(RpcOutcome::new(
+        let outcome = RpcOutcome::new(
             serde_json::json!({
                 "api_url": config.api_url,
                 "default_model": config.default_model,
                 "app_version": app_version,
             }),
             vec!["client config read".to_string()],
-        ))
+        );
+        log::debug!("[rpc][config.get_client_config] success");
+        to_json(outcome)
     })
 }
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/config/schemas.rs` around lines 587 - 601, Add structured entry/exit debug breadcrumbs to the new RPC handler by logging at the start and before return using the stable "[rpc]" prefix. Inside handle_get_client_config, emit a debug (or trace) log like "[rpc] handle_get_client_config entry" immediately on entry, log the result of config_rpc::load_config_with_timeout (or app_version) at trace/debug after it returns (e.g., "[rpc] loaded client config: {api_url, default_model, app_version}"), and emit an exit breadcrumb like "[rpc] handle_get_client_config exit" just before calling to_json; use the project's log/tracing macros (log::debug! or tracing::debug!/trace!) so diagnostics are consistent with other src/openhuman handlers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/composio/periodic.rs`:
- Around line 278-300: The test currently sets OPENHUMAN_WORKSPACE unsafely and
unconditionally removes it at the end, which leaks state on panic and clobbers
any prior value; capture the prior value with std::env::var_os before changing
it, set OPENHUMAN_WORKSPACE to the tempdir, and create an RAII restore guard (a
small struct with the captured Option<OsString> and a Drop impl) that on Drop
restores the original value (set_var if Some, remove_var if None); remove the
unsafe blocks around set_var/remove_var and ensure the guard lives for the
duration of the test so run_one_tick and ENV_LOCK remain unchanged.
In `@src/openhuman/config/schemas.rs`:
- Line 237: The schema description string currently claims it returns “feature
flags” but the actual response only includes api_url, default_model, and
app_version; update the description text in the schema declaration to accurately
list those fields (e.g., "Read safe client-facing config fields: api_url,
default_model, app_version. No secrets.") so it matches the output; locate and
edit the description literal in the OpenAPI/schema variable that defines this
response (the string containing "Read safe client-facing config fields (api_url,
feature flags). No secrets.") and replace it with the corrected wording
referencing api_url, default_model, and app_version.
- Around line 587-601: Add an E2E JSON-RPC test that exercises the registered
method "config.get_client_config" and verifies the response fields from
handle_get_client_config: ensure the returned JSON contains "api_url",
"default_model", and "app_version" with expected values; add the test into
tests/json_rpc_e2e.rs, send a JSON-RPC request for the
"config.get_client_config" method, await the response, and assert the presence
and types/values of those keys (set OPENHUMAN_APP_VERSION in the test
environment if you need a deterministic app_version); this ensures the handler
(handle_get_client_config) is covered end-to-end.
---
Nitpick comments:
In `@src/openhuman/config/schemas.rs`:
- Around line 590-591: Rename the local variable app_version to camelCase (e.g.,
appVersion) to follow repo naming conventions: change the let binding that calls
std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string())
to use appVersion and update all subsequent references in the same scope
(functions/struct initializers) to the new name so compilation succeeds and
style is consistent.
- Around line 587-601: Add structured entry/exit debug breadcrumbs to the new
RPC handler by logging at the start and before return using the stable "[rpc]"
prefix. Inside handle_get_client_config, emit a debug (or trace) log like "[rpc]
handle_get_client_config entry" immediately on entry, log the result of
config_rpc::load_config_with_timeout (or app_version) at trace/debug after it
returns (e.g., "[rpc] loaded client config: {api_url, default_model,
app_version}"), and emit an exit breadcrumb like "[rpc] handle_get_client_config
exit" just before calling to_json; use the project's log/tracing macros
(log::debug! or tracing::debug!/trace!) so diagnostics are consistent with other
src/openhuman handlers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c9ea73e3-3b3e-473e-b1cc-fa7ac28b36f1
📒 Files selected for processing (5)
- src/openhuman/agent/harness/session/builder.rs
- src/openhuman/composio/periodic.rs
- src/openhuman/config/ops_tests.rs
- src/openhuman/config/schemas.rs
- src/openhuman/threads/ops.rs
✅ Files skipped from review due to trivial changes (1)
- src/openhuman/config/ops_tests.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/openhuman/threads/ops.rs
- src/openhuman/agent/harness/session/builder.rs
| // Isolate the workspace/env so config loading doesn't contend with | ||
| // sibling tests mutating OPENHUMAN_WORKSPACE in parallel. | ||
| let _guard = ENV_LOCK.lock().expect("env lock"); | ||
| let tmp = tempdir().expect("tempdir"); | ||
| unsafe { | ||
| std::env::set_var("OPENHUMAN_WORKSPACE", tmp.path()); | ||
| } | ||
|
|
||
| // With no session stored in the isolated workspace, | ||
| // `build_composio_client` returns None and the tick should | ||
| // silently skip (returning Ok). This covers the early-return | ||
| // path that's otherwise only hit in production. | ||
| let inner = tokio::time::timeout(Duration::from_secs(5), run_one_tick()) | ||
| .await | ||
| .expect("run_one_tick should not hang indefinitely during tests"); | ||
| assert!( | ||
| inner.is_ok(), | ||
| "run_one_tick should return Ok when no client is available: {inner:?}" | ||
| ); | ||
|
|
||
| unsafe { | ||
| std::env::remove_var("OPENHUMAN_WORKSPACE"); | ||
| } |
Make OPENHUMAN_WORKSPACE restoration panic-safe and value-preserving.
Line 299 always removes the variable instead of restoring any prior value, and cleanup is skipped if the test panics before the final block. This can leak state into later tests.
🔧 Suggested fix
#[tokio::test]
async fn run_one_tick_returns_ok_when_no_client() {
// Isolate the workspace/env so config loading doesn't contend with
// sibling tests mutating OPENHUMAN_WORKSPACE in parallel.
let _guard = ENV_LOCK.lock().expect("env lock");
let tmp = tempdir().expect("tempdir");
+ let previous_workspace = std::env::var_os("OPENHUMAN_WORKSPACE");
+ struct WorkspaceEnvRestore(Option<std::ffi::OsString>);
+ impl Drop for WorkspaceEnvRestore {
+ fn drop(&mut self) {
+ unsafe {
+ match self.0.take() {
+ Some(value) => std::env::set_var("OPENHUMAN_WORKSPACE", value),
+ None => std::env::remove_var("OPENHUMAN_WORKSPACE"),
+ }
+ }
+ }
+ }
unsafe {
std::env::set_var("OPENHUMAN_WORKSPACE", tmp.path());
}
+ let _workspace_env_restore = WorkspaceEnvRestore(previous_workspace);
// With no session stored in the isolated workspace,
// `build_composio_client` returns None and the tick should
// silently skip (returning Ok). This covers the early-return
// path that's otherwise only hit in production.
@@
assert!(
inner.is_ok(),
"run_one_tick should return Ok when no client is available: {inner:?}"
);
-
- unsafe {
- std::env::remove_var("OPENHUMAN_WORKSPACE");
- }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/composio/periodic.rs` around lines 278 - 300, The test
currently sets OPENHUMAN_WORKSPACE unsafely and unconditionally removes it at
the end, which leaks state on panic and clobbers any prior value; capture the
prior value with std::env::var_os before changing it, set OPENHUMAN_WORKSPACE to
the tempdir, and create an RAII restore guard (a small struct with the captured
Option<OsString> and a Drop impl) that on Drop restores the original value
(set_var if Some, remove_var if None); remove the unsafe blocks around
set_var/remove_var and ensure the guard lives for the duration of the test so
run_one_tick and ENV_LOCK remain unchanged.
| "get_client_config" => ControllerSchema { | ||
| namespace: "config", | ||
| function: "get_client_config", | ||
| description: "Read safe client-facing config fields (api_url, feature flags). No secrets.", |
Schema description overstates the contract.
Line 237 says “feature flags”, but outputs only define api_url, default_model, and app_version. Please align description text with actual response fields.
Suggested text fix
- description: "Read safe client-facing config fields (api_url, feature flags). No secrets.",
+ description: "Read safe client-facing config fields (api_url, default_model, app_version). No secrets.",📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- description: "Read safe client-facing config fields (api_url, feature flags). No secrets.",
+ description: "Read safe client-facing config fields (api_url, default_model, app_version). No secrets.",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/config/schemas.rs` at line 237, The schema description string
currently claims it returns “feature flags” but the actual response only
includes api_url, default_model, and app_version; update the description text in
the schema declaration to accurately list those fields (e.g., "Read safe
client-facing config fields: api_url, default_model, app_version. No secrets.")
so it matches the output; locate and edit the description literal in the
OpenAPI/schema variable that defines this response (the string containing "Read
safe client-facing config fields (api_url, feature flags). No secrets.") and
replace it with the corrected wording referencing api_url, default_model, and
app_version.
fn handle_get_client_config(_params: Map<String, Value>) -> ControllerFuture {
    Box::pin(async move {
        let config = config_rpc::load_config_with_timeout().await?;
        let app_version =
            std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string());
        to_json(RpcOutcome::new(
            serde_json::json!({
                "api_url": config.api_url,
                "default_model": config.default_model,
                "app_version": app_version,
            }),
            vec!["client config read".to_string()],
        ))
    })
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Locating json_rpc_e2e.rs ..."
E2E_FILE="$(fd -i '^json_rpc_e2e\.rs$' tests src | head -n1 || true)"
if [[ -z "${E2E_FILE}" ]]; then
echo "ERROR: json_rpc_e2e.rs not found under tests/ or src/"
exit 1
fi
echo "Found: ${E2E_FILE}"
echo
echo "Checking for get_client_config coverage..."
rg -n -C3 'get_client_config|config_get_client_config|openhuman\.config_get_client_config' "${E2E_FILE}" || true
echo
echo "Method registered in schemas.rs:"
rg -n -C2 '"get_client_config"|namespace:\s*"config"|function:\s*"get_client_config"' src/openhuman/config/schemas.rs
Repository: tinyhumansai/openhuman
Length of output: 6171
Add JSON-RPC E2E test coverage for config.get_client_config.
The endpoint is properly registered in src/openhuman/config/schemas.rs (lines 103–238) and implemented as handle_get_client_config, but it has no test coverage in tests/json_rpc_e2e.rs. According to the feature design workflow, JSON-RPC E2E tests must be in place before UI implementation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/config/schemas.rs` around lines 587 - 601, Add an E2E JSON-RPC
test that exercises the registered method "config.get_client_config" and
verifies the response fields from handle_get_client_config: ensure the returned
JSON contains "api_url", "default_model", and "app_version" with expected
values; add the test into tests/json_rpc_e2e.rs, send a JSON-RPC request for the
"config.get_client_config" method, await the response, and assert the presence
and types/values of those keys (set OPENHUMAN_APP_VERSION in the test
environment if you need a deterministic app_version); this ensures the handler
(handle_get_client_config) is covered end-to-end.
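For illustration, here is one way a client-side probe of the same contract might look. This is a sketch only: the method name `config.get_client_config`, the JSON-RPC 2.0 envelope, and the three response fields are assumptions taken from this review thread, not confirmed against the codebase.

```typescript
// Sketch of building and validating a JSON-RPC 2.0 exchange for the new
// endpoint. Method name and field names are assumptions from this review.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

interface ClientConfig {
  api_url: string;
  default_model: string;
  app_version: string;
}

function buildGetClientConfigRequest(id: number): JsonRpcRequest {
  return { jsonrpc: "2.0", id, method: "config.get_client_config", params: {} };
}

// Narrow an untyped JSON-RPC result into the documented shape, rejecting
// payloads that are missing any of the three safe fields.
function parseClientConfig(result: unknown): ClientConfig {
  const r = result as Partial<ClientConfig> | null;
  if (
    !r ||
    typeof r.api_url !== "string" ||
    typeof r.default_model !== "string" ||
    typeof r.app_version !== "string"
  ) {
    throw new Error("unexpected get_client_config payload");
  }
  return { api_url: r.api_url, default_model: r.default_model, app_version: r.app_version };
}
```

An E2E test along these lines would simply POST the built request to the running server and run the same field assertions on the response.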
Summary
Implements runtime RPC URL configuration for the Tauri desktop app as specified in issue #933.
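The validate/persist/resolve flow this PR describes could be sketched as follows. Helper names mirror the PR description (normalizeRpcUrl, isValidRpcUrl, storeRpcUrl, getCoreRpcUrl, clearCoreRpcUrlCache), but the storage key, normalization rules, and caching details here are assumptions, not the actual implementation in app/src/utils/configPersistence.ts or app/src/services/coreRpcClient.ts.

```typescript
// Assumed storage key; the real constant lives in configPersistence.ts.
const RPC_URL_KEY = "openhuman.core_rpc_url";

// Minimal storage shape so the helpers run outside a browser (e.g. in tests);
// in the app this would be window.localStorage.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Trim whitespace and trailing slashes so equivalent URLs compare equal.
function normalizeRpcUrl(raw: string): string {
  return raw.trim().replace(/\/+$/, "");
}

// Accept only absolute http(s) URLs; anything unparseable is rejected.
function isValidRpcUrl(url: string): boolean {
  try {
    const { protocol } = new URL(url);
    return protocol === "http:" || protocol === "https:";
  } catch {
    return false;
  }
}

function storeRpcUrl(url: string, store: KeyValueStore): void {
  const normalized = normalizeRpcUrl(url);
  if (!isValidRpcUrl(normalized)) throw new Error(`invalid RPC URL: ${url}`);
  store.setItem(RPC_URL_KEY, normalized);
}

// Resolution order: the user-stored URL wins over the build-time default
// (VITE_OPENHUMAN_CORE_RPC_URL in the real client); the result is cached
// until clearCoreRpcUrlCache() is called after a successful save.
let cachedUrl: string | null = null;

function getCoreRpcUrl(store: KeyValueStore, buildTimeDefault: string): string {
  if (cachedUrl === null) {
    cachedUrl = store.getItem(RPC_URL_KEY) ?? buildTimeDefault;
  }
  return cachedUrl;
}

function clearCoreRpcUrlCache(): void {
  cachedUrl = null;
}
```

The cache-clear step matters: without it, a URL saved from the Welcome screen would not take effect until the next page load.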
Changes Made
- Login Screen RPC URL Configuration (app/src/pages/Welcome.tsx)
- RPC URL Persistence (app/src/utils/configPersistence.ts)
- coreRpcClient Integration (app/src/services/coreRpcClient.ts): getCoreRpcUrl() to respect the user-configured URL over the build-time default VITE_OPENHUMAN_CORE_RPC_URL
- Backend Schema (src/openhuman/config/schemas.rs): get_client_config RPC method returning safe client fields (api_url, default_model, app_version)
- Unit Tests (app/src/utils/__tests__/configPersistence.test.ts)
Acceptance Criteria Met
- No config.toml user edit required
- get_client_config exposes api_url only (no secrets in payload)
- Unit tests in app/ for URL persistence + config merge
Related Files
- app/src/pages/Welcome.tsx - Login screen with RPC URL configuration
- app/src/utils/configPersistence.ts - URL persistence utilities
- app/src/services/coreRpcClient.ts - RPC client with user-config support
- app/src/utils/__tests__/configPersistence.test.ts - Unit tests
- src/openhuman/config/schemas.rs - Backend RPC schema
Testing
Notes
- VITE_* variables for Sentry, etc. remain as build-time constants
- The core_rpc_url command in lib.rs already respects the OPENHUMAN_CORE_RPC_URL env var
Summary by CodeRabbit
New Features
Tests
Bug Fixes