Skip to content

feat(config): runtime RPC URL configuration bootstrap (Issue #933)#948

Merged
senamakel merged 15 commits intotinyhumansai:mainfrom
unn-Known1:fix/issue-933-rpc-bootstrap
Apr 29, 2026
Merged

feat(config): runtime RPC URL configuration bootstrap (Issue #933)#948
senamakel merged 15 commits intotinyhumansai:mainfrom
unn-Known1:fix/issue-933-rpc-bootstrap

Conversation

@unn-Known1
Copy link
Copy Markdown
Contributor

@unn-Known1 unn-Known1 commented Apr 26, 2026

Summary

Implements runtime RPC URL configuration for the Tauri desktop app as specified in issue #933.

Changes Made

  1. Login Screen RPC URL Configuration (app/src/pages/Welcome.tsx)

    • Added "Configure RPC URL" option on the Welcome screen
    • Users can now set core JSON-RPC URL on the login screen without build-time configuration
    • Includes "Test Connection" button to validate the RPC endpoint before saving
    • Persists last-used URL to localStorage and restores on next launch
  2. RPC URL Persistence (app/src/utils/configPersistence.ts)

    • New utility module for storing/retrieving RPC URL preferences
    • Uses localStorage (web) or Tauri store (desktop) for persistence
    • Provides URL validation, normalization, and buildRpcEndpoint helper
  3. coreRpcClient Integration (app/src/services/coreRpcClient.ts)

    • Updated getCoreRpcUrl() to respect user-configured URL over build-time defaults
    • Checks stored preference first, falls back to VITE_OPENHUMAN_CORE_RPC_URL
  4. Backend Schema (src/openhuman/config/schemas.rs)

    • Added get_client_config RPC method returning safe client fields (api_url, default_model, app_version)
    • No secrets exposed in this payload
  5. Unit Tests (app/src/utils/__tests__/configPersistence.test.ts)

    • Comprehensive tests for URL storage, validation, normalization, and defaults

Acceptance Criteria Met

  • User can set the RPC URL on the login screen; no config.toml user edit required
  • Last-used URL is persisted and restored on next launch (unless cleared)
  • After successful RPC connect, app loads backend/API settings from core
  • Core is authoritative for api_url (no secrets in payload)
  • Unit tests in app/ for URL persistence + config merge

Related Files

  • app/src/pages/Welcome.tsx - Login screen with RPC URL configuration
  • app/src/utils/configPersistence.ts - URL persistence utilities
  • app/src/services/coreRpcClient.ts - RPC client with user-config support
  • app/src/utils/__tests__/configPersistence.test.ts - Unit tests
  • src/openhuman/config/schemas.rs - Backend RPC schema

Testing

cd app && npm test -- --run configPersistence

Notes

  • The VITE_* variables for Sentry, etc. remain as build-time constants
  • Only truly runtime-configurable values (RPC URL, API endpoint) now come from the core
  • For Tauri builds, core_rpc_url command in lib.rs already respects OPENHUMAN_CORE_RPC_URL env var

Summary by CodeRabbit

  • New Features

    • Runtime-configurable Core RPC URL via Welcome/login UI with an advanced config panel and Test Connection action; persists and restores across startups.
    • Custom model backend panel to set Base URL and API key; backends may use API-key authentication.
  • Tests

    • Unit tests added for RPC URL persistence, normalization, validation, and related utilities.
  • Bug Fixes

    • Minor formatting fixes in changelog and scripts.

@unn-Known1 unn-Known1 requested a review from a team April 26, 2026 18:49
@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Apr 26, 2026

📝 Walkthrough

Walkthrough

Adds runtime-configurable Core JSON‑RPC URL and API key: UI to test and persist a user-provided core RPC URL, client RPC to return safe config fields, persistence/validation utilities, coreRpcClient preference for stored URL with cache clearing, and threading of api_key through provider factories and config persistence.

Changes

Cohort / File(s) Summary
Frontend: Welcome & Settings UI
app/src/pages/Welcome.tsx, app/src/components/settings/panels/LocalModelDebugPanel.tsx, app/src/components/settings/panels/local-model/CustomModelSection.tsx
Adds advanced config UI on Welcome for Core RPC URL with normalization/validation, test connection, persistence and success feedback; new CustomModelSection for Base URL + API Key; LocalModelDebugPanel renders the new section.
Config persistence & utils
app/src/utils/configPersistence.ts, app/src/utils/__tests__/configPersistence.test.ts
New utilities for detecting environment, storing/clearing/getting stored RPC URL, validating and normalizing URLs, building RPC endpoint; comprehensive unit tests for edge cases.
Core RPC client
app/src/services/coreRpcClient.ts
getCoreRpcUrl() now prefers the stored user URL and falls back appropriately in Tauri/web; added clearCoreRpcUrlCache() to invalidate in-memory resolution.
Tauri command types
app/src/utils/tauriCommands/config.ts
ModelSettingsUpdate extended to include optional `api_key?: string
Backend config schema & ops
src/openhuman/config/schema/types.rs, src/openhuman/config/schema/local_ai.rs, src/openhuman/config/ops.rs, src/openhuman/config/schemas.rs
Adds api_key to Config and LocalAiConfig; ModelSettingsPatch accepts api_key and apply_model_settings persists/normalizes it; new controller handler returns client-safe config fields.
Provider factory signatures
src/openhuman/providers/ops.rs
Extended multiple provider factory signatures to accept api_key: Option<&str> and conditionally create OpenAI-compatible backend when URL+key present.
Provider callsites
src/openhuman/agent/.../builder.rs, src/openhuman/agent/triage/routing.rs, src/openhuman/channels/.../routes.rs, src/openhuman/channels/runtime/startup.rs, src/openhuman/learning/linkedin_enrichment.rs, src/openhuman/local_ai/ops.rs, src/openhuman/threads/ops.rs, src/openhuman/tools/impl/agent/delegate.rs
Updated call sites to pass api_key from config into provider creation functions.
Local routing factory
src/openhuman/routing/factory.rs
Adds custom_openai local provider support; resolves base URL via env/config/default; passes local api_key and sets auth style to bearer.
Misc: CI, scripts, changelog
.github/workflows/build.yml, scripts/upload_sentry_symbols.sh, CHANGELOG.md
Fixed YAML quoting, ensured trailing newline in script, and added changelog entry documenting runtime-configurable Core RPC URL flow.
Tests & minor Rust adjustments
src/openhuman/composio/periodic.rs, src/openhuman/config/ops_tests.rs, other small updates
Test isolation and explicit api_key handling in tests; minor wiring updates in several Rust modules to forward API key.

Sequence Diagram

sequenceDiagram
    participant User as User
    participant Welcome as Welcome UI
    participant Utils as Config Utils (normalize/isValid)
    participant Storage as localStorage
    participant RPC as Core RPC (openhuman.ping)
    participant Provider as Provider Factory

    User->>Welcome: Open Advanced Config
    Welcome->>Storage: getStoredRpcUrl()
    Storage-->>Welcome: stored or default URL
    User->>Welcome: Enter URL -> Test
    Welcome->>Utils: normalizeRpcUrl()
    Utils-->>Welcome: normalized URL
    Welcome->>Utils: isValidRpcUrl()
    Utils-->>Welcome: valid/invalid
    alt valid
        Welcome->>RPC: POST openhuman.ping (to endpoint)
        RPC-->>Welcome: 200-299 or 405 -> success
        Welcome->>Storage: storeRpcUrl()
        Storage-->>Storage: persist URL
        Welcome->>Provider: clearCoreRpcUrlCache() -> provider will use stored URL
        Provider-->>Welcome: provider ready (uses API key if present)
    else invalid
        Welcome-->>User: show error
    end
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

Possibly related PRs

Suggested reviewers

  • senamakel

"🐰 I hopped in with a config bright and new,
URLs and keys saved clean in local view.
I tested the ping and stored it with delight,
Providers now sip creds and run just right. ✨"

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately summarizes the main feature: runtime RPC URL configuration for the core JSON-RPC connection, covering the primary objective of the PR.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share
Review rate limit: 0/1 reviews remaining, refill in 60 minutes.

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/openhuman/config/schemas.rs (1)

558-572: 🛠️ Refactor suggestion | 🟠 Major

Use the existing deserialize_params helper for consistent error shape.

Every other handle_* in this file routes params through deserialize_params::<T>(), which (1) avoids cloning the Map via json!(params) and (2) produces the conventional "invalid params: …" prefix that tests pin (deserialize_params_rejects_wrong_types_with_invalid_params_prefix). The new inline path returns a bare e.to_string(), which silently breaks that contract for update_model_settings only.

 fn handle_update_model_settings(params: Map<String, Value>) -> ControllerFuture {
     Box::pin(async move {
-        let update: ModelSettingsUpdate = match serde_json::from_value(serde_json::json!(params)) {
-            Ok(u) => u,
-            Err(e) => return Err(e.to_string()),
-        };
+        let update = deserialize_params::<ModelSettingsUpdate>(params)?;
         let patch = config_rpc::ModelSettingsPatch {
             api_url: update.api_url,
             api_key: update.api_key,
             default_model: update.default_model,
             default_temperature: update.default_temperature,
         };
         to_json(config_rpc::load_and_apply_model_settings(patch).await?)
     })
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/config/schemas.rs` around lines 558 - 572, The handler
handle_update_model_settings bypasses the project's deserialize_params helper
and instead parses params via serde_json::json!(params), which clones the Map
and returns raw error strings; replace that inline parsing with
deserialize_params::<ModelSettingsUpdate>(params) to get the standard "invalid
params: …" error shape and avoid cloning, then construct the
config_rpc::ModelSettingsPatch from the deserialized ModelSettingsUpdate and
call config_rpc::load_and_apply_model_settings(patch).await? followed by
to_json(...) as before so the function's return type (ControllerFuture) and
behavior remain unchanged.
app/src/services/coreRpcClient.ts (1)

92-105: ⚠️ Potential issue | 🟠 Major

Cached RPC URL can become stale after user reconfiguration.

On Line 92, getCoreRpcUrl() returns resolvedCoreRpcUrl without re-checking persisted config. After a user changes RPC URL at runtime, calls may keep hitting the old endpoint until full app reload.

💡 Proposed fix (cache invalidation + refresh path)
 let resolvedCoreRpcUrl: string | null = null;
 let resolvingCoreRpcUrl: Promise<string> | null = null;
+
+export function resetCoreRpcUrlCache(): void {
+  resolvedCoreRpcUrl = null;
+  resolvingCoreRpcUrl = null;
+}
 
 export async function getCoreRpcUrl(): Promise<string> {
+  const storedUrl = getStoredRpcUrl();
+  if (storedUrl && storedUrl !== resolvedCoreRpcUrl) {
+    resolvedCoreRpcUrl = storedUrl;
+    return storedUrl;
+  }
+
   if (resolvedCoreRpcUrl) {
     return resolvedCoreRpcUrl;
   }

Then call resetCoreRpcUrlCache() right after saving a new RPC URL in the settings flow.

Also applies to: 111-129

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/services/coreRpcClient.ts` around lines 92 - 105, getCoreRpcUrl
currently returns the cached resolvedCoreRpcUrl without re-reading persisted
config, so after a user updates the RPC URL the app keeps using the stale value;
ensure cache invalidation by (A) adding a resetCoreRpcUrlCache() call
immediately after any settings flow that saves a new RPC URL, and (B) optionally
making getCoreRpcUrl re-check persisted value if resolvedCoreRpcUrl exists but
differs from getStoredRpcUrl()/CORE_RPC_URL (use getStoredRpcUrl, CORE_RPC_URL,
coreIsTauri, and resolvedCoreRpcUrl to compare); implement
resetCoreRpcUrlCache() to clear resolvedCoreRpcUrl and call it from the settings
save handler so subsequent getCoreRpcUrl calls reflect the updated URL.
🧹 Nitpick comments (8)
app/src/components/settings/panels/local-model/CustomModelSection.tsx (1)

38-54: Clear the success-banner setTimeout on unmount / re-save.

If the user navigates away (or saves again) within the 3s window, setSaveSuccess(false) runs against an unmounted component (or stomps a fresh save). Track the timer in a ref and clear it.

♻️ Proposed fix
-import { useEffect, useState } from 'react';
+import { useEffect, useRef, useState } from 'react';
 ...
   const [saveSuccess, setSaveSuccess] = useState(false);
+  const successTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null);
+
+  useEffect(() => {
+    return () => {
+      if (successTimerRef.current) clearTimeout(successTimerRef.current);
+    };
+  }, []);
 ...
   const handleSave = async () => {
     setIsSaving(true);
     setError('');
     setSaveSuccess(false);
+    if (successTimerRef.current) clearTimeout(successTimerRef.current);
     try {
       await openhumanUpdateModelSettings({
         api_url: apiUrl.trim() || null,
         api_key: apiKey.trim() || null,
       });
       setSaveSuccess(true);
-      setTimeout(() => setSaveSuccess(false), 3000);
+      successTimerRef.current = setTimeout(() => setSaveSuccess(false), 3000);
     } catch (err) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/components/settings/panels/local-model/CustomModelSection.tsx` around
lines 38 - 54, The setTimeout in handleSave that calls setSaveSuccess(false) can
fire after unmount or overwrite a subsequent save; track the timer id in a ref
(e.g., saveSuccessTimerRef), clear any existing timer before creating a new
setTimeout, store the new id in the ref, and add a useEffect cleanup that clears
saveSuccessTimerRef.current on unmount; update handleSave (and the path where
you setSaveSuccess(true)) to clear the ref first and use the ref when clearing
the timer instead of leaving timeouts unmanaged (references: handleSave,
openhumanUpdateModelSettings, setSaveSuccess).
src/openhuman/config/schema/local_ai.rs (1)

12-15: Document api_key as a secret (and clarify what base_url overrides).

Both fields are sparsely typed (Option<String>); a one-liner doc comment helps callers avoid logging the API key and clarifies that base_url is the OpenAI-compatible endpoint override (vs. the provider selector). Same secret-handling concern as Config::api_key.

-    #[serde(default)]
-    pub base_url: Option<String>,
-    #[serde(default)]
-    pub api_key: Option<String>,
+    /// Override base URL for an OpenAI-compatible local backend (e.g.
+    /// vLLM/LiteLLM). Ignored when the configured `provider` does not
+    /// understand a base URL.
+    #[serde(default)]
+    pub base_url: Option<String>,
+    /// Optional API key for the configured local-AI backend.
+    /// **Secret** — never log this value.
+    #[serde(default)]
+    pub api_key: Option<String>,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/config/schema/local_ai.rs` around lines 12 - 15, Add one-line
Rust doc comments above the LocalAI config fields to document intent: mark
api_key as a secret (do not log/print it and treat it like Config::api_key) and
clarify that base_url is an OpenAI-compatible endpoint override (used instead of
provider selection). Update the comments located on the struct fields named
base_url and api_key in local_ai.rs to state those behaviors so callers know to
avoid logging the API key and that base_url overrides the endpoint.
app/src/utils/__tests__/configPersistence.test.ts (1)

13-13: Duplicated magic-string storage key.

STORAGE_KEY here mirrors the private RPC_URL_STORAGE_KEY in app/src/utils/configPersistence.ts. If the production constant ever changes, these tests will silently keep passing while testing the wrong key. Consider exporting the key from the module (e.g., RPC_URL_STORAGE_KEY or via a __test__ re-export) and importing it here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/utils/__tests__/configPersistence.test.ts` at line 13, The test
defines a duplicated magic string STORAGE_KEY that mirrors the private
RPC_URL_STORAGE_KEY in configPersistence.ts; instead export the canonical
constant (e.g., RPC_URL_STORAGE_KEY or a __test__ re-export) from the module and
import that constant into the test instead of defining STORAGE_KEY locally so
the test always uses the real production key (update the test to import
RPC_URL_STORAGE_KEY and remove the local STORAGE_KEY).
app/src/pages/Welcome.tsx (1)

80-106: No timeout on the test fetch — UI can hang indefinitely.

If the user enters an unreachable host that just stalls (DNS blackhole, dropped SYN), this fetch has no AbortController and will sit in "Testing" until the OS times out (often 60–120s). Wrap the request with an AbortController + setTimeout (e.g., 5s) and surface a clean "Connection timed out" error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/pages/Welcome.tsx` around lines 80 - 106, The fetch in the connection
test (the block that calls fetch(normalized, { method: 'POST', ... })) has no
timeout; wrap it with an AbortController and a setTimeout (e.g., 5000ms) so the
request is aborted if it takes too long, clear the timeout on success, and call
controller.abort() in cleanup; catch an abort error and call
setRpcUrlError('Connection timed out') (or similar) and ensure
setIsTestingConnection(false) still runs in the finally block; update the logic
around the existing response.ok / status checks and the existing handlers
setConnectionTested and storeRpcUrl to run only if not aborted.
app/src/utils/configPersistence.ts (2)

76-88: isValidRpcUrl accepts URLs that the rest of the flow rejects.

new URL('http://') throws (invalid), but new URL('http://x') is accepted as valid even though it has no path and may still 404 against /rpc. More importantly, isValidRpcUrl doesn't enforce that the URL has a host, which means strings like http:///rpc (parsed by some engines) can slip through. Consider also requiring parsed.hostname.length > 0.

   try {
     const parsed = new URL(url);
-    // Must be http or https
-    return parsed.protocol === 'http:' || parsed.protocol === 'https:';
+    return (
+      (parsed.protocol === 'http:' || parsed.protocol === 'https:') &&
+      parsed.hostname.length > 0
+    );
   } catch {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/utils/configPersistence.ts` around lines 76 - 88, The isValidRpcUrl
function currently only checks protocol and allows URLs without a host or path;
update it to validate that the parsed URL has a non-empty hostname
(parsed.hostname.length > 0) and also ensure it has a non-empty pathname
(parsed.pathname !== '' or parsed.pathname !== '/') so endpoints like "http://x"
or "http:///" are rejected; modify isValidRpcUrl accordingly to return true only
when protocol is http/https and parsed.hostname and parsed.pathname meet these
checks.

47-60: Consider Redux-persist for app state instead of ad-hoc localStorage.

The repository's coding guideline prefers Redux + persist for app state over ad-hoc localStorage. The RPC URL is technically bootstrap config (read before the store is ready), which is a defensible reason to stay on raw localStorage here, but consider mirroring it into the Redux slice once the store hydrates so consumers (e.g., a future settings page) read from a single source of truth.

As per coding guidelines: "Prefer Redux and persist where configured for app state over ad hoc localStorage for app state".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/utils/configPersistence.ts` around lines 47 - 60, The storeRpcUrl
function currently writes directly to localStorage using RPC_URL_STORAGE_KEY;
update the flow so storeRpcUrl still writes to localStorage for bootstrap use
but after the Redux store hydrates also dispatches an action to mirror the value
into the Redux settings slice (e.g., dispatch an action like setRpcUrl) so
consumers read from store; ensure clearing logic (localStorage.removeItem) also
dispatches the corresponding clear/reset action, and guard the dispatch path so
it only runs if the store/dispatch is available (to preserve bootstrap behavior
when store is not yet initialized).
src/openhuman/routing/factory.rs (1)

43-43: Line length / readability nit.

This single-line matches! chain runs ~140 columns and mixes the env-var override boolean with the provider-kind list. Splitting it is easier to read and also matches existing rustfmt defaults in this repo.

-    let use_openai_compat_local = override_base.is_some() || matches!(provider_kind.as_str(), "llamacpp" | "llama-server" | "custom_openai");
+    let use_openai_compat_local = override_base.is_some()
+        || matches!(
+            provider_kind.as_str(),
+            "llamacpp" | "llama-server" | "custom_openai"
+        );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/routing/factory.rs` at line 43, The current single-line
assignment to use_openai_compat_local mixes override_base and a long matches!
check which hurts readability; refactor by first computing a boolean like
has_override = override_base.is_some() (or similar) and a second boolean like
is_local_provider = matches!(provider_kind.as_str(), "llamacpp" | "llama-server"
| "custom_openai"), then set use_openai_compat_local = has_override ||
is_local_provider so the logic is identical but the expression is split for
clarity and line-length compliance.
CHANGELOG.md (1)

12-12: Phrasing nit on the Unreleased entry.

"Bootstrap from config.toml RPC URL with runtime derivation" reads ambiguously — the actual feature is the opposite direction (allow runtime override of the RPC URL without editing config.toml). Consider rewording, e.g., "Runtime-configurable Core RPC URL (Issue #933) — set the URL from the Welcome screen without editing config.toml."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG.md` at line 12, Update the Unreleased changelog entry that
currently reads "Bootstrap from config.toml RPC URL with runtime derivation" to
a clearer phrase that reflects the actual feature; replace that line (the
"Bootstrap from config.toml RPC URL with runtime derivation" text) with
something like "Runtime-configurable Core RPC URL (Issue `#933`) — set the URL
from the Welcome screen without editing `config.toml`" so it clearly states
runtime override rather than implying bootstrapping from config.toml.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/src/components/settings/panels/local-model/CustomModelSection.tsx`:
- Around line 19-23: When loading the saved config in CustomModelSection (where
openhumanGetConfig() -> setApiUrl/setApiKey is called), validate and normalize
the API URL using isValidRpcUrl and normalizeRpcUrl from
app/src/utils/configPersistence.ts (same pattern as Welcome.tsx) before
persisting into state: import those helpers, call
normalizeRpcUrl(apiUrlCandidate) and only setApiUrl(normalized) if
isValidRpcUrl(normalized) passes, otherwise setApiUrl('') (or a safe fallback)
so malformed/whitespace/wrapped URLs are not persisted.

In `@app/src/pages/Welcome.tsx`:
- Around line 93-99: The current reachability check incorrectly treats HTTP 400
as success; update the logic in the block that calls setConnectionTested and
storeRpcUrl so it only accepts true 200 or a JSON-RPC envelope: remove
response.status === 400 from the success conditions and instead, when
response.status !== 200 and !== 405, attempt to parse await response.json() and
verify the top-level jsonrpc === "2.0" before marking success; ensure you handle
JSON parse errors (fall back to setRpcUrlError with the status/statusText) and
only call setConnectionTested(true) and storeRpcUrl(normalized) when either
status === 200, status === 405, or the parsed body contains jsonrpc === "2.0".
- Around line 44-58: handleSaveRpcUrl currently validates and stores the RPC URL
but then calls setConnectionTested(true), which incorrectly displays a
"Connection successful" message without probing the endpoint; to fix, either (A)
introduce a new state like saveSuccess (replace usages of connectionTested in
handleSaveRpcUrl with setSaveSuccess(true) and update the UI message to "URL
saved." where saveSuccess is used, keeping connectionTested reserved for actual
connectivity tests), or (B) perform an actual connectivity check after
storeRpcUrl by fetching/pinging the normalized URL and only call
setConnectionTested(true) if the probe succeeds (and set an error/state on
failure); update references to connectionTested and the UI message accordingly
so save vs test use distinct states/messages.
- Around line 37-106: The module-level cache resolvedCoreRpcUrl in coreRpcClient
causes saved RPC changes to be ignored; add a function resetCoreRpcUrlCache() in
coreRpcClient that clears the memoized resolvedCoreRpcUrl (so getCoreRpcUrl()
will re-read localStorage), then call resetCoreRpcUrlCache() from
handleSaveRpcUrl and after successful storeRpcUrl() in handleTestConnection so
the new URL takes effect immediately; reference coreRpcClient.getCoreRpcUrl /
resolvedCoreRpcUrl when implementing the reset and call
coreRpcClient.resetCoreRpcUrlCache() in handleSaveRpcUrl and
handleTestConnection after storing the URL.

In `@app/src/utils/__tests__/configPersistence.test.ts`:
- Line 1: The file contains an invalid Python/shell comment (`# Test
configuration for configPersistence module`) at the top which breaks TypeScript
parsing; edit the test file to replace that leading `#` comment with a
TypeScript/JS single-line comment (`// Test configuration for configPersistence
module`) and scan the rest of the file for any other `#`-prefixed comments to
convert them as well so the file parses and Vitest can load the tests.

In `@app/src/utils/configPersistence.ts`:
- Around line 13-14: DEFAULT_RPC_URL duplicates the canonical default from
app/src/utils/config.ts causing drift; update configPersistence to stop defining
DEFAULT_RPC_URL and instead import/reuse CORE_RPC_URL from config.ts, then make
getStoredRpcUrl() compare and return CORE_RPC_URL (or use CORE_RPC_URL as the
fallback) so the same canonical value is used everywhere; adjust any references
to DEFAULT_RPC_URL in this file to reference CORE_RPC_URL (symbols:
DEFAULT_RPC_URL, getStoredRpcUrl, CORE_RPC_URL).
- Around line 1-22: The module header overstates functionality and exports dead
code; remove the unused isTauriEnvironment() export and the local
DEFAULT_RPC_URL constant, import CORE_RPC_URL from the config module
(app/src/utils/config.ts) and re-export it (e.g., export const DEFAULT_RPC_URL =
CORE_RPC_URL), update the top docstring to state this file is localStorage-only
for web (no Tauri store branch), and keep RPC_URL_STORAGE_KEY and any
localStorage get/set usage unchanged so callers using RPC_URL_STORAGE_KEY
continue to work.

In `@scratch.js`:
- Around line 1-2: Delete the temporary scratch file (scratch.js) from the PR
and remove its unused import of callCoreRpc to avoid committing
experimental/test code; ensure no references to scratch.js remain in the
repository or CI. Additionally, add/verify pre-merge checks to run ESLint and
tsc --noEmit in the app/ package so unused imports and TypeScript import errors
are caught before merging.

In `@src/openhuman/config/ops.rs`:
- Around line 228-234: The code assigns update.api_key directly to
config.api_key without trimming, so keys with leading/trailing whitespace may be
saved; change the block handling update.api_key to trim the value first (e.g.,
compute a trimmed string from update.api_key), check if the trimmed string is
empty and set config.api_key = None when empty, otherwise store
Some(trimmed_string); update any references to api_key in this block to use the
trimmed value (identifiers: update.api_key, config.api_key).

In `@src/openhuman/config/schema/types.rs`:
- Line 32: Add a doc comment above the api_key field (the field named api_key:
Option<String>) marking it as sensitive/secret and advising not to log it or
include it in snapshots returned to remote callers; use a triple-slash doc
comment (///) immediately above the api_key declaration, briefly stating it
contains a secret API key and must not be logged, printed, or exposed in remote
snapshots.

In `@src/openhuman/config/schemas.rs`:
- Around line 12-18: The ModelSettingsUpdate struct derives Debug but contains
an api_key which must not be logged; replace the auto-derived Debug with a
custom impl for ModelSettingsUpdate that redacts api_key (e.g., show
"<redacted>" or "REDACTED" instead of the actual string) or change api_key to a
Secret<String>-style newtype and implement/redact Debug there, and apply the
same treatment to config_rpc::ModelSettingsPatch if it also derives Debug so no
secret values are ever printed via {:?}.

In `@src/openhuman/providers/ops.rs`:
- Around line 151-168: The function create_backend_inference_provider silently
falls back to OpenHumanBackendProvider when only one of api_url or api_key is
provided and emits no log about which branch was chosen; change it to detect the
partial-config case and emit a tracing::warn! (or return an error) indicating
the misconfiguration, and add a stable tracing::debug! (or info!) with a
grep-friendly prefix like "[providers] " before returning either the
OpenAiCompatibleProvider or OpenHumanBackendProvider so callers see which branch
was selected; update references in this function
(create_backend_inference_provider) and the provider constructors
(OpenAiCompatibleProvider::new and OpenHumanBackendProvider::new) usage to avoid
logging secrets (log presence of key/url only as booleans or masked) and keep
behavior otherwise unchanged.

In `@src/openhuman/routing/factory.rs`:
- Around line 69-73: The conditional assigning auth_style is dead because both
branches return AuthStyle::Bearer; replace the if expression with a direct
assignment let auth_style = AuthStyle::Bearer; or, if the intended behavior is
to permit no-auth when local_ai_config.api_key is None, add an appropriate enum
variant (e.g., AuthStyle::None or AuthStyle::NoAuth) to AuthStyle and set
auth_style = if local_ai_config.api_key.is_some() { AuthStyle::Bearer } else {
AuthStyle::NoAuth }, updating any code that consumes auth_style (in the
routing/factory logic that constructs requests) to handle the new variant.
- Around line 75-80: The Ollama branch now inadvertently forwards
local_ai_config.api_key into OpenAiCompatibleProvider::new causing Authorization
headers to be sent for Ollama; update the factory so that when constructing the
provider for the Ollama path you only pass local_ai_config.api_key.as_deref() if
use_openai_compat_local is true, otherwise pass None (keep the existing behavior
for the OpenAI-compat/local path by continuing to forward the key); locate the
call to OpenAiCompatibleProvider::new where local_base/local_ai_config.api_key
are used and add the conditional gate around the api_key argument.

---

Outside diff comments:
In `@app/src/services/coreRpcClient.ts`:
- Around line 92-105: getCoreRpcUrl currently returns the cached
resolvedCoreRpcUrl without re-reading persisted config, so after a user updates
the RPC URL the app keeps using the stale value; ensure cache invalidation by
(A) adding a resetCoreRpcUrlCache() call immediately after any settings flow
that saves a new RPC URL, and (B) optionally making getCoreRpcUrl re-check
persisted value if resolvedCoreRpcUrl exists but differs from
getStoredRpcUrl()/CORE_RPC_URL (use getStoredRpcUrl, CORE_RPC_URL, coreIsTauri,
and resolvedCoreRpcUrl to compare); implement resetCoreRpcUrlCache() to clear
resolvedCoreRpcUrl and call it from the settings save handler so subsequent
getCoreRpcUrl calls reflect the updated URL.

In `@src/openhuman/config/schemas.rs`:
- Around line 558-572: The handler handle_update_model_settings bypasses the
project's deserialize_params helper and instead parses params via
serde_json::json!(params), which clones the Map and returns raw error strings;
replace that inline parsing with
deserialize_params::<ModelSettingsUpdate>(params) to get the standard "invalid
params: …" error shape and avoid cloning, then construct the
config_rpc::ModelSettingsPatch from the deserialized ModelSettingsUpdate and
call config_rpc::load_and_apply_model_settings(patch).await? followed by
to_json(...) as before so the function's return type (ControllerFuture) and
behavior remain unchanged.

---

Nitpick comments:
In `@app/src/components/settings/panels/local-model/CustomModelSection.tsx`:
- Around line 38-54: The setTimeout in handleSave that calls
setSaveSuccess(false) can fire after unmount or overwrite a subsequent save;
track the timer id in a ref (e.g., saveSuccessTimerRef), clear any existing
timer before creating a new setTimeout, store the new id in the ref, and add a
useEffect cleanup that clears saveSuccessTimerRef.current on unmount; update
handleSave (and the path where you setSaveSuccess(true)) to clear the ref first
and use the ref when clearing the timer instead of leaving timeouts unmanaged
(references: handleSave, openhumanUpdateModelSettings, setSaveSuccess).

In `@app/src/pages/Welcome.tsx`:
- Around line 80-106: The fetch in the connection test (the block that calls
fetch(normalized, { method: 'POST', ... })) has no timeout; wrap it with an
AbortController and a setTimeout (e.g., 5000ms) so the request is aborted if it
takes too long, clear the timeout on success, and call controller.abort() in
cleanup; catch an abort error and call setRpcUrlError('Connection timed out')
(or similar) and ensure setIsTestingConnection(false) still runs in the finally
block; update the logic around the existing response.ok / status checks and the
existing handlers setConnectionTested and storeRpcUrl to run only if not
aborted.

In `@app/src/utils/__tests__/configPersistence.test.ts`:
- Line 13: The test defines a duplicated magic string STORAGE_KEY that mirrors
the private RPC_URL_STORAGE_KEY in configPersistence.ts; instead export the
canonical constant (e.g., RPC_URL_STORAGE_KEY or a __test__ re-export) from the
module and import that constant into the test instead of defining STORAGE_KEY
locally so the test always uses the real production key (update the test to
import RPC_URL_STORAGE_KEY and remove the local STORAGE_KEY).

In `@app/src/utils/configPersistence.ts`:
- Around line 76-88: The isValidRpcUrl function currently only checks protocol
and allows URLs without a host or path; update it to validate that the parsed
URL has a non-empty hostname (parsed.hostname.length > 0) and also ensure it has
a non-empty pathname (parsed.pathname !== '' or parsed.pathname !== '/') so
endpoints like "http://x" or "http:///" are rejected; modify isValidRpcUrl
accordingly to return true only when protocol is http/https and parsed.hostname
and parsed.pathname meet these checks.
- Around line 47-60: The storeRpcUrl function currently writes directly to
localStorage using RPC_URL_STORAGE_KEY; update the flow so storeRpcUrl still
writes to localStorage for bootstrap use but after the Redux store hydrates also
dispatches an action to mirror the value into the Redux settings slice (e.g.,
dispatch an action like setRpcUrl) so consumers read from store; ensure clearing
logic (localStorage.removeItem) also dispatches the corresponding clear/reset
action, and guard the dispatch path so it only runs if the store/dispatch is
available (to preserve bootstrap behavior when store is not yet initialized).

In `@CHANGELOG.md`:
- Line 12: Update the Unreleased changelog entry that currently reads "Bootstrap
from config.toml RPC URL with runtime derivation" to a clearer phrase that
reflects the actual feature; replace that line (the "Bootstrap from config.toml
RPC URL with runtime derivation" text) with something like "Runtime-configurable
Core RPC URL (Issue `#933`) — set the URL from the Welcome screen without editing
`config.toml`" so it clearly states runtime override rather than implying
bootstrapping from config.toml.

In `@src/openhuman/config/schema/local_ai.rs`:
- Around line 12-15: Add one-line Rust doc comments above the LocalAI config
fields to document intent: mark api_key as a secret (do not log/print it and
treat it like Config::api_key) and clarify that base_url is an OpenAI-compatible
endpoint override (used instead of provider selection). Update the comments
located on the struct fields named base_url and api_key in local_ai.rs to state
those behaviors so callers know to avoid logging the API key and that base_url
overrides the endpoint.

In `@src/openhuman/routing/factory.rs`:
- Line 43: The current single-line assignment to use_openai_compat_local mixes
override_base and a long matches! check which hurts readability; refactor by
first computing a boolean like has_override = override_base.is_some() (or
similar) and a second boolean like is_local_provider =
matches!(provider_kind.as_str(), "llamacpp" | "llama-server" | "custom_openai"),
then set use_openai_compat_local = has_override || is_local_provider so the
logic is identical but the expression is split for clarity and line-length
compliance.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c4b0e3dc-17a0-481d-aff5-03053da3191d

📥 Commits

Reviewing files that changed from the base of the PR and between 2e4f192 and 8abd08c.

📒 Files selected for processing (23)
  • CHANGELOG.md
  • app/src/components/settings/panels/LocalModelDebugPanel.tsx
  • app/src/components/settings/panels/local-model/CustomModelSection.tsx
  • app/src/pages/Welcome.tsx
  • app/src/services/coreRpcClient.ts
  • app/src/utils/__tests__/configPersistence.test.ts
  • app/src/utils/configPersistence.ts
  • app/src/utils/tauriCommands/config.ts
  • scratch.js
  • src/openhuman/agent/harness/session/builder.rs
  • src/openhuman/agent/triage/routing.rs
  • src/openhuman/channels/routes.rs
  • src/openhuman/channels/runtime/startup.rs
  • src/openhuman/config/ops.rs
  • src/openhuman/config/schema/local_ai.rs
  • src/openhuman/config/schema/types.rs
  • src/openhuman/config/schemas.rs
  • src/openhuman/learning/linkedin_enrichment.rs
  • src/openhuman/local_ai/ops.rs
  • src/openhuman/providers/ops.rs
  • src/openhuman/routing/factory.rs
  • src/openhuman/threads/ops.rs
  • src/openhuman/tools/impl/agent/delegate.rs

Comment on lines +19 to +23
const { result } = await openhumanGetConfig();
if (mounted) {
setApiUrl(String(result.config.api_url || ''));
setApiKey(String(result.config.api_key || ''));
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Confirm a URL validator/normalizer is exported from configPersistence and
# can be imported here.
rg -nP '\bexport\s+(const|function)\s+(validate|normalize|buildRpcEndpoint)\w*' app/src/utils/configPersistence.ts

Repository: tinyhumansai/openhuman

Length of output: 188


🏁 Script executed:

# First, let's see the implementations of the exported functions
head -130 app/src/utils/configPersistence.ts | tail -50

Repository: tinyhumansai/openhuman

Length of output: 1236


🏁 Script executed:

# Check if normalizeRpcUrl or buildRpcEndpoint are used in Welcome screen files
rg -n "(normalizeRpcUrl|buildRpcEndpoint)" app/src/components/welcome/ --type ts --type tsx

Repository: tinyhumansai/openhuman

Length of output: 94


🏁 Script executed:

# Examine the full context of CustomModelSection.tsx around lines 19-23
head -40 app/src/components/settings/panels/local-model/CustomModelSection.tsx

Repository: tinyhumansai/openhuman

Length of output: 1235


🏁 Script executed:

# Check what openhumanGetConfig returns and how apiUrl is handled
rg -n "openhumanGetConfig|setApiUrl|apiUrl" app/src/components/settings/panels/local-model/CustomModelSection.tsx -A 3 -B 1

Repository: tinyhumansai/openhuman

Length of output: 1270


🏁 Script executed:

# Check Welcome screen components for validation function usage
rg -n "isValidRpcUrl|normalizeRpcUrl|buildRpcEndpoint" app/src/components/welcome/ -A 2

Repository: tinyhumansai/openhuman

Length of output: 134


🏁 Script executed:

# Get a broader view of Welcome screen config handling
rg -n "configPersistence|import.*config" app/src/components/welcome/ --type ts

Repository: tinyhumansai/openhuman

Length of output: 134


🏁 Script executed:

# Find welcome-related files
fd -i "welcome" app/src/ --type f

Repository: tinyhumansai/openhuman

Length of output: 242


🏁 Script executed:

# Search for usage of validation functions across the codebase
rg -n "isValidRpcUrl|normalizeRpcUrl" app/src/ --type ts --type tsx

Repository: tinyhumansai/openhuman

Length of output: 94


🏁 Script executed:

# Check the full handleSave method to understand the complete context
sed -n '40,60p' app/src/components/settings/panels/local-model/CustomModelSection.tsx

Repository: tinyhumansai/openhuman

Length of output: 783


🏁 Script executed:

# Check the onboarding WelcomeStep for URL config handling
grep -n "isValidRpcUrl\|normalizeRpcUrl\|configPersistence" app/src/pages/onboarding/steps/WelcomeStep.tsx

Repository: tinyhumansai/openhuman

Length of output: 48


🏁 Script executed:

# Look at Welcome.tsx for URL handling
grep -n "isValidRpcUrl\|normalizeRpcUrl\|configPersistence\|api_url" app/src/pages/Welcome.tsx | head -20

Repository: tinyhumansai/openhuman

Length of output: 332


🏁 Script executed:

# Get full context of WelcomeStep to see how it handles RPC URL input
cat app/src/pages/onboarding/steps/WelcomeStep.tsx | head -100

Repository: tinyhumansai/openhuman

Length of output: 4200


🏁 Script executed:

# See the full context of Welcome.tsx to understand how it uses these validators
sed -n '40,80p' app/src/pages/Welcome.tsx

Repository: tinyhumansai/openhuman

Length of output: 1027


🏁 Script executed:

# Check if there's a form or input field for RPC URL in Welcome.tsx
rg -n "input|rpcUrl|api_url" app/src/pages/Welcome.tsx -B 2 -A 2 | head -40

Repository: tinyhumansai/openhuman

Length of output: 1785


Validate API URL before persisting, matching the Welcome screen pattern.

apiUrl.trim() is forwarded without validation. Invalid URLs like malformed protocol, whitespace, wrapper characters, or typos will persist into config.toml and break provider calls. Import isValidRpcUrl and normalizeRpcUrl from app/src/utils/configPersistence.ts—Welcome.tsx already uses this pattern—and apply validation before saving.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/components/settings/panels/local-model/CustomModelSection.tsx` around
lines 19 - 23, When loading the saved config in CustomModelSection (where
openhumanGetConfig() -> setApiUrl/setApiKey is called), validate and normalize
the API URL using isValidRpcUrl and normalizeRpcUrl from
app/src/utils/configPersistence.ts (same pattern as Welcome.tsx) before
persisting into state: import those helpers, call
normalizeRpcUrl(apiUrlCandidate) and only setApiUrl(normalized) if
isValidRpcUrl(normalized) passes, otherwise setApiUrl('') (or a safe fallback)
so malformed/whitespace/wrapped URLs are not persisted.

Comment thread app/src/pages/Welcome.tsx
Comment thread app/src/pages/Welcome.tsx
Comment thread app/src/pages/Welcome.tsx Outdated
Comment on lines +93 to +99
if (response.ok || response.status === 400 || response.status === 405) {
// 400/405 means the endpoint exists but method not found - still valid
setConnectionTested(true);
storeRpcUrl(normalized);
} else {
setRpcUrlError(`Connection failed: ${response.status} ${response.statusText}`);
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Treating HTTP 400 as a successful reachability signal is too permissive.

response.status === 400 is broad — many proxies, captive portals, or unrelated services return 400 for a POST with a JSON body, which would falsely confirm a non-OpenHuman endpoint as valid. A JSON-RPC server should respond 200 (with a JSON-RPC error envelope) to an unknown method like openhuman.ping; 405 is reasonable (method-not-allowed for GET-only servers). Consider dropping 400 from the success set, or instead parsing the response body and accepting it only when it looks like a JSON-RPC envelope (jsonrpc: "2.0").

♻️ Suggested fix
-      if (response.ok || response.status === 400 || response.status === 405) {
-        // 400/405 means the endpoint exists but method not found - still valid
+      if (response.ok || response.status === 405) {
+        // 405 means the endpoint exists but rejects POST shape - still likely valid
         setConnectionTested(true);
         storeRpcUrl(normalized);
       } else {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (response.ok || response.status === 400 || response.status === 405) {
// 400/405 means the endpoint exists but method not found - still valid
setConnectionTested(true);
storeRpcUrl(normalized);
} else {
setRpcUrlError(`Connection failed: ${response.status} ${response.statusText}`);
}
if (response.ok || response.status === 405) {
// 405 means the endpoint exists but rejects POST shape - still likely valid
setConnectionTested(true);
storeRpcUrl(normalized);
} else {
setRpcUrlError(`Connection failed: ${response.status} ${response.statusText}`);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/pages/Welcome.tsx` around lines 93 - 99, The current reachability
check incorrectly treats HTTP 400 as success; update the logic in the block that
calls setConnectionTested and storeRpcUrl so it only accepts true 200 or a
JSON-RPC envelope: remove response.status === 400 from the success conditions
and instead, when response.status !== 200 and !== 405, attempt to parse await
response.json() and verify the top-level jsonrpc === "2.0" before marking
success; ensure you handle JSON parse errors (fall back to setRpcUrlError with
the status/statusText) and only call setConnectionTested(true) and
storeRpcUrl(normalized) when either status === 200, status === 405, or the
parsed body contains jsonrpc === "2.0".

Comment thread app/src/utils/__tests__/configPersistence.test.ts Outdated
#[serde(skip)]
pub config_path: PathBuf,
pub api_url: Option<String>,
pub api_key: Option<String>,
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Confirm whether OPENHUMAN_API_KEY env override (and any redaction logic
# for api_key in snapshots / logs) already exists alongside the existing
# OPENHUMAN_API_URL plumbing.
fd -e rs . src/openhuman/config | xargs rg -nP '\bOPENHUMAN_API_(URL|KEY)\b|\bapi_key\b' -C2

Repository: tinyhumansai/openhuman

Length of output: 4142


🏁 Script executed:

#!/bin/bash
# Search for config load.rs or similar and env var override handling
fd -e rs . src/openhuman/config | head -20

Repository: tinyhumansai/openhuman

Length of output: 839


🏁 Script executed:

#!/bin/bash
# Search for OPENHUMAN_ env vars more broadly (without word boundaries)
rg 'OPENHUMAN_' src/openhuman/config/ --type rs -C2

Repository: tinyhumansai/openhuman

Length of output: 93


🏁 Script executed:

#!/bin/bash
# Look for where OPENHUMAN_API_URL is being loaded/set
rg 'api_url|API_URL' src/openhuman/config/ --type rs -C2 | head -100

Repository: tinyhumansai/openhuman

Length of output: 93


🏁 Script executed:

#!/bin/bash
# Examine load.rs where config loading likely happens
cat -n src/openhuman/config/schema/load.rs

Repository: tinyhumansai/openhuman

Length of output: 50378


🏁 Script executed:

#!/bin/bash
# Search for env var handling in config files - broader search
rg 'env::var|std::env|OPENHUMAN|api_key|api_url' src/openhuman/config/

Repository: tinyhumansai/openhuman

Length of output: 26871


🏁 Script executed:

#!/bin/bash
# Specifically search for OPENHUMAN_API_URL (not just api_url field)
rg 'OPENHUMAN_API_URL|OPENHUMAN_API_KEY' src/

Repository: tinyhumansai/openhuman

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Check the exact location where api_url field is in types.rs and verify no doc comments
cat -n src/openhuman/config/schema/types.rs | grep -A 5 -B 5 'pub api_url\|pub api_key'

Repository: tinyhumansai/openhuman

Length of output: 476


Add a doc comment to api_key documenting it as a secret field.

The api_key field lacks documentation. Add a comment marking it as sensitive and advising against logging or including it in snapshots returned to remote callers:

    pub api_url: Option<String>,
+   /// Optional API key for the upstream backend / OpenAI-compatible provider.
+   /// **Secret** — never log this value or include it in snapshots returned to
+   /// remote callers.
    pub api_key: Option<String>,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/config/schema/types.rs` at line 32, Add a doc comment above the
api_key field (the field named api_key: Option<String>) marking it as
sensitive/secret and advising not to log it or include it in snapshots returned
to remote callers; use a triple-slash doc comment (///) immediately above the
api_key declaration, briefly stating it contains a secret API key and must not
be logged, printed, or exposed in remote snapshots.

Comment thread src/openhuman/config/schemas.rs Outdated
Comment on lines 151 to 168
pub fn create_backend_inference_provider(
api_url: Option<&str>,
api_key: Option<&str>,
options: &ProviderRuntimeOptions,
) -> anyhow::Result<Box<dyn Provider>> {
Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new(
api_url, options,
)))
if let (Some(url), Some(key)) = (api_url, api_key) {
Ok(Box::new(crate::openhuman::providers::compatible::OpenAiCompatibleProvider::new(
"custom_openai",
url,
Some(key),
crate::openhuman::providers::compatible::AuthStyle::Bearer,
)))
} else {
Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new(
api_url, options,
)))
}
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Silent ignore when only one of api_url / api_key is provided + missing branch logging.

Two related concerns:

  1. If a caller passes api_key=Some(...) but api_url=None, the key is silently dropped and the function returns the default backend provider. That's likely a misconfiguration the operator wants to know about. Consider warning/erroring instead of silently falling through.
  2. Per the repo's logging conventions, this branch (custom OpenAI-compatible vs. backend) should emit a tracing::debug! (or info!) with a stable prefix so it's grep-able.
♻️ Suggested fix
 pub fn create_backend_inference_provider(
     api_url: Option<&str>,
     api_key: Option<&str>,
     options: &ProviderRuntimeOptions,
 ) -> anyhow::Result<Box<dyn Provider>> {
-    if let (Some(url), Some(key)) = (api_url, api_key) {
+    if api_key.is_some() && api_url.is_none() {
+        tracing::warn!(
+            "[providers] api_key configured without api_url; falling back to OpenHuman backend and ignoring key"
+        );
+    }
+    if let (Some(url), Some(key)) = (api_url, api_key) {
+        tracing::debug!(target: "providers", "[providers] using OpenAI-compatible custom backend");
         Ok(Box::new(crate::openhuman::providers::compatible::OpenAiCompatibleProvider::new(
             "custom_openai",
             url,
             Some(key),
             crate::openhuman::providers::compatible::AuthStyle::Bearer,
         )))
     } else {
+        tracing::debug!(target: "providers", "[providers] using OpenHuman backend provider");
         Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new(
             api_url, options,
         )))
     }
 }

As per coding guidelines: "Log entry/exit, branches, external calls, retries/timeouts, state transitions, errors with stable grep-friendly prefixes ([domain], [rpc], [ui-flow]) and correlation fields. Never log secrets or full PII."

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub fn create_backend_inference_provider(
api_url: Option<&str>,
api_key: Option<&str>,
options: &ProviderRuntimeOptions,
) -> anyhow::Result<Box<dyn Provider>> {
Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new(
api_url, options,
)))
if let (Some(url), Some(key)) = (api_url, api_key) {
Ok(Box::new(crate::openhuman::providers::compatible::OpenAiCompatibleProvider::new(
"custom_openai",
url,
Some(key),
crate::openhuman::providers::compatible::AuthStyle::Bearer,
)))
} else {
Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new(
api_url, options,
)))
}
}
pub fn create_backend_inference_provider(
api_url: Option<&str>,
api_key: Option<&str>,
options: &ProviderRuntimeOptions,
) -> anyhow::Result<Box<dyn Provider>> {
if api_key.is_some() && api_url.is_none() {
tracing::warn!(
"[providers] api_key configured without api_url; falling back to OpenHuman backend and ignoring key"
);
}
if let (Some(url), Some(key)) = (api_url, api_key) {
tracing::debug!(target: "providers", "[providers] using OpenAI-compatible custom backend");
Ok(Box::new(crate::openhuman::providers::compatible::OpenAiCompatibleProvider::new(
"custom_openai",
url,
Some(key),
crate::openhuman::providers::compatible::AuthStyle::Bearer,
)))
} else {
tracing::debug!(target: "providers", "[providers] using OpenHuman backend provider");
Ok(Box::new(openhuman_backend::OpenHumanBackendProvider::new(
api_url, options,
)))
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/providers/ops.rs` around lines 151 - 168, The function
create_backend_inference_provider silently falls back to
OpenHumanBackendProvider when only one of api_url or api_key is provided and
emits no log about which branch was chosen; change it to detect the
partial-config case and emit a tracing::warn! (or return an error) indicating
the misconfiguration, and add a stable tracing::debug! (or info!) with a
grep-friendly prefix like "[providers] " before returning either the
OpenAiCompatibleProvider or OpenHumanBackendProvider so callers see which branch
was selected; update references in this function
(create_backend_inference_provider) and the provider constructors
(OpenAiCompatibleProvider::new and OpenHumanBackendProvider::new) usage to avoid
logging secrets (log presence of key/url only as booleans or masked) and keep
behavior otherwise unchanged.

Comment thread src/openhuman/routing/factory.rs Outdated
Comment on lines +69 to +73
let auth_style = if local_ai_config.api_key.is_some() {
AuthStyle::Bearer
} else {
AuthStyle::Bearer
};
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🛠️ Refactor suggestion | 🟠 Major

Dead conditional — both branches return AuthStyle::Bearer.

let auth_style = if local_ai_config.api_key.is_some() {
    AuthStyle::Bearer
} else {
    AuthStyle::Bearer
};

This is equivalent to let auth_style = AuthStyle::Bearer;. Either remove the conditional or, if the intent is to support a no-auth style when api_key is None, introduce that variant explicitly.

♻️ Suggested simplification
-    let auth_style = if local_ai_config.api_key.is_some() {
-        AuthStyle::Bearer
-    } else {
-        AuthStyle::Bearer
-    };
-
     let local: Box<dyn Provider> = Box::new(OpenAiCompatibleProvider::new(
         provider_label,
         &local_base,
         local_ai_config.api_key.as_deref(), // local servers do not require authentication, but custom endpoints might
-        auth_style,
+        AuthStyle::Bearer,
     ));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/routing/factory.rs` around lines 69 - 73, The conditional
assigning auth_style is dead because both branches return AuthStyle::Bearer;
replace the if expression with a direct assignment let auth_style =
AuthStyle::Bearer; or, if the intended behavior is to permit no-auth when
local_ai_config.api_key is None, add an appropriate enum variant (e.g.,
AuthStyle::None or AuthStyle::NoAuth) to AuthStyle and set auth_style = if
local_ai_config.api_key.is_some() { AuthStyle::Bearer } else { AuthStyle::NoAuth
}, updating any code that consumes auth_style (in the routing/factory logic that
constructs requests) to handle the new variant.

Comment on lines 75 to 80
let local: Box<dyn Provider> = Box::new(OpenAiCompatibleProvider::new(
provider_label,
&local_base,
None, // local servers do not require authentication
AuthStyle::Bearer,
local_ai_config.api_key.as_deref(), // local servers do not require authentication, but custom endpoints might
auth_style,
));
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Ollama branch now also forwards local_ai_config.api_key — verify intent.

The provider construction at lines 75–80 is shared between the OpenAI-compat branch and the Ollama branch. Previously the Ollama path always passed None; now it forwards local_ai_config.api_key.as_deref(), which means an Authorization: Bearer … header will be attached to Ollama requests when the user configured a key. If the key is intended only for custom_openai/llama-server, gate it on use_openai_compat_local.

♻️ Suggested gating
+    let local_api_key = if use_openai_compat_local {
+        local_ai_config.api_key.as_deref()
+    } else {
+        None
+    };
     let local: Box<dyn Provider> = Box::new(OpenAiCompatibleProvider::new(
         provider_label,
         &local_base,
-        local_ai_config.api_key.as_deref(),
+        local_api_key,
         AuthStyle::Bearer,
     ));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/routing/factory.rs` around lines 75 - 80, The Ollama branch now
inadvertently forwards local_ai_config.api_key into
OpenAiCompatibleProvider::new causing Authorization headers to be sent for
Ollama; update the factory so that when constructing the provider for the Ollama
path you only pass local_ai_config.api_key.as_deref() if use_openai_compat_local
is true, otherwise pass None (keep the existing behavior for the
OpenAI-compat/local path by continuing to forward the key); locate the call to
OpenAiCompatibleProvider::new where local_base/local_ai_config.api_key are used
and add the conditional gate around the api_key argument.

senamakel
senamakel previously approved these changes Apr 29, 2026
- Fix typo: update.hotoken → update.hotkey in config/schemas.rs
- Remove unused vi import from configPersistence.test.ts
- Delete scratch.js (leftover debug file)
- Remove dead auth_style if/else branch in routing/factory.rs
- Add clearCoreRpcUrlCache() to coreRpcClient.ts and wire into Welcome.tsx

All checks pass: typecheck, lint, format, unit tests (985 passed), cargo check (tauri shell clean; core errors are pre-existing).
- configPersistence.ts: import CORE_RPC_URL from config.ts instead of
  duplicating the hardcoded default to prevent drift
- Welcome.tsx: rename connectionTested -> saveSuccess to distinguish
  save-only from tested-connection feedback; narrow HTTP 400 acceptance
  to 405 only (more precise JSON-RPC reachability signal)
- config/ops.rs: trim api_key before persisting so pasted keys with
  leading/trailing whitespace don't cause auth failures
- providers/ops.rs: log warning when api_key is provided without api_url
  so the silent drop is visible in diagnostics
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

♻️ Duplicate comments (1)
app/src/pages/Welcome.tsx (1)

71-105: ⚠️ Potential issue | 🟡 Minor

Use separate feedback for save vs. test.

handleTestConnection still flips saveSuccess, so a failed retry can leave the stale “URL saved successfully.” banner visible from the previous save. Split the save/test outcomes into distinct state, or clear the save banner before testing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/pages/Welcome.tsx` around lines 71 - 105, handleTestConnection
currently sets setSaveSuccess(true) when the endpoint responds, which lets a
previous successful save banner persist after a failed retry; change the
behavior so testing and saving are distinct: either clear the saved-success
banner at the start of testing (call setSaveSuccess(false) before network call)
or introduce a separate test state (e.g., setTestSuccess) and only call
setSaveSuccess(true) from the save flow (storeRpcUrl), while
handleTestConnection only updates test-specific state (and still uses
setRpcUrlError/setIsTestingConnection as now).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@app/src/pages/Welcome.tsx`:
- Around line 71-105: handleTestConnection currently sets setSaveSuccess(true)
when the endpoint responds, which lets a previous successful save banner persist
after a failed retry; change the behavior so testing and saving are distinct:
either clear the saved-success banner at the start of testing (call
setSaveSuccess(false) before network call) or introduce a separate test state
(e.g., setTestSuccess) and only call setSaveSuccess(true) from the save flow
(storeRpcUrl), while handleTestConnection only updates test-specific state (and
still uses setRpcUrlError/setIsTestingConnection as now).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7ece002a-3414-4986-b212-3be59e85ce75

📥 Commits

Reviewing files that changed from the base of the PR and between 8abd08c and 72fa768.

📒 Files selected for processing (12)
  • .github/workflows/build.yml
  • CHANGELOG.md
  • app/src/components/settings/panels/LocalModelDebugPanel.tsx
  • app/src/components/settings/panels/local-model/CustomModelSection.tsx
  • app/src/pages/Welcome.tsx
  • app/src/services/coreRpcClient.ts
  • app/src/utils/__tests__/configPersistence.test.ts
  • app/src/utils/configPersistence.ts
  • scripts/upload_sentry_symbols.sh
  • src/openhuman/agent/triage/routing.rs
  • src/openhuman/channels/routes.rs
  • src/openhuman/channels/runtime/startup.rs
✅ Files skipped from review due to trivial changes (3)
  • .github/workflows/build.yml
  • app/src/components/settings/panels/LocalModelDebugPanel.tsx
  • scripts/upload_sentry_symbols.sh
🚧 Files skipped from review as they are similar to previous changes (7)
  • src/openhuman/channels/routes.rs
  • src/openhuman/agent/triage/routing.rs
  • src/openhuman/channels/runtime/startup.rs
  • CHANGELOG.md
  • app/src/components/settings/panels/local-model/CustomModelSection.tsx
  • app/src/utils/configPersistence.ts
  • app/src/services/coreRpcClient.ts

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

🧹 Nitpick comments (2)
src/openhuman/config/schemas.rs (2)

590-591: Rename local variable to camelCase for repo consistency.

Line 590 uses app_version; this repo rule requires camelCase variable names.

Suggested rename
-        let app_version =
+        let appVersion =
             std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string());
@@
-                "app_version": app_version,
+                "app_version": appVersion,
As per coding guidelines: `src/openhuman/**/*.rs`: “Use camelCase for variable names”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/config/schemas.rs` around lines 590 - 591, Rename the local
variable app_version to camelCase (e.g., appVersion) to follow repo naming
conventions: change the let binding that calls
std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string())
to use appVersion and update all subsequent references in the same scope
(functions/struct initializers) to the new name so compilation succeeds and
style is consistent.

587-601: Add [rpc] debug breadcrumbs for the new handler path.

This new RPC flow has no entry/exit diagnostics, which makes tracing failures harder.

Suggested logging additions
 fn handle_get_client_config(_params: Map<String, Value>) -> ControllerFuture {
     Box::pin(async move {
+        log::debug!("[rpc][config.get_client_config] start");
         let config = config_rpc::load_config_with_timeout().await?;
         let app_version =
             std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string());
-        to_json(RpcOutcome::new(
+        let outcome = RpcOutcome::new(
             serde_json::json!({
                 "api_url": config.api_url,
                 "default_model": config.default_model,
                 "app_version": app_version,
             }),
             vec!["client config read".to_string()],
-        ))
+        );
+        log::debug!("[rpc][config.get_client_config] success");
+        to_json(outcome)
     })
 }
As per coding guidelines: `src/openhuman/**/*.rs` must log entry/exit and key flow points with stable prefixes (e.g., `[rpc]`) and use `log`/`tracing` at `debug`/`trace` level.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/config/schemas.rs` around lines 587 - 601, Add structured
entry/exit debug breadcrumbs to the new RPC handler by logging at the start and
before return using the stable "[rpc]" prefix. Inside handle_get_client_config,
emit a debug (or trace) log like "[rpc] handle_get_client_config entry"
immediately on entry, log the result of config_rpc::load_config_with_timeout (or
app_version) at trace/debug after it returns (e.g., "[rpc] loaded client config:
{api_url, default_model, app_version}"), and emit an exit breadcrumb like "[rpc]
handle_get_client_config exit" just before calling to_json; use the project's
log/tracing macros (log::debug! or tracing::debug!/trace!) so diagnostics are
consistent with other src/openhuman handlers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/openhuman/composio/periodic.rs`:
- Around line 278-300: The test currently sets OPENHUMAN_WORKSPACE unsafely and
unconditionally removes it at the end, which leaks state on panic and clobbers
any prior value; capture the prior value with std::env::var_os before changing
it, set OPENHUMAN_WORKSPACE to the tempdir, and create an RAII restore guard (a
small struct with the captured Option<OsString> and a Drop impl) that on Drop
restores the original value (set_var if Some, remove_var if None); remove the
unsafe blocks around set_var/remove_var and ensure the guard lives for the
duration of the test so run_one_tick and ENV_LOCK remain unchanged.

In `@src/openhuman/config/schemas.rs`:
- Line 237: The schema description string currently claims it returns “feature
flags” but the actual response only includes api_url, default_model, and
app_version; update the description text in the schema declaration to accurately
list those fields (e.g., "Read safe client-facing config fields: api_url,
default_model, app_version. No secrets.") so it matches the output; locate and
edit the description literal in the OpenAPI/schema variable that defines this
response (the string containing "Read safe client-facing config fields (api_url,
feature flags). No secrets.") and replace it with the corrected wording
referencing api_url, default_model, and app_version.
- Around line 587-601: Add an E2E JSON-RPC test that exercises the registered
method "config.get_client_config" and verifies the response fields from
handle_get_client_config: ensure the returned JSON contains "api_url",
"default_model", and "app_version" with expected values; add the test into
tests/json_rpc_e2e.rs, send a JSON-RPC request for the
"config.get_client_config" method, await the response, and assert the presence
and types/values of those keys (set OPENHUMAN_APP_VERSION in the test
environment if you need a deterministic app_version); this ensures the handler
(handle_get_client_config) is covered end-to-end.

---

Nitpick comments:
In `@src/openhuman/config/schemas.rs`:
- Around line 590-591: Rename the local variable app_version to camelCase (e.g.,
appVersion) to follow repo naming conventions: change the let binding that calls
std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string())
to use appVersion and update all subsequent references in the same scope
(functions/struct initializers) to the new name so compilation succeeds and
style is consistent.
- Around line 587-601: Add structured entry/exit debug breadcrumbs to the new
RPC handler by logging at the start and before return using the stable "[rpc]"
prefix. Inside handle_get_client_config, emit a debug (or trace) log like "[rpc]
handle_get_client_config entry" immediately on entry, log the result of
config_rpc::load_config_with_timeout (or app_version) at trace/debug after it
returns (e.g., "[rpc] loaded client config: {api_url, default_model,
app_version}"), and emit an exit breadcrumb like "[rpc] handle_get_client_config
exit" just before calling to_json; use the project's log/tracing macros
(log::debug! or tracing::debug!/trace!) so diagnostics are consistent with other
src/openhuman handlers.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c9ea73e3-3b3e-473e-b1cc-fa7ac28b36f1

📥 Commits

Reviewing files that changed from the base of the PR and between 72fa768 and 9b38075.

📒 Files selected for processing (5)
  • src/openhuman/agent/harness/session/builder.rs
  • src/openhuman/composio/periodic.rs
  • src/openhuman/config/ops_tests.rs
  • src/openhuman/config/schemas.rs
  • src/openhuman/threads/ops.rs
✅ Files skipped from review due to trivial changes (1)
  • src/openhuman/config/ops_tests.rs
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/openhuman/threads/ops.rs
  • src/openhuman/agent/harness/session/builder.rs

Comment on lines +278 to +300
// Isolate the workspace/env so config loading doesn't contend with
// sibling tests mutating OPENHUMAN_WORKSPACE in parallel.
let _guard = ENV_LOCK.lock().expect("env lock");
let tmp = tempdir().expect("tempdir");
unsafe {
std::env::set_var("OPENHUMAN_WORKSPACE", tmp.path());
}

// With no session stored in the isolated workspace,
// `build_composio_client` returns None and the tick should
// silently skip (returning Ok). This covers the early-return
// path that's otherwise only hit in production.
let inner = tokio::time::timeout(Duration::from_secs(5), run_one_tick())
.await
.expect("run_one_tick should not hang indefinitely during tests");
assert!(
inner.is_ok(),
"run_one_tick should return Ok when no client is available: {inner:?}"
);

unsafe {
std::env::remove_var("OPENHUMAN_WORKSPACE");
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Make OPENHUMAN_WORKSPACE restoration panic-safe and value-preserving.

Line 299 always removes the variable instead of restoring any prior value, and cleanup is skipped if the test panics before the final block. This can leak state into later tests.

🔧 Suggested fix
     #[tokio::test]
     async fn run_one_tick_returns_ok_when_no_client() {
         // Isolate the workspace/env so config loading doesn't contend with
         // sibling tests mutating OPENHUMAN_WORKSPACE in parallel.
         let _guard = ENV_LOCK.lock().expect("env lock");
         let tmp = tempdir().expect("tempdir");
+        let previousWorkspace = std::env::var_os("OPENHUMAN_WORKSPACE");
+        struct WorkspaceEnvRestore(Option<std::ffi::OsString>);
+        impl Drop for WorkspaceEnvRestore {
+            fn drop(&mut self) {
+                unsafe {
+                    match self.0.take() {
+                        Some(value) => std::env::set_var("OPENHUMAN_WORKSPACE", value),
+                        None => std::env::remove_var("OPENHUMAN_WORKSPACE"),
+                    }
+                }
+            }
+        }
         unsafe {
             std::env::set_var("OPENHUMAN_WORKSPACE", tmp.path());
         }
+        let _workspaceEnvRestore = WorkspaceEnvRestore(previousWorkspace);
 
         // With no session stored in the isolated workspace,
         // `build_composio_client` returns None and the tick should
         // silently skip (returning Ok). This covers the early-return
         // path that's otherwise only hit in production.
@@
         assert!(
             inner.is_ok(),
             "run_one_tick should return Ok when no client is available: {inner:?}"
         );
-
-        unsafe {
-            std::env::remove_var("OPENHUMAN_WORKSPACE");
-        }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/composio/periodic.rs` around lines 278 - 300, The test
currently sets OPENHUMAN_WORKSPACE unsafely and unconditionally removes it at
the end, which leaks state on panic and clobbers any prior value; capture the
prior value with std::env::var_os before changing it, set OPENHUMAN_WORKSPACE to
the tempdir, and create an RAII restore guard (a small struct with the captured
Option<OsString> and a Drop impl) that on Drop restores the original value
(set_var if Some, remove_var if None); remove the unsafe blocks around
set_var/remove_var and ensure the guard lives for the duration of the test so
run_one_tick and ENV_LOCK remain unchanged.

"get_client_config" => ControllerSchema {
namespace: "config",
function: "get_client_config",
description: "Read safe client-facing config fields (api_url, feature flags). No secrets.",
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Schema description overstates the contract.

Line 237 says “feature flags”, but outputs only define api_url, default_model, and app_version. Please align description text with actual response fields.

Suggested text fix
-            description: "Read safe client-facing config fields (api_url, feature flags). No secrets.",
+            description: "Read safe client-facing config fields (api_url, default_model, app_version). No secrets.",
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
description: "Read safe client-facing config fields (api_url, feature flags). No secrets.",
description: "Read safe client-facing config fields (api_url, default_model, app_version). No secrets.",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/config/schemas.rs` at line 237, The schema description string
currently claims it returns “feature flags” but the actual response only
includes api_url, default_model, and app_version; update the description text in
the schema declaration to accurately list those fields (e.g., "Read safe
client-facing config fields: api_url, default_model, app_version. No secrets.")
so it matches the output; locate and edit the description literal in the
OpenAPI/schema variable that defines this response (the string containing "Read
safe client-facing config fields (api_url, feature flags). No secrets.") and
replace it with the corrected wording referencing api_url, default_model, and
app_version.

Comment on lines +587 to +601
fn handle_get_client_config(_params: Map<String, Value>) -> ControllerFuture {
Box::pin(async move {
let config = config_rpc::load_config_with_timeout().await?;
let app_version =
std::env::var("OPENHUMAN_APP_VERSION").unwrap_or_else(|_| "unknown".to_string());
to_json(RpcOutcome::new(
serde_json::json!({
"api_url": config.api_url,
"default_model": config.default_model,
"app_version": app_version,
}),
vec!["client config read".to_string()],
))
})
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Locating json_rpc_e2e.rs ..."
E2E_FILE="$(fd -i '^json_rpc_e2e\.rs$' tests src | head -n1 || true)"
if [[ -z "${E2E_FILE}" ]]; then
  echo "ERROR: json_rpc_e2e.rs not found under tests/ or src/"
  exit 1
fi
echo "Found: ${E2E_FILE}"

echo
echo "Checking for get_client_config coverage..."
rg -n -C3 'get_client_config|config_get_client_config|openhuman\.config_get_client_config' "${E2E_FILE}" || true

echo
echo "Method registered in schemas.rs:"
rg -n -C2 '"get_client_config"|namespace:\s*"config"|function:\s*"get_client_config"' src/openhuman/config/schemas.rs

Repository: tinyhumansai/openhuman

Length of output: 6171


Add JSON-RPC E2E test coverage for config.get_client_config.

The endpoint is properly registered in src/openhuman/config/schemas.rs (lines 103–238) and implemented as handle_get_client_config, but it has no test coverage in tests/json_rpc_e2e.rs. According to the feature design workflow, JSON-RPC E2E tests must be in place before UI implementation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/config/schemas.rs` around lines 587 - 601, Add an E2E JSON-RPC
test that exercises the registered method "config.get_client_config" and
verifies the response fields from handle_get_client_config: ensure the returned
JSON contains "api_url", "default_model", and "app_version" with expected
values; add the test into tests/json_rpc_e2e.rs, send a JSON-RPC request for the
"config.get_client_config" method, await the response, and assert the presence
and types/values of those keys (set OPENHUMAN_APP_VERSION in the test
environment if you need a deterministic app_version); this ensures the handler
(handle_get_client_config) is covered end-to-end.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants