feat(integrations): add agent integration tools for Twilio, Google Places, and Parallel#375
Conversation
…aces, and Parallel

Add a new `src/openhuman/integrations/` module with 5 backend-proxied tools that give agents access to phone calls, location search, and web search/extraction. Each tool calls the backend API, which handles external API keys, billing, rate limiting, and markup — no pricing logic on the client side.

Tools added:
- TwilioCallTool (scope: CLI/RPC only — requires explicit user action)
- GooglePlacesSearchTool, GooglePlacesDetailsTool (scope: All)
- ParallelSearchTool, ParallelExtractTool (scope: All)

Includes IntegrationsConfig with per-integration toggles, a shared IntegrationClient with a pricing cache (fetched from backend GET /agent-integrations/pricing), a ToolScope enum for future scope-based filtering, and 33 unit tests.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the relevant review setting.
📝 Walkthrough

Adds a backend-proxied integrations subsystem: new config types and re-exports, an IntegrationClient with HTTP helpers and a pricing cache, three integration tools (Google Places, Parallel, Twilio), runtime conditional tool registration, and a ToolScope gate preventing CLI-only tools from running inside the agent loop.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as Agent Loop
    participant Tool as Integration Tool
    participant Client as IntegrationClient
    participant Backend as Backend API
    Agent->>Tool: execute(args: JSON)
    activate Tool
    Tool->>Tool: validate args & check scope()
    alt denied by scope
        Tool-->>Agent: ToolResult::error (denied)
    else allowed
        Tool->>Client: post("/agent-integrations/{svc}/{op}", body)
        activate Client
        Client->>Backend: HTTP POST (Authorization: Bearer ..., JSON)
        activate Backend
        Backend-->>Client: { success, data, error }
        deactivate Backend
        Client->>Client: check status & envelope, update cache if pricing
        alt client error / parse fail
            Client-->>Tool: Err(...)
        else ok
            Client-->>Tool: Ok(data)
        end
        deactivate Client
        Tool->>Tool: format response
        Tool-->>Agent: ToolResult::success(formatted)
    end
    deactivate Tool
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Combine IntegrationToggle/IntegrationsConfig from feat branch with UpdateConfig from upstream/main. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 9
🧹 Nitpick comments (5)
src/openhuman/integrations/google_places.rs (2)
254-254: Consider using debug level for place ID logging. While place IDs themselves aren't PII, consistent use of `debug!` for request-level logging aligns with the coding guidelines.

🔧 Proposed change

```diff
- tracing::info!("[google_places_details] placeId={}", place_id);
+ tracing::debug!("[google_places_details] placeId={}", place_id);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/google_places.rs` at line 254, The log at tracing::info!("[google_places_details] placeId={}", place_id) should use debug level per logging guidelines; update that call to tracing::debug! so request-level place ID logging is emitted at debug level (locate the logging call in the google_places_details handling and replace info! with debug!).
156-156: Consider using debug level for query logging. Search queries could inadvertently contain PII (e.g., "restaurants near 123 Main St" or "doctors near John Smith's address"). Using `debug!` instead of `info!` reduces exposure in production logs.

🔧 Proposed change

```diff
- tracing::info!("[google_places_search] query={:?}", query);
+ tracing::debug!("[google_places_search] query={:?}", query);
```

As per coding guidelines: "Use `log`/`tracing` at `debug` or `trace` level for Rust debug logging".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/google_places.rs` at line 156, The log statement tracing::info!("[google_places_search] query={:?}", query) currently emits search queries at info level which may expose PII; change it to tracing::debug! so queries are only logged at debug level (locate the call in google_places.rs where tracing::info! is invoked for the google_places_search and replace with tracing::debug!), ensuring any structured fields (like query) are preserved in the new debug log.

src/openhuman/tools/ops.rs (1)
197-223: Consider adding debug logging when integrations are disabled or client creation fails. The registration logic is clean, but there's no visibility when `build_client` returns `None` (e.g., due to a missing `backend_url` or `auth_token`). This makes debugging configuration issues harder.

🔧 Proposed enhancement for observability

```diff
 // ── Agent integration tools (backend-proxied) ─────────────────
-if let Some(client) = crate::openhuman::integrations::build_client(&root_config.integrations) {
+if root_config.integrations.enabled {
+    if let Some(client) = crate::openhuman::integrations::build_client(&root_config.integrations) {
         if root_config.integrations.google_places.enabled {
             // ... existing registration ...
         }
         // ... rest of registrations ...
+    } else {
+        tracing::debug!(
+            "[integrations] enabled but client creation failed (missing backend_url or auth_token?)"
+        );
+    }
+} else {
+    tracing::trace!("[integrations] master switch disabled, skipping integration tools");
 }
```

As per coding guidelines: "add substantial, development-oriented logs on new/changed flows including entry/exit points, branch decisions".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/ops.rs` around lines 197 - 223, When calling crate::openhuman::integrations::build_client(&root_config.integrations) add development-oriented tracing.debug/tracing::warn logs for the failure and branch decisions: log a message when build_client returns None (include why if available from config such as missing backend_url/auth_token), and inside each integrations.enabled branch add an else branch that logs that the specific integration (google_places, parallel, twilio) is disabled so callers can see why tools were not registered; reference the build_client call and the GooglePlacesSearchTool/GooglePlacesDetailsTool, ParallelSearchTool/ParallelExtractTool, and TwilioCallTool registration points to insert these logs.src/openhuman/integrations/mod.rs (1)
7-194: Keep `integrations/mod.rs` export-focused. This `mod.rs` now owns transport, pricing, DTOs, and client construction. Moving the operational pieces into sibling files (`client.rs`, `types.rs`, `pricing.rs`, etc.) and re-exporting them here would align the module with the repo's boundary rule.

As per coding guidelines, "Keep domain `mod.rs` files light and export-focused; place operational code in sibling files (e.g., `ops.rs`, `store.rs`, `schedule.rs`, `types.rs`), then re-export the public API from `mod.rs`".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/mod.rs` around lines 7 - 194, The file currently mixes exports and operational implementation; split the heavy operational pieces into focused modules and re-export them from mod.rs: move IntegrationClient and its impl (new, post, get, pricing) into a client.rs, move IntegrationPricing, PricingIntegrations, IntegrationPricingEntry, BackendResponse, and ToolScope into types.rs (or pricing.rs for pricing-specific DTOs), and move build_client into a helpers or factory module; then in mod.rs keep only the pub mod declarations and pub use re-exports (e.g., pub use client::IntegrationClient; pub use types::{IntegrationPricing, BackendResponse, ToolScope}; pub use client::build_client) so mod.rs remains export-focused while functionality lives in the new sibling files.

src/openhuman/integrations/parallel.rs (1)
1-566: Split `parallel.rs` before it grows further. This new file is already 566 lines with DTOs, two tool implementations, and inline tests. Pulling one tool or the shared formatting/types into sibling modules will make the integration surface easier to review and maintain.
As per coding guidelines, "Prefer ≤ ~500 lines per Rust source file; split modules when growing".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/parallel.rs` around lines 1 - 566, The file is too large and should be split: extract the DTOs (SearchResponse, SearchResultItem, ExtractResponse, ExtractResultItem, ExtractError) into a new sibling module (e.g., parallel_models.rs) and move each tool implementation into its own module (parallel_search.rs containing ParallelSearchTool and parallel_extract.rs containing ParallelExtractTool), update the parent module to declare pub mod parallel_models; pub mod parallel_search; pub mod parallel_extract; and adjust imports to use crate::openhuman::integrations::parallel_models::{...} (or relative paths) and to import IntegrationClient, Tool, ToolResult, ToolScope as needed; ensure tests are moved or re-exported appropriately and visibility (pub(crate)/pub) is updated so the code compiles unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/integrations/mod.rs`:
- Around line 120-122: The debug log and error currently include the full
backend response body (body_text) which may contain PII; instead, stop logging
raw body_text and log a safe redaction or summary (e.g., "<redacted-response>"
or a fixed-length hash/summary). Update the code paths that call
resp.text().await.unwrap_or_default(), tracing::debug!("[integrations] POST {} →
{} {}", url, status, body_text) and anyhow::bail!("Backend returned {}: {}",
status, body_text) to use a redacted/summary string (or omit the body entirely)
before logging or bailing; apply the same change for the similar block
referenced by lines 143-145 so no raw response body is emitted.
- Around line 62-71: The current post() and get() helpers deserialize responses
into BackendResponse<T> but only check HTTP status, then return data
unconditionally; update post(), get(), and any similar callers (the ones that
deserialize to BackendResponse<T>) to inspect BackendResponse::success and if
false return an Err containing the backend error message
(BackendResponse::error) or a clear error constructed from it instead of
returning data, otherwise return the contained data; locate the
BackendResponse<T> type and the post()/get() functions to implement this
success-check and error propagation consistently.
- Around line 21-31: Add a new method scope() to the Tool trait that returns
ToolScope with a default implementation returning ToolScope::All so boxed tools
(Box<dyn Tool>) can expose their declared scope; update concrete tool impls that
need restrictions (e.g., TwilioCallTool::scope()) to override and return
ToolScope::CliRpcOnly; then modify the agent loop in tool_loop.rs to query
tool.scope() before executing a tool and skip/deny execution when the scope
disallows agent execution; references to locate edits: Tool trait, ToolScope
enum, TwilioCallTool, ops.rs (where tools are boxed), and tool_loop.rs (where to
enforce).
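The envelope success-check described in the second `mod.rs` prompt above can be sketched roughly as follows. This is a hedged illustration, not the repository's code: the `BackendResponse` fields follow the review text, `unwrap_envelope` is a hypothetical helper name, and the error type is simplified to `String` (the real helpers appear to use `anyhow`).

```rust
// Sketch only: mirrors the { success, data, error } envelope shape
// described in the review; not the repository's actual types.
pub struct BackendResponse<T> {
    pub success: bool,
    pub data: T,
    pub error: Option<String>,
}

// Propagate the backend's error message instead of returning `data`
// unconditionally when `success` is false.
pub fn unwrap_envelope<T>(envelope: BackendResponse<T>) -> Result<T, String> {
    if envelope.success {
        Ok(envelope.data)
    } else {
        Err(envelope
            .error
            .unwrap_or_else(|| "backend reported failure without an error message".to_string()))
    }
}
```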
In `@src/openhuman/integrations/parallel.rs`:
- Around line 401-408: The fallback "No content extracted..." is unreachable
because you push the cost line before checking emptiness; adjust the logic in
the function that builds the response (the block manipulating lines and using
resp.cost_usd) to test lines.is_empty() before appending the cost, and return
ToolResult::success("No content extracted from the provided URLs.") when empty;
alternatively, if you want to always show cost, return a combined message that
includes both the "No content extracted..." text and the cost instead of only
appending the cost line.
- Around line 204-208: The current tracing::info! call in parallel_search is
emitting the raw free-form user input variable objective at info level; change
it to avoid logging full PII by removing objective from the info log and only
emitting non-sensitive derived fields (e.g., queries.len() and any mode flags)
at info level, and if you still want to record the objective for debugging
move/duplicate that part to a debug or trace call (tracing::debug! or
tracing::trace!) or redact it before logging; update the invocation that
references objective and queries.len() accordingly to use tracing::info!( "
[parallel_search] queries={}", queries.len()) and tracing::debug!( "
[parallel_search] objective_redacted={}", /*redacted*/ ) or tracing::debug!( "
[parallel_search] objective={:?}", objective ) as appropriate.
- Around line 160-177: The current code silently drops non-string/blank entries
when building `queries` from `search_queries` by using `filter_map(|v|
v.as_str())`; change this to validate every element and return a
`ToolResult::error` if any element is non-string or an empty string. Concretely:
after obtaining `search_queries` from `args.get("search_queries")`, iterate and
ensure each `Value` has a non-empty `as_str()` (fail the whole call via
`ToolResult::error` with an explanatory message if any element is
missing/invalid), then collect into `queries: Vec<&str>` only after validation.
Apply the same fix to the analogous `urls` validation block (the code around the
`urls` handling at ~316-329) so that any non-string or blank URL causes the
request to be rejected rather than silently dropped.
- Around line 231-239: The code currently slices &str by byte offsets (e.g.,
&text[..500] and &content[..MAX_CONTENT_CHARS]) which can panic on UTF-8
boundaries; add a UTF-8-safe helper like truncate_chars(s: &str, max_chars:
usize) -> (&str, bool) using char_indices().nth(max_chars) to find the byte
index, then replace occurrences where you build truncated in the excerpt
handling (the `excerpt`/`text`/`truncated` logic) and the content truncation
that uses MAX_CONTENT_CHARS to call truncate_chars and use the returned slice
and boolean to determine whether to append "..." instead of slicing by bytes.
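The UTF-8-safe helper the prompt above describes can be sketched like this; the name and signature are taken from the prompt text, so treat it as an illustration rather than the PR's actual code:

```rust
/// Truncate a string to at most `max_chars` characters without ever
/// slicing inside a multi-byte UTF-8 codepoint. Returns the (possibly
/// shortened) slice plus a flag telling the caller whether to append "...".
pub fn truncate_chars(s: &str, max_chars: usize) -> (&str, bool) {
    match s.char_indices().nth(max_chars) {
        // `byte_idx` is the byte offset where the first char past the
        // limit starts, so slicing at it is always a valid boundary.
        Some((byte_idx, _)) => (&s[..byte_idx], true),
        // Fewer than `max_chars` chars: nothing to cut.
        None => (s, false),
    }
}
```

Byte slicing like `&text[..500]` panics when byte 500 falls inside a multi-byte character (e.g. `é`), which is exactly the failure mode flagged above.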
In `@src/openhuman/integrations/twilio.rs`:
- Line 117: The call site currently logs the full destination phone number with
tracing::info!("[twilio_call] calling {}", to);; update the logging in the
twilio_call flow to avoid emitting full PII by either redacting the number
(e.g., show only last 4 digits or mask with asterisks) or demoting the message
to trace level, and replace the tracing::info! invocation with the chosen
approach so logs never contain the complete phone number.
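For the redaction option, a last-four mask could look like the sketch below; `redact_phone` is a hypothetical helper name introduced here for illustration, not something in the PR:

```rust
/// Mask all but the last four characters of a phone number for logging.
/// Inputs of four characters or fewer are fully masked.
pub fn redact_phone(to: &str) -> String {
    let chars: Vec<char> = to.chars().collect();
    if chars.len() <= 4 {
        return "*".repeat(chars.len());
    }
    let tail: String = chars[chars.len() - 4..].iter().collect();
    format!("{}{}", "*".repeat(chars.len() - 4), tail)
}
```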
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 991e1af0-2c1c-4bbd-b081-6be8dfb48548
📒 Files selected for processing (10)
src/openhuman/config/mod.rs
src/openhuman/config/schema/mod.rs
src/openhuman/config/schema/tools.rs
src/openhuman/config/schema/types.rs
src/openhuman/integrations/google_places.rs
src/openhuman/integrations/mod.rs
src/openhuman/integrations/parallel.rs
src/openhuman/integrations/twilio.rs
src/openhuman/mod.rs
src/openhuman/tools/ops.rs
```rust
/// Controls where an integration tool is available.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ToolScope {
    /// Available in agent loop, CLI, and RPC.
    All,
    /// Only available in the autonomous agent loop.
    #[allow(dead_code)]
    AgentOnly,
    /// Only available via explicit CLI/RPC invocation (not autonomous agent).
    CliRpcOnly,
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "=== Tool trait definition ==="
fd 'traits.rs' src/openhuman/tools --exec sed -n '1,220p' {}
echo
echo "=== ToolScope / scope usage ==="
rg -n --glob '*.rs' 'ToolScope|fn scope\s*\(' src/openhuman
echo
echo "=== Tool boxing / registry ==="
rg -n --glob '*.rs' 'Box<dyn Tool>|all_tools_with_runtime' src/openhuman
```

Repository: tinyhumansai/openhuman
Length of output: 10462

🏁 Script executed:

```shell
cat -n src/openhuman/tools/ops.rs | sed -n '60,100p'
```

Length of output: 2184

🏁 Script executed:

```shell
rg -n 'scope\(\)' src/openhuman/integrations/ -A 1 -B 1
```

Length of output: 1616

🏁 Script executed:

```shell
rg -n 'collect_skill_tools\|default_tools' src/openhuman --type rs -B 2 -A 5
```

Length of output: 92

🏁 Script executed:

```shell
rg -n 'ToolScope::' src/openhuman --type rs -C 2
```

Length of output: 92

🏁 Script executed:

```shell
rg -n 'integration' src/openhuman/tools/ops.rs -i
```

Length of output: 1153

🏁 Script executed:

```shell
rg -n 'ParallelSearch\|TwilioSend\|GooglePlaces' src/openhuman --glob '*.rs'
```

Length of output: 48

🏁 Script executed:

```shell
rg -n 'ToolScope::AgentOnly\|ToolScope::CliRpcOnly\|filter.*scope\|scope.*filter' src/openhuman --glob '*.rs' -i
```

Length of output: 48

🏁 Script executed:

```shell
rg -n 'for.*tools_registry\|for.*tools\s*in' src/openhuman/agent --glob '*.rs' -A 3
```

Length of output: 48

🏁 Script executed:

```shell
rg -n 'execute.*tool\|find_tool' src/openhuman/agent --glob '*.rs' -B 2 -A 5
```

Length of output: 48

🏁 Script executed:

```shell
cat -n src/openhuman/agent/loop_/tool_loop.rs | head -100
```

Length of output: 4241

🏁 Script executed:

```shell
rg -n 'find_tool' src/openhuman/agent/loop_/tool_loop.rs -A 15
```

Length of output: 2128

🏁 Script executed:

```shell
cat -n src/openhuman/agent/loop_/parse.rs | sed -n '1,50p'
```

Length of output: 2001

🏁 Script executed:

```shell
rg -n 'CliRpcOnly\|AgentOnly' src/openhuman/agent --glob '*.rs'
```

Length of output: 48

🏁 Script executed:

```shell
rg -n 'scope\|Scope' src/openhuman/channels --glob '*.rs'
```

Length of output: 48

🏁 Script executed:

```shell
rg -n 'TwilioCallTool\|ParallelSearchTool' src/openhuman/agent --glob '*.rs'
```

Length of output: 48

🏁 Script executed:

```shell
cat -n src/openhuman/integrations/mod.rs | sed -n '1,50p'
```

Length of output: 2024

🏁 Script executed:

```shell
rg -n 'scope\(' src/openhuman/integrations --glob '*.rs' -B 3 -A 3
```

Length of output: 5084

🏁 Script executed:

```shell
cat -n src/openhuman/integrations/twilio.rs | sed -n '1,50p'
```

Length of output: 1708

🏁 Script executed:

```shell
rg -n 'TwilioCallTool\|all_tools_with_runtime' src/openhuman --glob '*.rs' | head -30
```

Length of output: 48
Add scope() method to the Tool trait to enable scope enforcement.
Integration tools (e.g., TwilioCallTool) declare scope constraints via concrete scope() methods—for example, TwilioCallTool::scope() returns ToolScope::CliRpcOnly to restrict it from autonomous agent execution. However, when these tools are boxed as Box<dyn Tool> in the registry (ops.rs:197–221), the scope() method is erased and inaccessible. The agent loop (tool_loop.rs) currently has no way to check or enforce these scope constraints, allowing CliRpcOnly tools to run in the agent loop in violation of their declared scope.
Add scope() as a method on the Tool trait (with a sensible default like ToolScope::All) so that scope constraints can be queried and enforced during tool execution.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/integrations/mod.rs` around lines 21 - 31, Add a new method
scope() to the Tool trait that returns ToolScope with a default implementation
returning ToolScope::All so boxed tools (Box<dyn Tool>) can expose their
declared scope; update concrete tool impls that need restrictions (e.g.,
TwilioCallTool::scope()) to override and return ToolScope::CliRpcOnly; then
modify the agent loop in tool_loop.rs to query tool.scope() before executing a
tool and skip/deny execution when the scope disallows agent execution;
references to locate edits: Tool trait, ToolScope enum, TwilioCallTool, ops.rs
(where tools are boxed), and tool_loop.rs (where to enforce).
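A minimal sketch of what the suggested trait-level default could look like. The `Tool` trait here is stripped down for illustration (the real trait has more methods), and `allowed_in_agent_loop` is a hypothetical gate added to show how the agent loop would consult the scope:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ToolScope {
    All,
    AgentOnly,
    CliRpcOnly,
}

pub trait Tool {
    fn name(&self) -> &str;

    /// Default keeps every existing tool unrestricted; only tools that
    /// need a restriction override this.
    fn scope(&self) -> ToolScope {
        ToolScope::All
    }
}

pub struct TwilioCallTool;

impl Tool for TwilioCallTool {
    fn name(&self) -> &str {
        "twilio_call"
    }

    // Override: callable only via explicit CLI/RPC invocation.
    fn scope(&self) -> ToolScope {
        ToolScope::CliRpcOnly
    }
}

pub struct EchoTool;

impl Tool for EchoTool {
    fn name(&self) -> &str {
        "echo"
    }
    // No scope() override: inherits ToolScope::All from the trait.
}

/// What the agent loop would check before executing a boxed tool.
pub fn allowed_in_agent_loop(tool: &dyn Tool) -> bool {
    tool.scope() != ToolScope::CliRpcOnly
}
```

Because the method lives on the trait with a default body, it survives boxing as `Box<dyn Tool>` and no existing implementor has to change.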
```rust
if root_config.integrations.twilio.enabled {
    tools.push(Box::new(
        crate::openhuman::integrations::TwilioCallTool::new(Arc::clone(&client)),
    ));
    tracing::debug!("[integrations] registered twilio tools");
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Description: Check if ToolScope is used for filtering anywhere in the codebase
# Expected: Find code that filters tools by scope, or confirm it's unimplemented
echo "=== Searching for ToolScope usage in filtering logic ==="
rg -n -C5 'ToolScope' --type=rust -g '!*test*'
echo ""
echo "=== Searching for .scope() method calls on tools ==="
rg -n -C3 '\.scope\(\)' --type=rust -g '!*test*'
echo ""
echo "=== Checking Tool trait definition ==="
ast-grep --pattern 'pub trait Tool {
  $$$
}'
```

Repository: tinyhumansai/openhuman
Length of output: 15582

🏁 Script executed:

```shell
# Find Tool trait definition and scope usage in execution paths
rg -A 20 'pub trait Tool' --type=rust -g '!*test*'
```

Length of output: 4849

🏁 Script executed:

```shell
# Search for agent execution and tool usage patterns
rg -n 'tool.*scope|scope.*filter|CliRpcOnly' --type=rust -g '!*test*' -A 3 -B 3
```

Length of output: 9403

🏁 Script executed:

```shell
# Find where tools are collected/used in agent or skill execution
rg -n 'tools.*push|execute.*tool|call_tool' --type=rust -g '!*test*' -A 2 -B 2
```

Length of output: 22228

🏁 Script executed:

```shell
# Check agent initialization and how tools are used
rg -n 'fn new|pub fn new' src/openhuman/agent/agent.rs -A 15 | head -100
```

Length of output: 587

🏁 Script executed:

```shell
# Search for where tools are passed to agent or used in execution
rg -n 'execute_tool_call|\.name\(\)' src/openhuman/agent/agent.rs -B 3 -A 10 | head -100
```

Length of output: 1658

🏁 Script executed:

```shell
# Look for any scope-based filtering in agent code or tool lookup
rg -n 'tools\[|tools\.iter|for.*tools|find_tool' src/openhuman/agent/agent.rs -A 5
```

Length of output: 639
Scope-based filtering for TwilioCallTool is not enforced — the tool will be available to the agent loop.
The scope() method is not part of the Tool trait (see traits.rs), and no filtering occurs during tool registration or execution. At agent.rs:573, the agent looks up tools by name only:
```rust
if let Some(tool) = self.tools.iter().find(|t| t.name() == call.name) {
```

There is no scope check. Although TwilioCallTool::scope() returns ToolScope::CliRpcOnly, this restriction has no effect in the current codebase. The tool will be accessible to the autonomous agent loop, contradicting the PR objective.
🧹 Nitpick comments (5)
src/openhuman/integrations/twilio.rs (1)
39-41: Remove the redundant inherent `scope()` method. `scope()` is defined both as an inherent method (lines 39-41) and in the `Tool` trait impl (lines 86-88). When the tool is boxed as `Box<dyn Tool>`, only the trait method is called. The inherent method is dead code that could diverge from the trait impl.

♻️ Proposed fix

```diff
 impl TwilioCallTool {
     pub fn new(client: Arc<IntegrationClient>) -> Self {
         Self { client }
     }
-
-    pub fn scope(&self) -> ToolScope {
-        ToolScope::CliRpcOnly
-    }
 }
```

Also applies to: 86-88
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/twilio.rs` around lines 39 - 41, Remove the redundant inherent method scope() on the Twilio tool: delete the pub fn scope(&self) -> ToolScope { ToolScope::CliRpcOnly } implementation so there is only the Tool trait's scope method used (the one implemented in the Tool impl for the Twilio type); this avoids dead/divergent code when the tool is used as Box<dyn Tool> and ensures the trait implementation (Tool::scope) is the single source of truth.

src/openhuman/agent/loop_/tool_loop.rs (1)
300-322: Scope enforcement is correct; consider consolidating the tool lookup. The `CliRpcOnly` scope check correctly prevents restricted tools from running in the autonomous agent loop. However, `find_tool()` is called twice for the same tool name in quick succession: once at line 301 for the scope check and again at line 322 for execution. This is a minor inefficiency.

♻️ Suggested consolidation

```diff
-        // Scope check: CliRpcOnly tools cannot run in the autonomous agent loop.
-        if let Some(tool) = find_tool(tools_registry, &call.name) {
+        let result = if let Some(tool) = find_tool(tools_registry, &call.name) {
+            // Scope check: CliRpcOnly tools cannot run in the autonomous agent loop.
             if tool.scope() == ToolScope::CliRpcOnly {
                 tracing::warn!(
                     iteration,
                     tool = call.name.as_str(),
                     "[agent_loop] tool scope is CliRpcOnly — denied in agent loop"
                 );
-                let denied = format!(
+                format!(
                     "Tool '{}' is only available via explicit CLI/RPC invocation, not in the autonomous agent loop.",
                     call.name
-                );
-                individual_results.push(denied.clone());
-                let _ = writeln!(
-                    tool_results,
-                    "<tool_result name=\"{}\">\n{denied}\n</tool_result>",
-                    call.name
-                );
-                continue;
+                )
+            } else {
+                // Execute the tool with timeout...
+                let tool_deadline = ...;
+                // ... existing execution logic using `tool` directly
             }
-        }
-
-        let result = if let Some(tool) = find_tool(tools_registry, &call.name) {
-            // ... existing execution logic
+        } else {
+            format!("Unknown tool: {}", call.name)
+        };
```
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/loop_/tool_loop.rs` around lines 300 - 322, The code calls find_tool(tools_registry, &call.name) twice; store its return once (e.g., let tool_opt = find_tool(tools_registry, &call.name)) before the scope check and reuse tool_opt for both the ToolScope::CliRpcOnly check and the subsequent execution branch that assigns to result, updating references to tool to match the Option binding and preserving the existing behavior that pushes denied into individual_results and writes to tool_results when scope is CliRpcOnly.

src/openhuman/integrations/mod.rs (2)
51-60: Consider making `data` optional to handle edge cases. If the backend returns `{ "success": true }` without a `data` field, deserialization will fail. While unlikely with a well-behaved backend, making `data: Option<T>` would be more defensive.

♻️ Defensive approach

```diff
 #[derive(Debug, Deserialize)]
 pub struct BackendResponse<T> {
     #[allow(dead_code)]
     pub success: bool,
-    pub data: T,
+    pub data: Option<T>,
     #[serde(default)]
     #[allow(dead_code)]
     pub error: Option<String>,
 }
```

Then update the helpers:

```diff
-    Ok(envelope.data)
+    envelope.data.ok_or_else(|| anyhow::anyhow!("Backend response missing data"))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/mod.rs` around lines 51 - 60, Change BackendResponse<T>::data from T to Option<T> to defensively handle responses missing a data field: update the struct BackendResponse<T> definition (keep #[derive(Debug, Deserialize)] and serde attributes) so pub data: Option<T>, and then audit and update all helper functions and callers that consume BackendResponse (e.g., any functions that assume BackendResponse::data is present) to handle the Option by mapping, .and_then(), or returning a clear error when data is None; update unit tests to cover the `{ "success": true }` case as well.
62-182: Consider moving `IntegrationClient` to a sibling file.

The `IntegrationClient` implementation (~120 lines) is substantial operational code. Per coding guidelines, domain `mod.rs` files should remain "light and export-focused" with operational code placed in sibling files like `client.rs` or `ops.rs`.

Suggested structure:

```
src/openhuman/integrations/
├── mod.rs            # module declarations, re-exports, types (pricing, BackendResponse)
├── client.rs         # IntegrationClient, build_client()
├── google_places.rs
├── parallel.rs
└── twilio.rs
```

As per coding guidelines: "Keep domain `mod.rs` files light and export-focused; place operational code in sibling files."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/mod.rs` around lines 62-182: the `IntegrationClient` implementation (the struct and its `new`, `post`, `get`, and `pricing` methods) is large and should move out of `mod.rs` into a sibling file. Create `src/openhuman/integrations/client.rs`, move the `IntegrationClient` definition and all its methods there, keep shared types (`IntegrationPricing`, `BackendResponse`) and module re-exports in `mod.rs`, add `pub mod client;`, and adjust import paths (or re-export `IntegrationClient` from `mod.rs`) so compilation and visibility remain unchanged.

src/openhuman/integrations/parallel.rs (1)
100-102: Remove the redundant inherent `scope()` methods.

Same issue as in `twilio.rs`: both `ParallelSearchTool` and `ParallelExtractTool` define `scope()` as inherent methods, but their `Tool` impls don't override `scope()`, so these tools use the default `ToolScope::All` from the trait. The inherent methods are unused.

♻️ Proposed fix for ParallelSearchTool

```diff
 impl ParallelSearchTool {
     pub fn new(client: Arc<IntegrationClient>) -> Self {
         Self { client }
     }
-
-    pub fn scope(&self) -> ToolScope {
-        ToolScope::All
-    }
 }
```

♻️ Proposed fix for ParallelExtractTool

```diff
 impl ParallelExtractTool {
     pub fn new(client: Arc<IntegrationClient>) -> Self {
         Self { client }
     }
-
-    pub fn scope(&self) -> ToolScope {
-        ToolScope::All
-    }
 }
```

Also applies to: 279-281
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/parallel.rs` around lines 100 - 102, The inherent scope() methods on ParallelSearchTool and ParallelExtractTool are redundant because the Tool trait already provides the default ToolScope::All; remove the unused inherent methods named scope() from both structs (the functions named scope in ParallelSearchTool and ParallelExtractTool) so the types rely on the trait default. Ensure no other code references these inherent methods directly; if any call sites exist, switch them to use the Tool trait method (or remove the calls) so compilation and behavior remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/openhuman/integrations/twilio.rs`:
- Around line 39-41: Remove the redundant inherent method scope() on the Twilio
tool: delete the pub fn scope(&self) -> ToolScope { ToolScope::CliRpcOnly }
implementation so there is only the Tool trait's scope method used (the one
implemented in the Tool impl for the Twilio type); this avoids dead/divergent
code when the tool is used as Box<dyn Tool> and ensures the trait implementation
(Tool::scope) is the single source of truth.
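The scope pattern these comments converge on can be sketched in isolation: a trait-level default of `All`, a single override for a CLI/RPC-only tool, and one find-then-branch dispatch. All names below are illustrative stand-ins for the PR's types, not the crate's actual code.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ToolScope {
    All,
    CliRpcOnly,
}

pub trait Tool {
    fn name(&self) -> &'static str;
    // Trait default; tools with the common scope need no inherent duplicate.
    fn scope(&self) -> ToolScope {
        ToolScope::All
    }
}

pub struct ParallelSearchTool;
impl Tool for ParallelSearchTool {
    fn name(&self) -> &'static str {
        "parallel_search"
    }
}

pub struct TwilioCallTool;
impl Tool for TwilioCallTool {
    fn name(&self) -> &'static str {
        "twilio_call"
    }
    // The one place the narrower scope lives: the trait impl, the single
    // source of truth even behind Box<dyn Tool>.
    fn scope(&self) -> ToolScope {
        ToolScope::CliRpcOnly
    }
}

// Single lookup, then branch on scope, mirroring the suggested tool_loop shape.
pub fn dispatch(tools: &[Box<dyn Tool>], name: &str) -> String {
    match tools.iter().find(|t| t.name() == name) {
        Some(t) if t.scope() == ToolScope::CliRpcOnly => {
            format!("Tool '{name}' is only available via explicit CLI/RPC invocation")
        }
        Some(t) => format!("executed {}", t.name()),
        None => format!("Unknown tool: {name}"),
    }
}
```

Because `scope()` is resolved through the trait, an inherent method of the same name on the concrete type is dead code once the value is boxed, which is exactly why the review asks for its removal.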
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cb430a64-cc86-4696-8988-8eb8ba161d17
📒 Files selected for processing (8)
- src/openhuman/agent/loop_/tool_loop.rs
- src/openhuman/integrations/google_places.rs
- src/openhuman/integrations/mod.rs
- src/openhuman/integrations/parallel.rs
- src/openhuman/integrations/twilio.rs
- src/openhuman/tools/mod.rs
- src/openhuman/tools/ops.rs
- src/openhuman/tools/traits.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/openhuman/integrations/google_places.rs
- Updated `tool_loop.rs` to cache the result of `find_tool` for efficiency and clarity in logging.
- Removed unused `scope` methods from `GooglePlacesSearchTool`, `ParallelSearchTool`, and `TwilioCallTool`, simplifying the codebase.
- Modified the `BackendResponse` structure in `mod.rs` to use `Option<T>` for `data`, enhancing error handling when no data is returned from the backend.
- Updated related tests to reflect changes in the response structure and ensure proper handling of cases with no data.

These changes enhance code maintainability and improve the robustness of the integration tools.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/integrations/mod.rs`:
- Around line 15-207: Move runtime and business logic out of mod.rs by splitting
types and client logic into sibling files: create types.rs to contain the
pricing structs and BackendResponse (IntegrationPricing, PricingIntegrations,
IntegrationPricingEntry, BackendResponse) and export them as pub; create
client.rs to contain the IntegrationClient struct and its impl (new, post, get,
pricing) plus build_client; update mod.rs to only pub mod types; pub mod client;
and re-export pub use
crate::openhuman::integrations::types::{IntegrationPricing, PricingIntegrations,
IntegrationPricingEntry, BackendResponse}; pub use
crate::openhuman::integrations::client::IntegrationClient; pub use
crate::openhuman::integrations::client::build_client so the public API remains
the same (ensure visibility of fields/methods used elsewhere and adjust imports
like reqwest/tokio/anyhow within the new files).
- Around line 196-199: The match arm that currently checks
(config.backend_url.as_deref(), config.auth_token.as_deref()) and only uses
is_empty() should trim whitespace before validation and before constructing the
client: when matching, map the Option<&str> through .map(|s| s.trim()) (or call
.trim() on the deref), ensure the pattern checks that the trimmed url and token
are non-empty, and pass the trimmed owned strings into IntegrationClient::new
(instead of the untrimmed values) so whitespace-only values like " " are
rejected and not used to build the client.
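A minimal, dependency-free sketch of the trimmed validation described above; the real `IntegrationClient` is replaced here by a plain `(url, token)` tuple so the guard logic stands alone:

```rust
// Trim before validating so whitespace-only values like "   " are rejected,
// and pass the trimmed owned strings onward rather than the raw config values.
pub fn build_client(url: Option<&str>, token: Option<&str>) -> Option<(String, String)> {
    match (url.map(str::trim), token.map(str::trim)) {
        (Some(u), Some(t)) if !u.is_empty() && !t.is_empty() => {
            Some((u.to_owned(), t.to_owned()))
        }
        _ => None,
    }
}
```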
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 488d8079-e67a-4739-a526-c9407b008b52
📒 Files selected for processing (5)
- src/openhuman/agent/loop_/tool_loop.rs
- src/openhuman/integrations/google_places.rs
- src/openhuman/integrations/mod.rs
- src/openhuman/integrations/parallel.rs
- src/openhuman/integrations/twilio.rs
✅ Files skipped from review due to trivial changes (1)
- src/openhuman/agent/loop_/tool_loop.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/openhuman/integrations/twilio.rs
- src/openhuman/integrations/parallel.rs
```rust
        "maxResults": max_results
    });

    tracing::debug!("[google_places_search] query={:?}", query);
```
Avoid logging raw user-provided place search data.
Line 152 logs `query` verbatim and Line 246 logs `place_id` verbatim. These fields can be user-sensitive in practice; log only redacted/derived metadata.
Suggested fix

```diff
- tracing::debug!("[google_places_search] query={:?}", query);
+ tracing::debug!(
+     "[google_places_search] request query_len={} max_results={}",
+     query.trim().chars().count(),
+     max_results
+ );
- tracing::debug!("[google_places_details] placeId={}", place_id);
+ tracing::debug!(
+     "[google_places_details] request place_id_len={}",
+     place_id.trim().chars().count()
+ );
```

As per coding guidelines, "Never log secrets, API keys, JWTs, credentials, or full PII in Rust debug logs; redact or omit sensitive fields".
Also applies to: 246-246
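An illustrative helper (an assumption, not code from the PR) showing the redaction idea: derive only lengths and counts for the log line, never the raw value.

```rust
// Build the structured log fields from a query without ever including
// the query text itself.
pub fn query_log_fields(query: &str, max_results: u32) -> String {
    format!(
        "query_len={} max_results={}",
        query.trim().chars().count(),
        max_results
    )
}
```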
```rust
    match (config.backend_url.as_deref(), config.auth_token.as_deref()) {
        (Some(url), Some(token)) if !url.is_empty() && !token.is_empty() => Some(Arc::new(
            IntegrationClient::new(url.to_owned(), token.to_owned()),
        )),
```
Harden config validation against whitespace-only credentials/URL.
Line 197 accepts `" "` values because it checks only `is_empty()`. Trim before validation to avoid constructing a client with unusable config.
Suggested fix

```diff
-        (Some(url), Some(token)) if !url.is_empty() && !token.is_empty() => Some(Arc::new(
-            IntegrationClient::new(url.to_owned(), token.to_owned()),
+        (Some(url), Some(token))
+            if !url.trim().is_empty() && !token.trim().is_empty() => Some(Arc::new(
+            IntegrationClient::new(url.trim().to_owned(), token.trim().to_owned()),
         )),
```
- Introduced `IntegrationClient` to manage backend interactions, including methods for POST and GET requests with error handling.
- Implemented caching for pricing information fetched from the backend.
- Created a new `types` module to define shared types such as `BackendResponse` and `IntegrationPricing`, enhancing code organization and clarity.
- Updated `mod.rs` to include the new client and types, ensuring proper module structure.

These changes improve the maintainability and functionality of the integration tools by providing a robust client for backend communication.
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/openhuman/integrations/parallel.rs (1)
217-219:⚠️ Potential issue | 🟠 MajorRemove raw
objectivefrom logs (PII risk).Line 218 logs free-form user input directly; this can capture full PII in debug logs. Log only derived, non-sensitive metadata (count/length/mode) instead.
🔒 Minimal safe fix
- tracing::info!("[parallel_search] queries={}", queries.len()); - tracing::debug!("[parallel_search] objective={:?}", objective); + tracing::debug!("[parallel_search] queries={} mode={}", queries.len(), mode); + tracing::trace!( + "[parallel_search] objective_len_chars={}", + objective.chars().count() + );As per coding guidelines, "Never log secrets, API keys, JWTs, credentials, or full PII in Rust debug logs; redact or omit sensitive fields".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/parallel.rs` around lines 217 - 219, The debug log in parallel_search prints the raw free-form variable objective which may contain PII; remove or redact that value and instead log only non-sensitive derived metadata (e.g., queries.len(), objective length, mode, or a stable truncated/hash fingerprint) from the objective variable; update the tracing::debug! call in the parallel_search function to stop emitting the full objective and emit only these safe fields.
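One way to keep log correlation without content, as the prompt suggests, is a stable hash fingerprint. This helper is an assumption for illustration, not code from the PR; `DefaultHasher` is deterministic within a build, which is enough to correlate log lines in one run.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Fingerprint an objective so it can be matched across log lines
// without the raw (possibly PII-bearing) text ever being emitted.
pub fn fingerprint(text: &str) -> u64 {
    let mut h = DefaultHasher::new();
    text.hash(&mut h);
    h.finish()
}
```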
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/integrations/parallel.rs`:
- Around line 328-343: The runtime currently accepts any-length `urls` even
though `parameters_schema` caps it at 20; add a guard after the empty check to
enforce a max length of 20 (e.g., if urls.len() > 20 { return
Ok(ToolResult::error("urls must contain at most 20 URLs")); }), update the
allocation for `url_strings` accordingly and keep the existing per-item checks
(the loop using `for (i, v) in urls.iter().enumerate()` and the
`ToolResult::error` returns) so callers exceeding the schema limit get a clear
runtime error.
- Around line 169-193: Validate search_queries length and mode before
proceeding: check that search_queries.len() <= 10 and return
ToolResult::error("search_queries cannot contain more than 10 queries") if
exceeded; when iterating (the block populating queries Vec<&str>) keep the
existing per-item checks but perform the overall length check first; validate
mode by reading args.get("mode").and_then(|v| v.as_str()).unwrap_or("fast") and
ensure it equals one of "fast", "one-shot", or "agentic", returning
ToolResult::error(format!("invalid mode: {}", mode)) if not; make these
validations (on search_queries and mode) prior to any backend calls so invalid
inputs are rejected early.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ec2e04c0-5900-4ce5-acca-51755221d60a
📒 Files selected for processing (6)
- src/openhuman/integrations/client.rs
- src/openhuman/integrations/google_places.rs
- src/openhuman/integrations/mod.rs
- src/openhuman/integrations/parallel.rs
- src/openhuman/integrations/twilio.rs
- src/openhuman/integrations/types.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/openhuman/integrations/twilio.rs
- src/openhuman/integrations/google_places.rs
```rust
    if search_queries.is_empty() {
        return Ok(ToolResult::error(
            "search_queries must contain at least one query",
        ));
    }

    let mut queries: Vec<&str> = Vec::with_capacity(search_queries.len());
    for (i, v) in search_queries.iter().enumerate() {
        match v.as_str() {
            Some(s) if !s.trim().is_empty() => queries.push(s),
            Some(_) => {
                return Ok(ToolResult::error(format!(
                    "search_queries[{i}] is an empty string"
                )));
            }
            None => {
                return Ok(ToolResult::error(format!(
                    "search_queries[{i}] is not a string"
                )));
            }
        }
    }

    let mode = args.get("mode").and_then(|v| v.as_str()).unwrap_or("fast");
```
Enforce declared `search_queries`/`mode` constraints before calling the backend.

The schema says `search_queries` max is 10 and `mode` must be one of `fast|one-shot|agentic`, but the runtime currently accepts oversized arrays and arbitrary mode strings.
✅ Suggested validation patch

```diff
     if search_queries.is_empty() {
         return Ok(ToolResult::error(
             "search_queries must contain at least one query",
         ));
     }
+    if search_queries.len() > 10 {
+        return Ok(ToolResult::error(
+            "search_queries must contain at most 10 queries",
+        ));
+    }
 @@
-    let mode = args.get("mode").and_then(|v| v.as_str()).unwrap_or("fast");
+    let mode = args.get("mode").and_then(|v| v.as_str()).unwrap_or("fast");
+    if !matches!(mode, "fast" | "one-shot" | "agentic") {
+        return Ok(ToolResult::error(
+            "mode must be one of: fast, one-shot, agentic",
+        ));
+    }
```
```rust
    if urls.is_empty() {
        return Ok(ToolResult::error("urls must contain at least one URL"));
    }

    let mut url_strings: Vec<&str> = Vec::with_capacity(urls.len());
    for (i, v) in urls.iter().enumerate() {
        match v.as_str() {
            Some(s) if !s.trim().is_empty() => url_strings.push(s),
            Some(_) => {
                return Ok(ToolResult::error(format!("urls[{i}] is an empty string")));
            }
            None => {
                return Ok(ToolResult::error(format!("urls[{i}] is not a string")));
            }
        }
    }
```
Add a runtime max bound for `urls` to match the schema contract.

`parameters_schema` caps `urls` at 20, but the execution path currently accepts larger arrays.
✅ Suggested validation patch

```diff
     if urls.is_empty() {
         return Ok(ToolResult::error("urls must contain at least one URL"));
     }
+    if urls.len() > 20 {
+        return Ok(ToolResult::error("urls must contain at most 20 URLs"));
+    }
```
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
    if urls.is_empty() {
        return Ok(ToolResult::error("urls must contain at least one URL"));
    }
    if urls.len() > 20 {
        return Ok(ToolResult::error("urls must contain at most 20 URLs"));
    }

    let mut url_strings: Vec<&str> = Vec::with_capacity(urls.len());
    for (i, v) in urls.iter().enumerate() {
        match v.as_str() {
            Some(s) if !s.trim().is_empty() => url_strings.push(s),
            Some(_) => {
                return Ok(ToolResult::error(format!("urls[{i}] is an empty string")));
            }
            None => {
                return Ok(ToolResult::error(format!("urls[{i}] is not a string")));
            }
        }
    }
```
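The same min/max contract could also be factored into one generic helper shared by both tools; `check_len` and its messages are assumptions for illustration, not the PR's code.

```rust
// Enforce a JSON-schema style minItems/maxItems contract at runtime.
pub fn check_len(len: usize, min: usize, max: usize, field: &str) -> Result<(), String> {
    if len < min {
        return Err(format!("{field} must contain at least {min} item(s)"));
    }
    if len > max {
        return Err(format!("{field} must contain at most {max} item(s)"));
    }
    Ok(())
}
```

A tool would call `check_len(urls.len(), 1, 20, "urls")` (or `1..=10` for `search_queries`) before iterating, keeping the schema and runtime bounds in one place.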
Summary
- New `src/openhuman/integrations/` module with 5 backend-proxied agent tools
- Pricing fetched from the backend `GET /agent-integrations/pricing` endpoint and cached via `OnceCell`
- `IntegrationsConfig` with per-integration toggles; `ToolScope` enum (`All`, `AgentOnly`, `CliRpcOnly`) for future scope-based filtering

Files changed
- `src/openhuman/integrations/` — new module (4 files, 1434 lines)
- `src/openhuman/config/schema/tools.rs` — `IntegrationsConfig`, `IntegrationToggle`
- `src/openhuman/config/schema/types.rs` — added `integrations` field to `Config`
- `src/openhuman/tools/ops.rs` — conditional tool registration
- `config/mod.rs` and `config/schema/mod.rs`

Test plan
- `cargo check` passes
- `cargo test -- integrations` — all 33 tests pass
- Enable `[integrations]` in config.toml and verify tools appear in the agent tool registry
- Run `parallel_search` through the agent, verify the backend receives correct auth + params
- Verify `twilio_call` is excluded from the autonomous agent loop (CLI/RPC only scope)

🤖 Generated with Claude Code
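The fetch-once pricing cache mentioned in the summary can be sketched with `std::sync::OnceLock` (the PR says `OnceCell`; the memoization idea is the same). The "fetch" below is a stand-in closure and the payload a placeholder string; the real client would hit `GET /agent-integrations/pricing`.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::OnceLock;

pub struct IntegrationClient {
    pricing: OnceLock<String>,
    fetches: AtomicUsize, // counts backend hits, to show the cache works
}

impl IntegrationClient {
    pub fn new() -> Self {
        Self {
            pricing: OnceLock::new(),
            fetches: AtomicUsize::new(0),
        }
    }

    // First call runs the init closure (one backend fetch); later calls
    // return the cached value without touching the backend again.
    pub fn pricing(&self) -> &str {
        self.pricing.get_or_init(|| {
            self.fetches.fetch_add(1, Ordering::SeqCst);
            "twilio: $0.10/min".to_string() // placeholder payload
        })
    }

    pub fn fetch_count(&self) -> usize {
        self.fetches.load(Ordering::SeqCst)
    }
}
```

This keeps all pricing/markup knowledge server-side, as the PR description requires: the client only memoizes whatever the backend reports.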
Summary by CodeRabbit
New Features
Behavior Changes