fix(agent): inject skill tools into agent registry and unify ToolResult (#341) (PR #360)
…lt (tinyhumansai#341)

Skill tools (Notion, Gmail, etc.) from the QuickJS runtime were invisible to the agent's tool loop — the LLM could never call them. This change:

1. Adds SkillToolBridge to wrap skill ToolDefinitions as Tool trait impls and inject them into the agent's tool registry via collect_skill_tools().
2. Unifies ToolResult across built-in and skill tools — both now use the MCP-spec type (content blocks + is_error) from skills::types, eliminating the need for result conversion between the two systems.
3. Adds convenience constructors (ToolResult::success/error/json) and accessor methods (output/text) to simplify all tool implementations.
4. Adds diagnostic logging in the tool loop for tool registry contents and tool lookup results.

Verified: the agent_chat RPC successfully called the Notion list-all-pages tool and returned real workspace data through the agent loop.

Closes tinyhumansai#341

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
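For readers unfamiliar with the MCP-style result shape this PR adopts, here is a minimal sketch of the unified type. It is an assumption based on the names in the PR description (`content` blocks plus `is_error`, constructors `success`/`error`/`json`, accessors `text`/`output`), not the actual code in `skills::types`; in particular, the real type presumably stores JSON as a `serde_json::Value`, which this std-only sketch replaces with a pre-serialized string.

```rust
// Hypothetical sketch of the unified ToolResult described in this PR.
// Assumption: real code stores JSON as serde_json::Value; here it is a
// pre-serialized String so the sketch stays std-only.
#[derive(Debug, Clone)]
enum ToolContent {
    Text { text: String },
    Json { data: String },
}

#[derive(Debug, Clone)]
struct ToolResult {
    content: Vec<ToolContent>,
    is_error: bool,
}

impl ToolResult {
    fn success(text: impl Into<String>) -> Self {
        Self { content: vec![ToolContent::Text { text: text.into() }], is_error: false }
    }
    fn error(text: impl Into<String>) -> Self {
        Self { content: vec![ToolContent::Text { text: text.into() }], is_error: true }
    }
    fn json(data: impl Into<String>) -> Self {
        Self { content: vec![ToolContent::Json { data: data.into() }], is_error: false }
    }
    /// Text blocks only, joined by newlines (JSON blocks are skipped).
    fn text(&self) -> String {
        self.content
            .iter()
            .filter_map(|c| match c {
                ToolContent::Text { text } => Some(text.as_str()),
                ToolContent::Json { .. } => None,
            })
            .collect::<Vec<_>>()
            .join("\n")
    }
    /// Every block, with JSON rendered inline.
    fn output(&self) -> String {
        self.content
            .iter()
            .map(|c| match c {
                ToolContent::Text { text } => text.clone(),
                ToolContent::Json { data } => data.clone(),
            })
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let ok = ToolResult::success("ok");
    assert!(!ok.is_error && ok.text() == "ok");
    let err = ToolResult::error("boom");
    assert!(err.is_error && err.output() == "boom");
    let js = ToolResult::json(r#"{"k":"v"}"#);
    assert_eq!(js.text(), "");
    assert_eq!(js.output(), r#"{"k":"v"}"#);
    println!("all assertions passed");
}
```

With one type on both sides of the bridge, a skill tool's result can flow straight into the agent loop with no conversion layer, which is where the PR's net line reduction comes from.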
📝 Walkthrough

This PR centralizes ToolResult into a unified type with constructors/accessors, migrates nearly all tools and tests to that API, adds a QuickJS skill bridge that exposes skill tools as namespaced Tool implementations, and integrates collected skill tools into the agent tool registry and runtime flows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as Agent Loop
    participant Bridge as Skill Bridge
    participant Runtime as RuntimeEngine
    participant SkillTool as SkillTool (namespaced)
    Agent->>Bridge: collect_skill_tools()
    activate Bridge
    Bridge->>Runtime: all_tools()
    activate Runtime
    Runtime-->>Bridge: Vec<(skill_id, ToolDefinition)>
    deactivate Runtime
    Bridge-->>Agent: Vec<Box<dyn Tool>>
    deactivate Bridge
    Agent->>Agent: extend tools_registry (with SkillTool)
    Agent->>SkillTool: execute(args)
    activate SkillTool
    SkillTool->>Runtime: call_tool(skill_id, tool_name, args)
    activate Runtime
    Runtime-->>SkillTool: ToolResult
    deactivate Runtime
    SkillTool-->>Agent: ToolResult (uses is_error/output)
    deactivate SkillTool
```
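The wrapping step in the diagram above can be sketched as follows. The `Tool` trait signature and `SkillTool` fields are assumptions inferred from the names in this PR, not the repository's actual trait, and the runtime dispatch is stubbed out.

```rust
// Hypothetical sketch of wrapping a skill tool as a Tool trait impl,
// as SkillToolBridge does. Trait shape and field names are assumptions;
// the real execute() forwards to RuntimeEngine::call_tool(...).
trait Tool {
    fn name(&self) -> &str;
    fn execute(&self, args: &str) -> Result<String, String>;
}

struct SkillTool {
    skill_id: String,
    tool_name: String,
    namespaced_name: String, // "{skill_id}__{tool_name}"
}

impl Tool for SkillTool {
    fn name(&self) -> &str {
        &self.namespaced_name
    }
    fn execute(&self, args: &str) -> Result<String, String> {
        // Stub for the QuickJS runtime dispatch.
        Ok(format!("call_tool({}, {}, {})", self.skill_id, self.tool_name, args))
    }
}

fn main() {
    let tool: Box<dyn Tool> = Box::new(SkillTool {
        skill_id: "notion".into(),
        tool_name: "list-all-pages".into(),
        namespaced_name: "notion__list-all-pages".into(),
    });
    assert_eq!(tool.name(), "notion__list-all-pages");
    assert!(tool.execute("{}").is_ok());
    println!("ok");
}
```

Because the wrapper presents the namespaced name through the same `Tool` interface as built-in tools, the agent loop's registry and `find_tool` lookup need no special casing for skill tools.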
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 6
🧹 Nitpick comments (7)
src/openhuman/skills/types.rs (1)
194-246: Add direct unit coverage for the new `ToolResult` helper surface.

These helpers are now central to both built-in and skill tool flows; adding focused tests here will harden regression safety for `is_error`, `text()`, and `output()` behavior.

💡 Suggested test additions
```diff
 #[cfg(test)]
 mod tests {
     use super::*;
@@ fn unknown_state_falls_through_to_connecting() {
         let st = state_with(&[("connection_status", "unknown_value")]);
         assert_eq!(
             derive_connection_status(SkillStatus::Running, true, &st),
             "connecting"
         );
     }
+
+    #[test]
+    fn tool_result_helpers_set_error_flag_and_text() {
+        let ok = ToolResult::success("ok");
+        assert!(!ok.is_error);
+        assert_eq!(ok.text(), "ok");
+
+        let err = ToolResult::error("boom");
+        assert!(err.is_error);
+        assert_eq!(err.text(), "boom");
+    }
+
+    #[test]
+    fn tool_result_text_ignores_json_content() {
+        let result = ToolResult {
+            content: vec![
+                ToolContent::Text { text: "line1".into() },
+                ToolContent::Json {
+                    data: serde_json::json!({"k":"v"}),
+                },
+                ToolContent::Text { text: "line2".into() },
+            ],
+            is_error: false,
+        };
+        assert_eq!(result.text(), "line1\nline2");
+    }
+
+    #[test]
+    fn tool_result_output_includes_json_content() {
+        let result = ToolResult::json(serde_json::json!({"k":"v"}));
+        let out = result.output();
+        assert!(out.contains("\"k\""));
+        assert!(out.contains("\"v\""));
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/skills/types.rs` around lines 194 - 246, Add unit tests covering ToolResult's helper constructors and accessors: create tests that assert ToolResult::success(...) produces is_error == false and that text() and output() return the provided text; that ToolResult::error(...) sets is_error == true and text()/output() contain the error message; and that ToolResult::json(...) produces is_error == false and output() returns pretty-printed JSON while text() returns an empty string. Use the ToolResult::success, ToolResult::error, ToolResult::json, .is_error, .text(), and .output() symbols to locate code and include mixed-content cases (vec![ToolContent::Text{...}, ToolContent::Json{...}]) to validate join ordering and newline behavior.

src/openhuman/agent/harness/self_healing.rs (1)
49-54: Redundant variable assignment on line 54.

The variable `combined` is assigned directly from `output_text` without modification. This appears to be leftover from previous code that may have combined multiple fields.

♻️ Suggested simplification
```diff
- let output_text = result.output().to_lowercase();
- let combined = output_text;
+ let combined = result.output().to_lowercase();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/harness/self_healing.rs` around lines 49 - 54, The variable `combined` in self_healing.rs is redundant because it's assigned directly from `output_text` in the `if !self.enabled || !result.is_error` branch; remove the `combined` assignment and update any subsequent uses to refer to `output_text` (or, if the original intent was to merge multiple fields, implement that merging in place). Edit the block around the `result.output().to_lowercase()` call and eliminate `combined` (or replace its creation with the actual combination logic) to simplify the code and avoid the unused/duplicate variable.

src/openhuman/agent/loop_/session.rs (1)
497-505: Consider extracting duplicated skill bridge logic.

This block is identical to lines 82-90 in `run()`, differing only in the log message ("channel" vs "session"). Consider extracting to a helper function to reduce duplication.

♻️ Suggested helper extraction
```rust
// In a shared location (e.g., top of file or a helpers module)
fn inject_skill_tools(tools_registry: &mut Vec<Box<dyn Tool>>, context: &str) {
    let skill_tools = tools::skill_bridge::collect_skill_tools();
    if !skill_tools.is_empty() {
        tracing::info!(
            count = skill_tools.len(),
            "[skill-bridge] Skill tools added to {context}"
        );
        tools_registry.extend(skill_tools);
    }
}

// Usage in run():
inject_skill_tools(&mut tools_registry, "session");

// Usage in process_message():
inject_skill_tools(&mut tools_registry, "channel");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/loop_/session.rs` around lines 497 - 505, The duplicated block that collects and injects skill tools using tools::skill_bridge::collect_skill_tools() and extends tools_registry appears in both run() and the session processing block; extract this into a helper like inject_skill_tools(tools_registry: &mut Vec<Box<dyn Tool>>, context: &str) that calls collect_skill_tools(), logs using the provided context, and extends tools_registry; then replace the duplicated code in run() and the other function (e.g., process_message() / session handling) with calls to inject_skill_tools(&mut tools_registry, "session") or inject_skill_tools(&mut tools_registry, "channel") as appropriate.

src/openhuman/agent/loop_/tool_loop.rs (1)
291-291: Consider reusing the lookup result to avoid a redundant `find_tool` call.

`find_tool` is called at line 291 solely for the `tool_found` diagnostic flag, then called again at line 299 for execution. While harmless, this could be consolidated.

♻️ Proposed optimization
```diff
- let tool_found = find_tool(tools_registry, &call.name).is_some();
+ let tool_opt = find_tool(tools_registry, &call.name);
+ let tool_found = tool_opt.is_some();
  tracing::debug!(
      iteration,
      tool = call.name.as_str(),
      found = tool_found,
      "[agent_loop] executing tool"
  );
- let result = if let Some(tool) = find_tool(tools_registry, &call.name) {
+ let result = if let Some(tool) = tool_opt {
```

Also applies to: 295-295, 299-299
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/loop_/tool_loop.rs` at line 291, The code redundantly calls find_tool(tools_registry, &call.name) twice; change it to call find_tool once, store the Option result in a local variable (e.g., let tool_opt = find_tool(tools_registry, &call.name)), use tool_opt.is_some() for the tool_found diagnostic, and then reuse tool_opt (unwrapping or pattern-matching) when invoking the tool execution path so you don't perform the lookup twice; update any adjacent references around tool_found and the execution branch to use this stored variable.

src/openhuman/tools/file_read.rs (1)
168-170: Remove duplicate assertion.

Line 170 duplicates the assertion at line 168. The second `assert!(!result.is_error)` is redundant.

♻️ Proposed fix
```diff
  assert!(!result.is_error);
  assert_eq!(result.output(), "hello world");
- assert!(!result.is_error);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/file_read.rs` around lines 168 - 170, The test contains a duplicate assertion on the same boolean (two occurrences of assert!(!result.is_error)); remove the redundant assertion so only a single assert!(!result.is_error) remains alongside assert_eq!(result.output(), "hello world"), locating the change where `result` is asserted in file_read.rs (the block with `assert_eq!(result.output(), "hello world")`).

src/openhuman/tools/http_request.rs (1)
256-260: HTTP error responses lose detailed output.

When the HTTP status indicates failure, only the status code is returned ("HTTP {status_code}"), but the full `output` containing headers and body is discarded. This could make debugging API failures harder since error responses often contain useful information (e.g., validation errors, rate limit details). Consider including the response details in error results:

♻️ Proposed fix to preserve response details
```diff
  if status.is_success() {
      Ok(ToolResult::success(output))
  } else {
-     Ok(ToolResult::error(format!("HTTP {}", status_code)))
+     Ok(ToolResult::error(output))
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/http_request.rs` around lines 256 - 260, The error branch currently discards response details and only returns Ok(ToolResult::error(format!("HTTP {}", status_code)))—update the non-success path in the function handling the HTTP response to include the full response output (headers/body) alongside the status code so callers can inspect diagnostic details; specifically change the else branch that builds ToolResult::error to incorporate the existing output variable (e.g., include output and status_code in the error message or attach it to the ToolResult) so that status, status_code, and output are preserved when status.is_success() is false.

src/openhuman/tools/git_operations.rs (1)
197-200: Remove duplicate assertion.

Line 199 duplicates the assertion on line 197. This appears to be a copy-paste artifact.
🧹 Proposed fix
```diff
      assert!(!result.is_error);
      assert!(result.output().trim().contains("hello"));
-     assert!(!result.is_error);
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/git_operations.rs` around lines 197 - 200, Remove the duplicated assertion near the end of the hunk-building block: delete the second (copy-pasted) assert that duplicates the earlier one around the lines where current_hunk is pushed into hunks (the duplicate refers to the same condition already asserted before hunks.push and uses current_hunk); leave the original assertion in place, rebuild to ensure the code compiles and tests pass.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/tools/cron_run.rs`:
- Around line 77-87: The failure branch currently returns a generic
ToolResult::error and discards result_output (which contains job_id, status,
duration_ms, and output); change the error path in the cron_run code so that
when success is false it returns the serialized result_output (e.g., wrap
result_output in ToolResult::error or otherwise include it in the ToolResult)
instead of throwing away the details — update the branch that returns
ToolResult::error("cron job execution failed".to_string()) to return the
detailed result_output tied to the job (referencing result_output, success,
job.id, and ToolResult::error/ToolResult::success).
In `@src/openhuman/tools/delegate.rs`:
- Around line 256-260: The format string in the ToolResult::success call builds
the delegate header with mismatched brackets ("({}/{}])"); update the format
string used in that call (referencing ToolResult::success, agent_name,
providers::INFERENCE_BACKEND_ID, agent_config.model, and rendered) so the
brackets match — e.g. replace "({}/{}])" with "({}/{})]" so the header becomes
"[Agent '{agent_name}' ({}/{})]\n{rendered}".
In `@src/openhuman/tools/run_linter.rs`:
- Around line 114-120: The linter failure branch currently only returns the exit
code; change the else branch that calls Ok(ToolResult::error(...)) to include
the linter output stored in combined (and optionally the exit code) so
actionable diagnostics are returned; locate the block using
output.status.success(), output.status.code(), combined and ToolResult::error
and update the formatted error message to include combined (e.g. "Linter exited
with code {:?}: {}") instead of dropping the combined contents.
In `@src/openhuman/tools/run_tests.rs`:
- Around line 153-160: The error branch currently discards the test output and
only returns the exit code; update the else path in the function that constructs
the ToolResult (where output, truncated, and output.status.success() are used)
to include the truncated test output in the error message (e.g., combine "Tests
exited with code ..." with truncated) so ToolResult::error contains the actual
test failure details rather than just the exit code.
In `@src/openhuman/tools/skill_bridge.rs`:
- Around line 38-39: The current construction of namespaced_name via
format!("{}__{}", skill_id, tool_def.name) is collision-prone when either part
contains "__" and the collector never rejects duplicates; change the naming and
insertion logic: compute a collision-safe key (e.g., escape/encode skill_id and
tool_def.name or use a tuple/hash of (skill_id, tool_def.name) instead of a raw
string) when creating namespaced_name, and before pushing into the Vec<Box<dyn
Tool>> registry check for an existing entry using that stable key; on duplicate
either return an error or skip insertion with a clear log message so
register/collector (where namespaced_name is set and Self is constructed) cannot
shadow existing tools.
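One way to implement the collision-safe naming this prompt asks for is sketched below, as a standalone illustration under stated assumptions: the function names, log strings, and rejection policy (skip rather than error) are illustrative, not the actual fix that landed.

```rust
// Illustrative sketch of "__"-delimiter validation plus duplicate
// rejection for namespaced skill tool names. Not the actual fix.
use std::collections::HashSet;

fn namespaced_name(skill_id: &str, tool_name: &str) -> Option<String> {
    // A raw "{skill_id}__{tool_name}" key is ambiguous if either part
    // already contains the delimiter, so such names are rejected.
    if skill_id.contains("__") || tool_name.contains("__") {
        return None;
    }
    Some(format!("{skill_id}__{tool_name}"))
}

fn collect_names(pairs: &[(&str, &str)]) -> Vec<String> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for (skill, tool) in pairs {
        match namespaced_name(skill, tool) {
            // insert() returns false for an already-seen name.
            Some(name) if seen.insert(name.clone()) => out.push(name),
            Some(name) => eprintln!("[skill-bridge] duplicate skipped: {name}"),
            None => eprintln!("[skill-bridge] invalid name skipped: {skill}.{tool}"),
        }
    }
    out
}

fn main() {
    let names = collect_names(&[
        ("notion", "search"),
        ("notion", "search"), // duplicate: skipped
        ("gmail__x", "send"), // contains delimiter: skipped
    ]);
    assert_eq!(names, vec!["notion__search".to_string()]);
    println!("ok");
}
```

Skipping with a log (rather than failing the whole collection) keeps one misbehaving skill from blocking every other tool from reaching the registry.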
- Around line 106-114: The handler currently returns raw error details via
ToolResult::error(err) which can leak internal/ sensitive info; instead keep the
full err in log::error!(...) and return a sanitized, user-safe message to the
model/tool output. Modify the Err branch around self.skill_id and self.tool_name
so you still log the detailed err (using log::error! as-is) but call
ToolResult::error with a generic message (e.g., "Execution failed for
{skill}.{tool}" or "Internal error executing tool") rather than passing the
original err value.
---
Nitpick comments:
In `@src/openhuman/agent/harness/self_healing.rs`:
- Around line 49-54: The variable `combined` in self_healing.rs is redundant
because it's assigned directly from `output_text` in the `if !self.enabled ||
!result.is_error` branch; remove the `combined` assignment and update any
subsequent uses to refer to `output_text` (or, if the original intent was to
merge multiple fields, implement that merging in place). Edit the block around
the `result.output().to_lowercase()` call and eliminate `combined` (or replace
its creation with the actual combination logic) to simplify the code and avoid
the unused/duplicate variable.
In `@src/openhuman/agent/loop_/session.rs`:
- Around line 497-505: The duplicated block that collects and injects skill
tools using tools::skill_bridge::collect_skill_tools() and extends
tools_registry appears in both run() and the session processing block; extract
this into a helper like inject_skill_tools(tools_registry: &mut Vec<Box<dyn
Tool>>, context: &str) that calls collect_skill_tools(), logs using the provided
context, and extends tools_registry; then replace the duplicated code in run()
and the other function (e.g., process_message() / session handling) with calls
to inject_skill_tools(&mut tools_registry, "session") or inject_skill_tools(&mut
tools_registry, "channel") as appropriate.
In `@src/openhuman/agent/loop_/tool_loop.rs`:
- Line 291: The code redundantly calls find_tool(tools_registry, &call.name)
twice; change it to call find_tool once, store the Option result in a local
variable (e.g., let tool_opt = find_tool(tools_registry, &call.name)), use
tool_opt.is_some() for the tool_found diagnostic, and then reuse tool_opt
(unwrapping or pattern-matching) when invoking the tool execution path so you
don't perform the lookup twice; update any adjacent references around tool_found
and the execution branch to use this stored variable.
In `@src/openhuman/skills/types.rs`:
- Around line 194-246: Add unit tests covering ToolResult's helper constructors
and accessors: create tests that assert ToolResult::success(...) produces
is_error == false and that text() and output() return the provided text; that
ToolResult::error(...) sets is_error == true and text()/output() contain the
error message; and that ToolResult::json(...) produces is_error == false and
output() returns pretty-printed JSON while text() returns an empty string. Use
the ToolResult::success, ToolResult::error, ToolResult::json, .is_error,
.text(), and .output() symbols to locate code and include mixed-content cases
(vec![ToolContent::Text{...}, ToolContent::Json{...}]) to validate join ordering
and newline behavior.
In `@src/openhuman/tools/file_read.rs`:
- Around line 168-170: The test contains a duplicate assertion on the same
boolean (two occurrences of assert!(!result.is_error)); remove the redundant
assertion so only a single assert!(!result.is_error) remains alongside
assert_eq!(result.output(), "hello world"), locating the change where `result`
is asserted in file_read.rs (the block with `assert_eq!(result.output(), "hello
world")`).
In `@src/openhuman/tools/git_operations.rs`:
- Around line 197-200: Remove the duplicated assertion near the end of the
hunk-building block: delete the second (copy-pasted) assert that duplicates the
earlier one around the lines where current_hunk is pushed into hunks (the
duplicate refers to the same condition already asserted before hunks.push and
uses current_hunk); leave the original assertion in place, rebuild to ensure the
code compiles and tests pass.
In `@src/openhuman/tools/http_request.rs`:
- Around line 256-260: The error branch currently discards response details and
only returns Ok(ToolResult::error(format!("HTTP {}", status_code)))—update the
non-success path in the function handling the HTTP response to include the full
response output (headers/body) alongside the status code so callers can inspect
diagnostic details; specifically change the else branch that builds
ToolResult::error to incorporate the existing output variable (e.g., include
output and status_code in the error message or attach it to the ToolResult) so
that status, status_code, and output are preserved when status.is_success() is
false.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7276c13b-f796-4383-b3cf-0138e49dd413
📒 Files selected for processing (50)
- src/openhuman/agent/agent.rs
- src/openhuman/agent/harness/self_healing.rs
- src/openhuman/agent/loop_/session.rs
- src/openhuman/agent/loop_/tool_loop.rs
- src/openhuman/agent/prompt.rs
- src/openhuman/agent/tests.rs
- src/openhuman/agent/traits.rs
- src/openhuman/channels/tests/common.rs
- src/openhuman/skills/types.rs
- src/openhuman/tools/ask_clarification.rs
- src/openhuman/tools/browser.rs
- src/openhuman/tools/browser_open.rs
- src/openhuman/tools/composio.rs
- src/openhuman/tools/cron_add.rs
- src/openhuman/tools/cron_list.rs
- src/openhuman/tools/cron_remove.rs
- src/openhuman/tools/cron_run.rs
- src/openhuman/tools/cron_runs.rs
- src/openhuman/tools/cron_update.rs
- src/openhuman/tools/delegate.rs
- src/openhuman/tools/file_read.rs
- src/openhuman/tools/file_write.rs
- src/openhuman/tools/git_operations.rs
- src/openhuman/tools/hardware_board_info.rs
- src/openhuman/tools/hardware_memory_map.rs
- src/openhuman/tools/hardware_memory_read.rs
- src/openhuman/tools/http_request.rs
- src/openhuman/tools/image_info.rs
- src/openhuman/tools/insert_sql_record.rs
- src/openhuman/tools/local_cli.rs
- src/openhuman/tools/memory_forget.rs
- src/openhuman/tools/memory_recall.rs
- src/openhuman/tools/memory_store.rs
- src/openhuman/tools/mod.rs
- src/openhuman/tools/ops.rs
- src/openhuman/tools/proxy_config.rs
- src/openhuman/tools/pushover.rs
- src/openhuman/tools/read_diff.rs
- src/openhuman/tools/run_linter.rs
- src/openhuman/tools/run_tests.rs
- src/openhuman/tools/schedule.rs
- src/openhuman/tools/screenshot.rs
- src/openhuman/tools/shell.rs
- src/openhuman/tools/skill_bridge.rs
- src/openhuman/tools/spawn_subagent.rs
- src/openhuman/tools/tool_stats.rs
- src/openhuman/tools/traits.rs
- src/openhuman/tools/update_memory_md.rs
- src/openhuman/tools/web_search_tool.rs
- src/openhuman/tools/workspace_state.rs
```rust
Err(err) => {
    log::error!(
        "[skill-bridge] {}.{} execution failed: {}",
        self.skill_id,
        self.tool_name,
        err
    );
    Ok(ToolResult::error(err))
}
```
Avoid returning raw runtime error details to model-facing tool output.
ToolResult::error(err) may expose internal exception text (and potentially sensitive context) to downstream model/tool transcripts. Keep detailed diagnostics in logs; return a sanitized user-safe message.
Suggested change

```diff
 Err(err) => {
     log::error!(
         "[skill-bridge] {}.{} execution failed: {}",
         self.skill_id,
         self.tool_name,
         err
     );
-    Ok(ToolResult::error(err))
+    Ok(ToolResult::error(format!(
+        "Skill tool execution failed: {}__{}",
+        self.skill_id, self.tool_name
+    )))
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/tools/skill_bridge.rs` around lines 106 - 114, The handler
currently returns raw error details via ToolResult::error(err) which can leak
internal/ sensitive info; instead keep the full err in log::error!(...) and
return a sanitized, user-safe message to the model/tool output. Modify the Err
branch around self.skill_id and self.tool_name so you still log the detailed err
(using log::error! as-is) but call ToolResult::error with a generic message
(e.g., "Execution failed for {skill}.{tool}" or "Internal error executing tool")
rather than passing the original err value.
- cron_run: preserve execution details (result_output) on failure
- delegate: fix mismatched bracket in agent response header format string
- run_linter: include lint diagnostics in error output, not just exit code
- run_tests: include test output in error result, not just exit code
- skill_bridge: prevent namespaced tool-name collisions with dedup + __ delimiter validation; sanitize runtime errors in model-facing output

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (1)
src/openhuman/tools/skill_bridge.rs (1)
175-193: Consider adding tests for collision detection and schema fallback.

The current tests verify basic functionality, but coverage could be improved for:

- Delimiter rejection (tools with "__" in names should be skipped)
- Duplicate rejection (second tool with same namespaced name should be skipped)
- `parameters_schema()` fallback when input_schema is null or empty

These would help prevent regressions in the validation logic.
Example test cases
```rust
#[test]
fn delimiter_in_name_rejected() {
    // Would require mocking the engine or testing the validation logic directly
    let skill_id = "notion__extra";
    let tool_name = "search";
    assert!(skill_id.contains("__")); // This name should be rejected
}

#[test]
fn parameters_schema_fallback() {
    use serde_json::json;
    // Test that null schema gets a valid fallback
    let null_schema = serde_json::Value::Null;
    let empty_schema = json!({});
    // Both should trigger fallback to { "type": "object", "properties": {} }
    assert!(null_schema.is_null());
    assert_eq!(empty_schema, json!({}));
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/skill_bridge.rs` around lines 175 - 193, Add unit tests to cover delimiter rejection, duplicate rejection, and parameters_schema fallback: write tests that exercise the validation path used by collect_skill_tools() (or isolate and call its underlying validation helper if the global engine is hard to mock) to assert tools whose skill_id or tool_name contain "__" are skipped, that a second tool producing the same namespaced name is rejected, and that parameters_schema() returns the fallback schema when given serde_json::Value::Null or an empty object; reference the existing namespaced_name_format and collect_skill_tools tests as templates and assert expected skipped/kept outcomes and the exact fallback schema value { "type": "object", "properties": {} } for null/empty inputs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/openhuman/tools/skill_bridge.rs`:
- Around line 175-193: Add unit tests to cover delimiter rejection, duplicate
rejection, and parameters_schema fallback: write tests that exercise the
validation path used by collect_skill_tools() (or isolate and call its
underlying validation helper if the global engine is hard to mock) to assert
tools whose skill_id or tool_name contain "__" are skipped, that a second tool
producing the same namespaced name is rejected, and that parameters_schema()
returns the fallback schema when given serde_json::Value::Null or an empty
object; reference the existing namespaced_name_format and collect_skill_tools
tests as templates and assert expected skipped/kept outcomes and the exact
fallback schema value { "type": "object", "properties": {} } for null/empty
inputs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a914c528-f35f-4121-bf47-ef34a6d0ce69
📒 Files selected for processing (5)
- src/openhuman/tools/cron_run.rs
- src/openhuman/tools/delegate.rs
- src/openhuman/tools/run_linter.rs
- src/openhuman/tools/run_tests.rs
- src/openhuman/tools/skill_bridge.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- src/openhuman/tools/cron_run.rs
- src/openhuman/tools/delegate.rs
- src/openhuman/tools/run_linter.rs
Summary
- Adds `SkillToolBridge` to wrap skill `ToolDefinition`s as `Tool` trait impls and inject them into the agent registry at all entry points (Agent::from_config, session, channels).
- Unifies `ToolResult` across the codebase — built-in tools and skill tools now share a single MCP-spec `ToolResult` type (content: Vec<ToolContent>, is_error: bool), eliminating the need for result conversion between the two systems. Added convenience constructors (success/error/json) and accessor methods (output/text). Net -660 lines.

Test plan
- `cargo check` passes (0 errors)
- `cargo test --lib` — 1919 passed, 11 failed (all 11 pre-existing on main, platform-specific)
- `cargo fmt` clean
- `openhuman.agent_chat` RPC with "list my Notion pages" returned 20 real workspace pages through the agent tool loop

Closes #341
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Improvements