feat: improve sub-agent tooling, conversation timeline UX, and Tauri setup #646

senamakel merged 23 commits into tinyhumansai:main
Conversation
… integration

- Modified `package.json` scripts to consistently export `CEF_PATH` for all `cargo tauri` commands, ensuring a unified CEF binary distribution location.
- Removed the overlay window configuration from `tauri.conf.json` and updated related Rust functions to reflect this change, while retaining helper functions for potential future use.
- Updated documentation in `install.md` to clarify the importance of setting `CEF_PATH` for consistent CEF integration across builds.
- Enhanced the `ensure-tauri-cli.sh` script to set `CEF_PATH` and ensure proper installation of the vendored CEF-aware `tauri-cli`.

These changes streamline the development workflow and improve the reliability of CEF integration in the application.
… and helper apps

- Introduced a new `codesign_hardened` function to streamline the signing process with consistent options.
- Improved the signing logic for nested frameworks and helper applications, ensuring all binaries are signed correctly.
- Updated output messages for better clarity during the signing process, including detailed listings of bundle contents.
- Disabled the summarizer payload threshold in the configuration to prevent recursive invocations until the issue is resolved.

These changes improve the reliability and maintainability of the macOS signing and notarization workflow.
- Introduced the `current_time` tool to provide the current date and time in UTC and local time zones, facilitating scheduling and reminders.
- Updated `agent.toml` to include new tools: `current_time`, `cron_add`, `cron_list`, `cron_remove`, and `schedule`, enhancing the orchestrator's functionality.
- Expanded documentation in `prompt.md` to guide users on utilizing the new direct tools effectively.

These changes improve the orchestrator's ability to handle time-related queries and scheduling tasks directly, enhancing user experience.
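The payload shape implied by the tests discussed later in this review (`utc`, `local`, `unix_seconds`, and a `requested_timezone_error` field) can be sketched in TypeScript. Field names are assumptions drawn from that discussion, not the actual Rust implementation:

```typescript
// Hypothetical sketch of a current_time-style payload. The real tool is
// implemented in Rust; these field names are inferred from the review below.
interface CurrentTimePayload {
  utc: string;
  local: string;
  unix_seconds: number;
  requested_timezone?: string;
  requested_timezone_error?: string;
}

function currentTime(timezone?: string): CurrentTimePayload {
  const now = new Date();
  const payload: CurrentTimePayload = {
    utc: now.toISOString(),
    local: now.toString(),
    unix_seconds: Math.floor(now.getTime() / 1000),
  };
  if (timezone) {
    try {
      // Intl throws a RangeError for unknown IANA zone names.
      payload.requested_timezone = now.toLocaleString("en-US", {
        timeZone: timezone,
      });
    } catch {
      payload.requested_timezone_error = `unknown timezone: ${timezone}`;
    }
  }
  return payload;
}
```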
… HTML to markdown

- Added a new `post_process` module specifically for Gmail, which modifies action responses to convert HTML content into markdown format, improving usability and reducing context token usage.
- Enhanced the `ComposioProvider` trait with a `post_process_action_result` method to allow providers to handle response modifications.
- Introduced a `post_process` function that checks for a `raw_html` flag in the arguments to determine whether to apply the conversion.
- Implemented tests to validate the HTML detection and conversion logic, ensuring the integrity of the post-processing functionality.

These changes enhance the handling of Gmail responses, making them more suitable for further processing and display in the application.
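As a rough illustration of the `raw_html` gate, here is a minimal TypeScript sketch. The real module is Rust and performs a proper HTML-to-markdown conversion; the flag polarity assumed here (`raw_html: true` meaning "skip conversion") and the helper names are hypothetical:

```typescript
// Illustrative only: trivial HTML detection and tag stripping standing in
// for the real Gmail post_process conversion. Names are invented.
function looksLikeHtml(body: string): boolean {
  return /<\s*(html|body|div|table|p|br|span)\b/i.test(body);
}

function stripHtml(body: string): string {
  return body
    .replace(/<style[\s\S]*?<\/style>/gi, "") // drop inline CSS noise
    .replace(/<[^>]+>/g, " ")                 // remove remaining tags
    .replace(/\s+/g, " ")
    .trim();
}

function postProcess(args: { raw_html?: boolean }, body: string): string {
  // Only convert when the caller did not explicitly ask for raw HTML
  // (assumed polarity) and the body actually looks like HTML.
  if (args.raw_html || !looksLikeHtml(body)) return body;
  return stripHtml(body);
}
```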
…budget

- Enhanced the `post_process` module for Gmail to handle large HTML payloads more efficiently, implementing a fallback mechanism for oversized content.
- Updated `DEFAULT_TOOL_RESULT_BUDGET_BYTES` to `0`, disabling the budget temporarily while reworking the oversized-output path.
- Refined the `extract_markdown_body` function to better manage HTML content, ensuring cleaner markdown output and improved performance.
- Added utility functions for stripping HTML noise and handling large email bodies, enhancing the overall robustness of the email processing logic.

These changes optimize the handling of Gmail responses, improving usability and performance in processing large HTML content.
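The budget semantics implied here (a value of `0` disables truncation) can be sketched as follows; `applyToolResultBudget` is a hypothetical name for illustration, not the actual Rust function:

```typescript
// Minimal sketch of a byte-budget guard for tool results, assuming the
// semantics described above: a budget of 0 disables truncation entirely.
const DEFAULT_TOOL_RESULT_BUDGET_BYTES = 0; // 0 = disabled while the oversized-output path is reworked

function applyToolResultBudget(output: string, budgetBytes: number): string {
  if (budgetBytes <= 0) return output; // budget disabled
  const bytes = new TextEncoder().encode(output);
  if (bytes.length <= budgetBytes) return output;
  const truncated = new TextDecoder().decode(bytes.slice(0, budgetBytes));
  return `${truncated}\n[truncated: ${bytes.length - budgetBytes} bytes omitted]`;
}
```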
…nt messages

- Added a new `generateTitleIfNeeded` function in the `threadApi` to create a thread title based on the first user message and the assistant's reply.
- Introduced a new `GenerateConversationThreadTitleRequest` struct to handle requests for title generation.
- Updated the `ChatRuntimeProvider` to dispatch the title generation action after processing inference responses.
- Enhanced the `threadSlice` with a new async thunk for generating thread titles, ensuring proper error handling and thread loading.
- Added tests for the new title generation functionality to validate the integration with the threads RPC.

These changes improve the user experience by automatically generating relevant thread titles, enhancing the organization of conversations.
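A hedged sketch of the "if needed" guard, assuming the `"Chat ..."` placeholder convention mentioned in the review discussion and the `openhuman.threads_generate_title` RPC from the sequence diagram; the guard logic here is illustrative, not the actual implementation:

```typescript
// Sketch: only generate a title when the thread still carries an
// auto-generated placeholder. The placeholder prefix is an assumption.
interface Thread {
  id: string;
  title: string;
}

function isAutoGeneratedTitle(title: string): boolean {
  return title.trim() === "" || title.startsWith("Chat ");
}

async function generateTitleIfNeeded(
  thread: Thread,
  assistantMessage: string | undefined,
  callRpc: (method: string, params: unknown) => Promise<Thread>,
): Promise<Thread> {
  if (!isAutoGeneratedTitle(thread.title)) return thread; // already named
  return callRpc("openhuman.threads_generate_title", {
    thread_id: thread.id,
    assistant_message: assistantMessage,
  });
}
```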
- Implemented an `update_thread_title` method in `ConversationStore` to allow updating the title of existing conversation threads.
- Added a corresponding public function `update_thread_title` for external access.
- Enhanced tests to verify that thread titles are correctly updated and persisted in the store.

These changes improve the management of conversation threads by enabling dynamic title updates, enhancing user experience and organization.
- Introduced a new document detailing the runtime flow of the agent harness, including execution paths for main agents and tools.
- Explained the differences between typed and fork subagents, and provided guidance for debugging harness and delegation issues.
- Included a file map outlining key components and their roles within the Rust implementation.
- Added a flow diagram to visually represent the interaction between agents, tools, and subagents.

These changes enhance the understanding of the agent architecture and improve the documentation for developers working with the system.
…ized results

- Introduced the `extract_from_result` tool to allow targeted queries against oversized tool outputs, improving efficiency by directly interacting with the extraction model.
- Added a `ResultHandoffCache` to manage oversized payloads, enabling progressive disclosure and reducing context length issues in sub-agent history.
- Implemented hygiene helpers for cleaning tool outputs before caching, ensuring only relevant data is stored.
- Enhanced the sub-agent runner with new modules for tool preparation and execution, streamlining the overall agent workflow.

These changes enhance the sub-agent's ability to handle large tool results effectively, improving performance and user experience.
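The handoff idea can be sketched as follows. `ResultHandoffCache` here is a toy TypeScript stand-in for the Rust implementation, and the stub format is invented for illustration:

```typescript
// Sketch: oversized tool outputs are parked in a cache and replaced in
// history by a small stub the model can query via an extract tool.
class ResultHandoffCache {
  private entries = new Map<string, string>();
  private nextId = 0;

  store(payload: string): string {
    const id = `handoff-${this.nextId++}`;
    this.entries.set(id, payload);
    return id;
  }

  get(id: string): string | undefined {
    return this.entries.get(id);
  }
}

function handoff(
  cache: ResultHandoffCache,
  output: string,
  maxBytes: number,
): string {
  if (output.length <= maxBytes) return output; // small results pass through
  const id = cache.store(output);
  // The stub tells the model how to progressively disclose the payload.
  return `[oversized result ${id}: ${output.length} bytes cached; use extract_from_result to query it]`;
}
```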
…try formatting

- Introduced new utility functions for parsing and rendering agent messages, including `splitAgentMessageIntoBubbles` and `parseMarkdownTable`, to improve the display of messages in conversation threads.
- Implemented `BubbleMarkdown` and `TableCellMarkdown` components for better formatting of user and agent messages, ensuring consistent styling and interaction.
- Enhanced the `formatTimelineEntry` function to provide clearer titles and details for tool timeline entries, improving the user experience during interactions with subagents.
- Updated the `ChatRuntimeProvider` to utilize the new formatting functions, ensuring that tool timeline entries are displayed with relevant context and detail.

These changes improve the overall presentation and usability of conversation messages and tool interactions, enhancing user engagement and clarity.
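A simplified sketch of the bubble-splitting idea: blank-line-separated paragraphs become separate bubbles while fenced code stays intact. This illustrates the concept only; the actual `splitAgentMessageIntoBubbles` may behave differently:

```typescript
// Sketch of paragraph-to-bubble splitting with code-fence awareness.
function splitIntoBubbles(message: string): string[] {
  const bubbles: string[] = [];
  let current: string[] = [];
  let inFence = false;
  for (const line of message.split("\n")) {
    if (line.trimStart().startsWith("```")) inFence = !inFence;
    if (line.trim() === "" && !inFence) {
      // A blank line outside a fence closes the current bubble.
      if (current.length) bubbles.push(current.join("\n"));
      current = [];
    } else {
      current.push(line);
    }
  }
  if (current.length) bubbles.push(current.join("\n"));
  return bubbles;
}
```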
- Introduced a filter to prevent sub-agents from invoking their own spawning tools, specifically `spawn_subagent` and `delegate_*`, to avoid recursion issues and ensure proper delegation by the top-level orchestrator.
- Updated the `is_subagent_spawn_tool` function to identify these tools and integrated checks in both `run_typed_mode` and `run_fork_mode` to maintain the integrity of the sub-agent execution environment.
- Enhanced logging to track the removal of restricted tools from the sub-agent's tool surface, improving observability and debugging capabilities.

These changes strengthen the sub-agent architecture by enforcing strict boundaries on tool invocation, enhancing stability and performance.
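The filter can be sketched in TypeScript (the real `is_subagent_spawn_tool` is Rust). The matching rule, the exact name `spawn_subagent` plus the `delegate_` prefix, follows the description above; the logging detail is an assumption:

```typescript
// Sketch of the spawn-tool filter applied to a sub-agent's tool surface.
function isSubagentSpawnTool(name: string): boolean {
  return name === "spawn_subagent" || name.startsWith("delegate_");
}

function filterSubagentTools(tools: string[]): string[] {
  const removed = tools.filter(isSubagentSpawnTool);
  if (removed.length) {
    // Log removals for observability, mirroring the enhanced logging above.
    console.debug(`[subagent] removed restricted tools: ${removed.join(", ")}`);
  }
  return tools.filter((t) => !isSubagentSpawnTool(t));
}
```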
…line entry formatting

- Introduced the `ToolTimelineBlock` component to display tool timeline entries with improved formatting and user interaction, including auto-expansion for running entries.
- Enhanced the `formatTimelineEntry` function to include user-friendly titles for specific tool actions, such as 'Viewing your Integrations'.
- Updated the rendering logic in the `Conversations` component to filter and display visible messages more effectively, improving user experience during conversations.

These changes enhance the clarity and usability of tool interactions within conversation threads, providing users with better context and engagement.
…matting consistency

- Consolidated import statements in `Conversations.tsx` and `ChatRuntimeProvider.tsx` for improved readability.
- Refactored the `ToolTimelineBlock` component to simplify its props structure.
- Enhanced formatting consistency in the rendering logic of agent message bubbles and timeline entries, ensuring cleaner code and better maintainability.
- Updated test cases for `splitAgentMessageIntoBubbles` and `formatTimelineEntry` to reflect formatting changes and ensure accuracy.

These changes improve code clarity and maintainability while enhancing the overall user experience in conversation threads.
> Caution: Review failed. The pull request is closed.

ℹ️ Recent review info: Configuration used: defaults; Review profile: CHILL; Plan: Pro; Files selected for processing: 14
📝 Walkthrough

Modularizes the subagent runner and adds oversized-result handoff with an extract tool; introduces provider-side Composio post-processing (Gmail); adds a thread title generation RPC and frontend wiring; implements agent message bubble parsing and tool-timeline formatting; fixes the CEF/Tauri build and updates macOS signing; adds the current_time tool and tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Frontend as Frontend (Conversations.tsx)
    participant ThreadAPI as Thread API (app/threadApi)
    participant RPC as RPC Handler (schemas.rs)
    participant ThreadOps as Thread Ops (thread_generate_title)
    participant Provider as Provider Runtime (LLM)
    participant ThreadStore as Conversation Store
    User->>Frontend: request title generation (threadId, assistantMessage?)
    Frontend->>ThreadAPI: generateTitleIfNeeded(threadId, assistantMessage)
    ThreadAPI->>RPC: openhuman.threads_generate_title RPC
    RPC->>ThreadOps: thread_generate_title(request)
    ThreadOps->>ThreadStore: load thread & messages
    ThreadOps->>Provider: provider.chat_with_system(prompt)
    Provider-->>ThreadOps: generated title text
    ThreadOps->>ThreadStore: update_thread_title(threadId, title)
    ThreadStore-->>ThreadOps: updated summary
    ThreadOps-->>RPC: RpcOutcome(updated summary)
    RPC-->>ThreadAPI: envelope result
    ThreadAPI-->>Frontend: updated thread object
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 3 passed
…eRefresh

- Eliminated the duplicate import statement for `requestUsageRefresh` in `ChatRuntimeProvider.tsx`, streamlining the code for better readability and maintainability.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/openhuman/agent/agents/integrations_agent/prompt.md (2)
Lines 37-42: ⚠️ Potential issue | 🟠 Major: Don’t tell the model it is returning structured data to the orchestrator.
The subagent boundary only returns the child’s final text, not the underlying dataset. This wording encourages impossible behavior and can produce answers that imply an export/persist step happened when only prose came back from `spawn_subagent`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/agents/integrations_agent/prompt.md` around lines 37 - 42, Update the "Path B — caller wants the dataset" prompt text so it no longer instructs the model to tell the orchestrator it is returning structured data or to imply persistence; instead instruct the subagent spawned via spawn_subagent to return a concise inline structured representation (count, key highlights, representative identifiers) and explicitly state it must not claim it exported, saved, or persisted the data or that the orchestrator performed file I/O. Locate and edit the "Path B — caller wants the dataset" section in prompt.md and remove or rephrase the sentence that says "you are returning the structured data so the orchestrator can persist it" so the child only emits the final textual/structured payload without implying any downstream persistence.
Lines 5-12: ⚠️ Potential issue | 🟠 Major: Document `extract_from_result` in the tool surface.

Oversized-result runs now add `extract_from_result` as a system tool, but this prompt says the agent has no capability outside the Composio surface. That conflict lands on the exact path meant to handle large payloads, so the model may avoid the extractor even when the runtime expects it to use it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/agents/integrations_agent/prompt.md` around lines 5 - 12, Update the tool surface in the prompt to explicitly document the system tool extract_from_result so the agent knows it is allowed to call it despite the “no other capability” sentence: add a bullet describing extract_from_result (its name, expected input schema/usage, and that it is provided by the runtime for oversized-result runs), and clarify that it is a permitted system tool alongside composio_list_tools and composio_execute so the agent will invoke extract_from_result when handling large payloads.

app/src/pages/Conversations.tsx (1)
Lines 701-717: ⚠️ Potential issue | 🟠 Major: The first-title trigger now races the assistant reply and can noop permanently.
This calls `generateThreadTitleIfNeeded` immediately after the first user message append, but `thread_generate_title` bails when no assistant message exists. Since nothing retries it here and no `assistantMessage` is passed, first-message threads can stay stuck on the `"Chat ..."` placeholder depending on timing. The detached block also ignores `addMessageLocal` failure while `chatSend` continues.

Trigger title generation from the `chat_done` path once the first assistant reply is available, or pass the final assistant text into `generateThreadTitleIfNeeded({ assistantMessage: ... })` after that response has been persisted.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/Conversations.tsx` around lines 701 - 717, The detached async IIFE that calls generateThreadTitleIfNeeded right after addMessageLocal can race with the assistant reply and silently noop; remove that immediate trigger and instead invoke generateThreadTitleIfNeeded from the chat_done path once the assistant reply is persisted (or, if you prefer keeping it here, await chatSend and pass the persisted assistant text via assistantMessage into generateThreadTitleIfNeeded({ threadId: sendingThreadId, assistantMessage: ... })); also stop ignoring addMessageLocal errors (handle/rethrow/log) so failures don't get lost. Ensure you update usage sites of shouldGenerateTitleFromFirstMessage, userMessage, addMessageLocal, and generateThreadTitleIfNeeded accordingly.
🧹 Nitpick comments (6)
src/openhuman/tools/impl/system/current_time.rs (2)
Lines 113-142: Harden tests by parsing JSON and asserting typed fields.

Current assertions rely on substring matching, which is brittle. Parsing `result.output()` into JSON and asserting key presence/types will make these tests more reliable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/impl/system/current_time.rs` around lines 113 - 142, Update the three tests to parse result.output() as JSON (use serde_json::from_str) instead of substring checks; for returns_utc_and_local assert the top-level object contains keys "utc" and "local" as strings and "unix_seconds" as a number, for converts_requested_timezone parse the JSON and assert presence and type of "requested_timezone" (string) and that its value contains "Asia/Kolkata", and for unknown_timezone_reports_error_field assert the parsed object contains a string "requested_timezone_error"; use CurrentTimeTool::new().execute(...) and result.output() to obtain the raw JSON to parse and unwrap any deserialization errors in the tests.
Lines 58-93: Expand debug logging across entry/branches/error paths.

This new flow currently logs only at return. Add structured debug logs for execution start, timezone parse success, and the invalid-timezone branch to improve traceability.
As per coding guidelines:
`src/**/*.rs`: “Add substantial, development-oriented logs on new/changed flows; include logs at entry/exit points, branch decisions, external calls, retries/timeouts, state transitions, and error handling paths.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/impl/system/current_time.rs` around lines 58 - 93, The execute method in current_time.rs only logs on return; add structured tracing::debug (or tracing::info where appropriate) at entry of async fn execute(&self, args: ...), immediately after extracting/normalizing timezone input (tz_name/trimmed), inside the Ok(tz) branch when conversion succeeds (include tz name and converted time), and inside the Err(_) branch when timezone parsing fails (include the invalid trimmed value and the error context); ensure logs reference key symbols like now_utc, now_local, tz_name/trimmed, parse::<Tz>(), converted and payload so branch decisions and outcomes are recorded before the final tracing::debug and before returning ToolResult::success.

src/openhuman/tools/ops.rs (1)
Line 87: Add a regression test asserting `current_time` is registered.

This new default capability should have a dedicated presence check (similar to `spawn_subagent`/`complete_onboarding`) to prevent accidental removal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/ops.rs` at line 87, Add a regression test that asserts the new default capability "current_time" is registered: locate the tools registration in ops.rs where CurrentTimeTool::new() is added (the Box::new(CurrentTimeTool::new()) entry) and add a test that checks the tools/capabilities registry (the same test pattern used for spawn_subagent and complete_onboarding) contains a "current_time" entry; implement the assertion using the same helper or registry lookup used by the existing spawn_subagent/complete_onboarding tests so the test will fail if "current_time" is accidentally removed.

src/openhuman/composio/tools.rs (1)
Lines 485-491: Consider adding `init_default_providers()` for consistency with `ops.rs`.

`ops.rs::composio_execute` (lines 190-200) explicitly calls `super::providers::init_default_providers()` before looking up the provider, with a comment explaining that CLI/RPC one-shot paths never boot the bus. This `tools.rs` path relies on the bus having been started, but if `ComposioExecuteTool::execute` is ever invoked in a context where the bus hasn't initialized providers, the post-processing will silently be skipped.

Since `init_default_providers()` is documented as idempotent, adding the same guard here would ensure consistent behavior across all execution paths.

Additionally, consider adding a trace/debug log when post-processing is applied to aid debugging per the coding guideline: "Add substantial, development-oriented logs on new/changed flows."
♻️ Proposed change
```diff
 if resp.successful {
     if let Some(toolkit) = toolkit_from_slug(&tool) {
+        // Ensure providers are registered (idempotent; covers edge
+        // cases where tools are invoked before the bus starts).
+        super::providers::init_default_providers();
         if let Some(provider) = get_provider(&toolkit) {
+            tracing::trace!(
+                tool = %tool,
+                toolkit = %toolkit,
+                "[composio] applying provider post-processing"
+            );
             provider.post_process_action_result(
                 &tool,
                 arguments.as_ref(),
                 &mut resp.data,
             );
         }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/composio/tools.rs` around lines 485 - 491, The post-processing path in ComposioExecuteTool::execute relies on providers being registered and should call super::providers::init_default_providers() (idempotent) before get_provider(&toolkit) to ensure providers exist in one-shot contexts; add that call just before get_provider(&toolkit) and, when a provider is found and provider.post_process_action_result is invoked, emit a trace/debug log indicating post-processing was applied (include tool name/identifier and whether arguments are present) to aid debugging and match ops.rs behavior.

src/openhuman/composio/ops.rs (1)
Lines 180-201: Excellent documentation; consider adding a trace log for runtime observability.

The inline comments clearly explain the design rationale for calling `init_default_providers()` in this path. For runtime debugging, consider adding a trace-level log when post-processing is applied, consistent with the coding guideline: "Add substantial, development-oriented logs on new/changed flows."

📝 Proposed change
```diff
 if resp.successful {
     super::providers::init_default_providers();
     if let Some(toolkit) = super::providers::toolkit_from_slug(tool) {
         if let Some(provider) = super::providers::get_provider(&toolkit) {
+            tracing::trace!(
+                tool = %tool,
+                toolkit = %toolkit,
+                "[composio] rpc execute: applying provider post-processing"
+            );
             provider.post_process_action_result(
                 tool,
                 arguments.as_ref(),
                 &mut resp.data,
             );
         }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/composio/ops.rs` around lines 180 - 201, Add a trace-level log before calling provider.post_process_action_result to improve runtime observability: inside the resp.successful branch (in ops.rs) after toolkit_from_slug(tool) and get_provider(&toolkit) succeed but before provider.post_process_action_result(tool, arguments.as_ref(), &mut resp.data), emit a trace with the toolkit slug, tool name, and whether arguments is present (e.g. tracing::trace!("post-processing result: toolkit={} tool={} has_args={}", toolkit.slug(), tool, arguments.is_some())). This follows the existing logging guideline and uses the same tracing/log crate the repo uses.

app/src/utils/__tests__/agentMessageBubbles.test.ts (1)
Lines 15-19: Test title and assertion intent are mismatched.

The description says “distinct bubbles,” but the expected value is a single bubble. Renaming this test would reduce confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/utils/__tests__/agentMessageBubbles.test.ts` around lines 15 - 19, The test title for the case using splitAgentMessageIntoBubbles is misleading: it says it "splits newline-separated lines into distinct bubbles" but asserts a single-bubble result; either rename the test to reflect that it expects a single bubble or update the expected value to an array of distinct bubbles (e.g., ["First line","Second line","Third line"]). Locate the test in agentMessageBubbles.test.ts around the it(...) that references splitAgentMessageIntoBubbles and change the string description or the expect(...) assertion to make title and expected behavior consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/providers/ChatRuntimeProvider.tsx`:
- Around line 512-520: For non-streamed final replies (event.segment_total ===
0) await dispatch(addInferenceResponse({ content: event.full_response, threadId:
event.thread_id })) before calling dispatch(endInferenceTurn({ threadId:
event.thread_id })) and dispatch(setActiveThread(null)) so the turn stays active
until the final assistant message is appended; for streamed segments keep the
existing flow. Update the code around addInferenceResponse, endInferenceTurn,
and setActiveThread to perform the awaits and move the end/setActive calls
inside the non-streamed branch guarded by event.segment_total.
In `@app/src/services/api/threadApi.ts`:
- Around line 53-59: Add namespaced debug logs around the generateTitleIfNeeded
flow: log an entry message with the threadId and assistantMessage before calling
callCoreRpc, a success message with the returned Envelope/Thread after
unwrapEnvelope, and an error/failure log if callCoreRpc or unwrapEnvelope
throws. Use the existing debug/logger utility and a clear namespace like
"threadApi.generateTitleIfNeeded" (referencing the generateTitleIfNeeded
function, callCoreRpc, and unwrapEnvelope) and ensure logs are
dev-only/detail-only to avoid noisy production output.
In `@app/src/store/threadSlice.ts`:
- Around line 157-167: The thunk currently rejects if the refresh step fails
even when generateTitleIfNeeded succeeded; change the flow so
generateTitleIfNeeded(payload.threadId, payload.assistantMessage) is awaited in
its own try and if it succeeds you return the thread (or proceed) even if
dispatch(loadThreads()).unwrap() later fails. Concretely, call
generateTitleIfNeeded and capture its result; then call
dispatch(loadThreads()).unwrap() inside a separate try/catch that logs or
handles the refresh error but does NOT call rejectWithValue; only call
rejectWithValue when generateTitleIfNeeded throws. Update the code paths around
generateTitleIfNeeded, dispatch(loadThreads), and rejectWithValue to reflect
this separation.
In `@scripts/release/sign-and-notarize-macos.sh`:
- Line 75: The echo uses an unquoted parameter substitution (${item#$APP_PATH/})
which treats $APP_PATH as a glob pattern; change the substitution to use a
quoted expansion and nested braces so the prefix is literal (use
${item#${APP_PATH}/}) and quote the result in the echo to prevent
globbing/word-splitting (i.e., echo "[sign] Signing lib:
\"${item#${APP_PATH}/}\""), updating the echo line that references
${item#$APP_PATH/}.
In `@src/openhuman/agent/agents/orchestrator/prompt.md`:
- Around line 41-45: The guidance in prompt.md currently assumes cron_add is
always available; update the text around current_time / cron_add (the example
using schedule = {kind:"at", at:"<iso-time>"} with job_type:"agent" and the
prompt payload) to be conditional: note that callers must check feature/config
availability of cron_add (or the cron feature flag) and only instruct agents to
call cron_add when it's enabled; if cron_add is disabled or returns an error,
provide an alternate suggested behavior or explicitly say the agent should
inform the user it cannot schedule reminders. Adjust the wording so it no longer
guarantees reminders unconditionally and references current_time, cron_add, the
schedule object, job_type:"agent", and prompt.
In `@src/openhuman/agent/harness/subagent_runner/extract_tool.rs`:
- Around line 314-317: The current code treats "no matches" as an error (e.g.,
the partials.is_empty() branch and the similar branch around lines 373-379) but
the protocol expects a valid empty extraction; change those branches to return a
successful ToolResult containing an empty string (not ToolResult::error) so
callers can distinguish "nothing found" from a tool failure. Locate the branches
that currently call ToolResult::error in extract_from_result (refer to the
partials.is_empty() check and the analogous block later) and replace them to
return a success ToolResult with an empty string payload; preserve the Ok(...)
wrapping and any logging but do not mark these cases as errors.
In `@src/openhuman/config/schema/context.rs`:
- Around line 147-151: Update the documentation for the configuration field
summarizer_payload_threshold_tokens to reflect the new default of 0 (disabled)
instead of 500_000; locate the field declaration or its doc comment in
context.rs (the summarizer_payload_threshold_tokens doc block) and change the
text to state that a value of 0 disables the payload summarizer by default and
describe the behavior when set >0 (i.e., threshold in tokens to enable
summarization), keeping the rest of the description intact.
In `@src/openhuman/context/tool_result_budget.rs`:
- Around line 21-30: The default tool-result budget is set to zero which
disables the inline pre-history truncation and allows oversized parent-session
tool outputs to flow into history; change DEFAULT_TOOL_RESULT_BUDGET_BYTES from
0 to a sensible non-zero default (e.g., a few kilobytes) so
apply_tool_result_budget will run by default; update any related
documentation/comments and ensure this interacts correctly with
ContextConfig::summarizer_payload_threshold_tokens and the Agent::turn caller
semantics so the summarizer/handoff flow still functions while preventing
unbounded parent-loop tool results.
In `@src/openhuman/memory/conversations/store.rs`:
- Around line 136-139: The current log line prints user-derived `title` verbatim
which may leak PII; change the log in the thread-update path to avoid raw
`title` by redacting or replacing it with a safe representation (e.g., fixed
placeholder like "<redacted>", a truncated prefix + length, or a deterministic
hash) and keep non-sensitive fields (`thread_id`, `threads_path.display()`,
`LOG_PREFIX`) intact; update the statement that currently formats `"{LOG_PREFIX}
updated thread title id={} title={} path={}"` to use the chosen redaction method
for `title` wherever it's emitted.
In `@src/openhuman/threads/ops.rs`:
- Around line 239-244: The current logs print user-derived content
(thread.title, updated.title, raw_title) directly; update the logging in the
blocks that call is_auto_generated_thread_title and the other places (the
sections referencing updated.title and raw_title) to avoid leaking PII by
redacting or omitting the full string: log only safe metadata such as a
placeholder flag (e.g., "is_auto_generated=true/false"), the title length, and a
hash or truncated fingerprint (e.g., first N chars or a short hash) instead of
the raw value, and use the same approach for updated.title and raw_title so
tracing::debug calls never include full user text. Ensure the logged fields
reference the same symbols (is_auto_generated_thread_title, thread.title,
updated.title, raw_title) so reviewers can find and confirm the changes.
---
Outside diff comments:
In `@app/src/pages/Conversations.tsx`:
- Around line 701-717: The detached async IIFE that calls
generateThreadTitleIfNeeded right after addMessageLocal can race with the
assistant reply and silently noop; remove that immediate trigger and instead
invoke generateThreadTitleIfNeeded from the chat_done path once the assistant
reply is persisted (or, if you prefer keeping it here, await chatSend and pass
the persisted assistant text via assistantMessage into
generateThreadTitleIfNeeded({ threadId: sendingThreadId, assistantMessage: ...
})); also stop ignoring addMessageLocal errors (handle/rethrow/log) so failures
don't get lost. Ensure you update usage sites of
shouldGenerateTitleFromFirstMessage, userMessage, addMessageLocal, and
generateThreadTitleIfNeeded accordingly.
In `@src/openhuman/agent/agents/integrations_agent/prompt.md`:
- Around line 37-42: Update the "Path B — caller wants the dataset" prompt text
so it no longer instructs the model to tell the orchestrator it is returning
structured data or to imply persistence; instead instruct the subagent spawned
via spawn_subagent to return a concise inline structured representation (count,
key highlights, representative identifiers) and explicitly state it must not
claim it exported, saved, or persisted the data or that the orchestrator
performed file I/O. Locate and edit the "Path B — caller wants the dataset"
section in prompt.md and remove or rephrase the sentence that says "you are
returning the structured data so the orchestrator can persist it" so the child
only emits the final textual/structured payload without implying any downstream
persistence.
- Around line 5-12: Update the tool surface in the prompt to explicitly document
the system tool extract_from_result so the agent knows it is allowed to call it
despite the “no other capability” sentence: add a bullet describing
extract_from_result (its name, expected input schema/usage, and that it is
provided by the runtime for oversized-result runs), and clarify that it is a
permitted system tool alongside composio_list_tools and composio_execute so the
agent will invoke extract_from_result when handling large payloads.
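The oversized-result flow described above can be sketched as a small driver. Only the tool names (`composio_execute`, `extract_from_result`) and the `result_id`/`query` argument pair come from the review; the `runExtractionAware` helper, the payload shapes, and the mock runtime are hypothetical illustrations, not the repo's actual API.

```typescript
// Hypothetical sketch: when a tool run returns a result_id placeholder for an
// oversized payload, follow up with extract_from_result and a narrow query.
type ToolCall = { tool: string; args: Record<string, unknown> };
type ToolResult = { result_id?: string; data?: unknown };

async function runExtractionAware(
  execute: (call: ToolCall) => Promise<ToolResult>,
  call: ToolCall,
  query: string
): Promise<unknown> {
  const first = await execute(call);
  if (first.result_id !== undefined) {
    // Oversized result: the runtime kept the raw payload server-side,
    // so extract only the slice the caller asked for.
    const extracted = await execute({
      tool: "extract_from_result",
      args: { result_id: first.result_id, query },
    });
    return extracted.data;
  }
  return first.data;
}

// Mock runtime: the first call is "too large" and returns only a placeholder.
const mockExecute = async (call: ToolCall): Promise<ToolResult> => {
  if (call.tool === "extract_from_result") {
    return { data: `slice for query: ${call.args.query}` };
  }
  return { result_id: "res-123" };
};

runExtractionAware(mockExecute, { tool: "composio_execute", args: {} }, "unread count")
  .then((out) => console.log(out)); // → slice for query: unread count
```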
---
Nitpick comments:
In `@app/src/utils/__tests__/agentMessageBubbles.test.ts`:
- Around line 15-19: The test title for the case using
splitAgentMessageIntoBubbles is misleading: it says it "splits newline-separated
lines into distinct bubbles" but asserts a single-bubble result; either rename
the test to reflect that it expects a single bubble or update the expected value
to an array of distinct bubbles (e.g., ["First line","Second line","Third
line"]). Locate the test in agentMessageBubbles.test.ts around the it(...) that
references splitAgentMessageIntoBubbles and change the string description or the
expect(...) assertion to make title and expected behavior consistent.
In `@src/openhuman/composio/ops.rs`:
- Around line 180-201: Add a trace-level log before calling
provider.post_process_action_result to improve runtime observability: inside the
resp.successful branch (in ops.rs) after toolkit_from_slug(tool) and
get_provider(&toolkit) succeed but before
provider.post_process_action_result(tool, arguments.as_ref(), &mut resp.data),
emit a trace with the toolkit slug, tool name, and whether arguments is present
(e.g. tracing::trace!("post-processing result: toolkit={} tool={} has_args={}",
toolkit.slug(), tool, arguments.is_some())). This follows the existing logging
guideline and uses the same tracing/log crate the repo uses.
In `@src/openhuman/composio/tools.rs`:
- Around line 485-491: The post-processing path in ComposioExecuteTool::execute
relies on providers being registered and should call
super::providers::init_default_providers() (idempotent) before
get_provider(&toolkit) to ensure providers exist in one-shot contexts; add that
call just before get_provider(&toolkit) and, when a provider is found and
provider.post_process_action_result is invoked, emit a trace/debug log
indicating post-processing was applied (include tool name/identifier and whether
arguments are present) to aid debugging and match ops.rs behavior.
In `@src/openhuman/tools/impl/system/current_time.rs`:
- Around line 113-142: Update the three tests to parse result.output() as JSON
(use serde_json::from_str) instead of substring checks; for
returns_utc_and_local assert the top-level object contains keys "utc" and
"local" as strings and "unix_seconds" as a number, for
converts_requested_timezone parse the JSON and assert presence and type of
"requested_timezone" (string) and that its value contains "Asia/Kolkata", and
for unknown_timezone_reports_error_field assert the parsed object contains a
string "requested_timezone_error"; use CurrentTimeTool::new().execute(...) and
result.output() to obtain the raw JSON to parse and unwrap any deserialization
errors in the tests.
- Around line 58-93: The execute method in current_time.rs only logs on return;
add structured tracing::debug (or tracing::info where appropriate) at entry of
async fn execute(&self, args: ...), immediately after extracting/normalizing
timezone input (tz_name/trimmed), inside the Ok(tz) branch when conversion
succeeds (include tz name and converted time), and inside the Err(_) branch when
timezone parsing fails (include the invalid trimmed value and the error
context); ensure logs reference key symbols like now_utc, now_local,
tz_name/trimmed, parse::<Tz>(), converted and payload so branch decisions and
outcomes are recorded before the final tracing::debug and before returning
ToolResult::success.
In `@src/openhuman/tools/ops.rs`:
- Line 87: Add a regression test that asserts the new default capability
"current_time" is registered: locate the tools registration in ops.rs where
CurrentTimeTool::new() is added (the Box::new(CurrentTimeTool::new()) entry) and
add a test that checks the tools/capabilities registry (the same test pattern
used for spawn_subagent and complete_onboarding) contains a "current_time"
entry; implement the assertion using the same helper or registry lookup used by
the existing spawn_subagent/complete_onboarding tests so the test will fail if
"current_time" is accidentally removed.
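The JSON-shape testing style suggested for `current_time` can be illustrated in TypeScript: parse the raw output and assert key types rather than substring-matching. The field names (`utc`, `local`, `unix_seconds`, `requested_timezone`) are taken from the review comment; the sample payload and the checker function are made up for the sketch.

```typescript
// Validate the assumed current_time payload shape by parsing, not substring checks.
function checkCurrentTimePayload(raw: string): void {
  const parsed = JSON.parse(raw) as Record<string, unknown>;
  if (typeof parsed.utc !== "string") throw new Error("utc must be a string");
  if (typeof parsed.local !== "string") throw new Error("local must be a string");
  if (typeof parsed.unix_seconds !== "number") throw new Error("unix_seconds must be a number");
  if (
    parsed.requested_timezone !== undefined &&
    typeof parsed.requested_timezone !== "string"
  ) {
    throw new Error("requested_timezone must be a string when present");
  }
}

// A made-up payload in the shape the tool is described as emitting.
const sample = JSON.stringify({
  utc: "2024-01-01T00:00:00Z",
  local: "2024-01-01 05:30:00 IST",
  unix_seconds: 1704067200,
  requested_timezone: "Asia/Kolkata",
});
checkCurrentTimePayload(sample); // throws on shape mismatch
console.log("payload shape ok");
```

Type-checking each field this way keeps the tests stable across formatting changes in the timestamp strings, which is the point of the review suggestion.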
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4867e856-e4d3-49d5-b778-919933e0cffe
📒 Files selected for processing (45)
- app/package.json
- app/src-tauri/src/lib.rs
- app/src-tauri/tauri.conf.json
- app/src-tauri/vendor/tauri-cef
- app/src/pages/Conversations.tsx
- app/src/providers/ChatRuntimeProvider.tsx
- app/src/services/api/threadApi.test.ts
- app/src/services/api/threadApi.ts
- app/src/store/chatRuntimeSlice.ts
- app/src/store/threadSlice.ts
- app/src/utils/__tests__/agentMessageBubbles.test.ts
- app/src/utils/__tests__/toolTimelineFormatting.test.ts
- app/src/utils/agentMessageBubbles.ts
- app/src/utils/toolTimelineFormatting.ts
- docs/agent-subagent-tool-flow.md
- docs/install.md
- scripts/ensure-tauri-cli.sh
- scripts/release/sign-and-notarize-macos.sh
- src/openhuman/agent/agents/integrations_agent/prompt.md
- src/openhuman/agent/agents/orchestrator/agent.toml
- src/openhuman/agent/agents/orchestrator/prompt.md
- src/openhuman/agent/harness/subagent_runner/extract_tool.rs
- src/openhuman/agent/harness/subagent_runner/handoff.rs
- src/openhuman/agent/harness/subagent_runner/mod.rs
- src/openhuman/agent/harness/subagent_runner/ops.rs
- src/openhuman/agent/harness/subagent_runner/tool_prep.rs
- src/openhuman/agent/harness/subagent_runner/types.rs
- src/openhuman/composio/ops.rs
- src/openhuman/composio/providers/gmail/mod.rs
- src/openhuman/composio/providers/gmail/post_process.rs
- src/openhuman/composio/providers/gmail/provider.rs
- src/openhuman/composio/providers/mod.rs
- src/openhuman/composio/providers/post_process.rs
- src/openhuman/composio/providers/traits.rs
- src/openhuman/composio/tools.rs
- src/openhuman/config/schema/context.rs
- src/openhuman/context/tool_result_budget.rs
- src/openhuman/memory/conversations/mod.rs
- src/openhuman/memory/conversations/store.rs
- src/openhuman/memory/rpc_models.rs
- src/openhuman/threads/ops.rs
- src/openhuman/threads/schemas.rs
- src/openhuman/tools/impl/system/current_time.rs
- src/openhuman/tools/impl/system/mod.rs
- src/openhuman/tools/ops.rs
💤 Files with no reviewable changes (3)
- src/openhuman/composio/providers/mod.rs
- app/src-tauri/tauri.conf.json
- src/openhuman/composio/providers/post_process.rs
```ts
void (async () => {
  if (!event.segment_total) {
    await dispatch(
      addInferenceResponse({ content: event.full_response, threadId: event.thread_id })
    );
  }
})();
dispatch(endInferenceTurn({ threadId: event.thread_id }));
dispatch(setActiveThread(null));
```
Keep the turn active until the non-streamed final reply is appended.
For segment_total === 0, addInferenceResponse(...) now runs in a detached async block while endInferenceTurn and setActiveThread(null) fire immediately. That re-enables the composer before the final assistant message is stored, so a fast follow-up send can land ahead of the just-finished reply.
Possible fix

```diff
-void (async () => {
-  if (!event.segment_total) {
-    await dispatch(
-      addInferenceResponse({ content: event.full_response, threadId: event.thread_id })
-    );
-  }
-})();
-dispatch(endInferenceTurn({ threadId: event.thread_id }));
-dispatch(setActiveThread(null));
+if (!event.segment_total) {
+  void (async () => {
+    await dispatch(
+      addInferenceResponse({ content: event.full_response, threadId: event.thread_id })
+    );
+    dispatch(endInferenceTurn({ threadId: event.thread_id }));
+    dispatch(setActiveThread(null));
+  })();
+  return;
+}
+dispatch(endInferenceTurn({ threadId: event.thread_id }));
+dispatch(setActiveThread(null));
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (!event.segment_total) {
  void (async () => {
    await dispatch(
      addInferenceResponse({ content: event.full_response, threadId: event.thread_id })
    );
    dispatch(endInferenceTurn({ threadId: event.thread_id }));
    dispatch(setActiveThread(null));
  })();
  return;
}
dispatch(endInferenceTurn({ threadId: event.thread_id }));
dispatch(setActiveThread(null));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/providers/ChatRuntimeProvider.tsx` around lines 512 - 520, For
non-streamed final replies (event.segment_total === 0) await
dispatch(addInferenceResponse({ content: event.full_response, threadId:
event.thread_id })) before calling dispatch(endInferenceTurn({ threadId:
event.thread_id })) and dispatch(setActiveThread(null)) so the turn stays active
until the final assistant message is appended; for streamed segments keep the
existing flow. Update the code around addInferenceResponse, endInferenceTurn,
and setActiveThread to perform the awaits and move the end/setActive calls
inside the non-streamed branch guarded by event.segment_total.
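The sequencing fix the comment asks for can be simulated in isolation. The `dispatch`-style function names mirror the snippet above; the event log and timer-based "persistence" are illustrative stand-ins, not the real Redux store.

```typescript
// Minimal simulation: for a non-streamed final reply, the turn must only end
// after the assistant message has been appended.
const log: string[] = [];

const addInferenceResponse = async (content: string): Promise<void> => {
  await new Promise((r) => setTimeout(r, 10)); // simulate async persistence
  log.push(`appended:${content}`);
};
const endInferenceTurn = (): void => { log.push("turn-ended"); };
const setActiveThread = (id: string | null): void => { log.push(`active:${id}`); };

async function onChatDone(event: { segment_total: number; full_response: string }) {
  if (!event.segment_total) {
    // Await persistence before re-enabling the composer.
    await addInferenceResponse(event.full_response);
  }
  endInferenceTurn();
  setActiveThread(null);
}

onChatDone({ segment_total: 0, full_response: "final reply" }).then(() => {
  console.log(log.join(",")); // → appended:final reply,turn-ended,active:null
});
```

With the detached-IIFE version, `turn-ended` would appear before `appended:…`, which is exactly the race the review flags.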
Actionable comments posted: 2
🧹 Nitpick comments (2)
app/src/hooks/useUsageState.test.ts (1)
189-190: Prefer behavior assertions over strict fetch call-count internals.

Lines 189-190 couple this test to internal refetch mechanics. The state transition assertion (remainingUsd 9 → 7) already validates behavior and is more resilient.

As per coding guidelines: "Prefer testing behavior over implementation details in Vitest unit tests."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/hooks/useUsageState.test.ts` around lines 189 - 190, The test assertions on mock fetch counts (mockGetTeamUsage and mockGetCurrentPlan) tie the spec to refetch implementation; remove those expect(...) call-count checks from useUsageState.test.ts and instead rely on the existing behavioral assertions (the remainingUsd transition from 9 to 7) to validate correct state changes; keep mocks but delete the two lines referencing mockGetTeamUsage and mockGetCurrentPlan call counts so the test verifies behavior (via remainingUsd) rather than internal fetch mechanics.

app/src/hooks/usageRefresh.ts (1)

5-15: Add namespaced debug checkpoints for subscribe/unsubscribe/dispatch.

This new event path currently has no traceability. Add debug logs for subscription count changes and dispatch count (without payload/PII) so refresh issues are diagnosable.

As per coding guidelines: "Add substantial, development-oriented logs on new/changed flows in TypeScript/React app code; use namespaced debug logs and dev-only detail as needed" and "Use grep-friendly log prefixes ([feature], domain name, or JSON-RPC method) in app code for correlation with sidecar and Tauri output."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/hooks/usageRefresh.ts` around lines 5 - 15, The subscribeUsageRefresh/requestUsageRefresh flow lacks traceable logs; add namespaced debug checkpoints in subscribeUsageRefresh (on add and on unsubscribe) and in requestUsageRefresh (on dispatch) that emit grep-friendly prefixes like "[usage-refresh]" plus the current listeners.size and an incrementing dispatch counter (no payload/PII), using a development-only logger (e.g., debug or conditional process.env.NODE_ENV === 'development') to avoid production noise; update references to listeners, subscribeUsageRefresh, requestUsageRefresh, and UsageRefreshListener so each subscription/unsubscribe logs the new count and each requestUsageRefresh logs the dispatch count and current listeners.size.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/hooks/usageRefresh.ts`:
- Around line 12-15: The requestUsageRefresh function currently iterates
listeners and calls each directly, so a thrown error by any listener halts the
loop; update requestUsageRefresh to call each listener inside a try/catch so one
subscriber's exception doesn't stop others from running (i.e., wrap listener()
in try { listener(); } catch (err) { /* log error */ }), using an appropriate
logging mechanism (console.error or the module's logger) to record the failing
listener and the error; keep the loop and listeners symbol names intact.
In `@app/src/hooks/useUsageState.test.ts`:
- Line 184: The test calls the global refresh trigger requestUsageRefresh()
which synchronously updates hook state via setFetchCount; wrap that invocation
in React Testing Library's act(...) to avoid warnings. Edit the test in
useUsageState.test.ts to surround requestUsageRefresh() with act(() => { ... })
(importing act from `@testing-library/react` if not already) so the hook state
update from setFetchCount is executed inside an act block.
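The fault-isolated dispatch suggested for `requestUsageRefresh` can be sketched as follows. The names (`listeners`, `subscribeUsageRefresh`, `requestUsageRefresh`, `UsageRefreshListener`) follow the review comment; the implementation details are assumed, not the app's actual code.

```typescript
// Sketch: one throwing subscriber must not prevent the rest from running.
type UsageRefreshListener = () => void;
const listeners = new Set<UsageRefreshListener>();

function subscribeUsageRefresh(listener: UsageRefreshListener): () => void {
  listeners.add(listener);
  return () => { listeners.delete(listener); };
}

function requestUsageRefresh(): void {
  for (const listener of listeners) {
    try {
      listener();
    } catch (err) {
      // Record the failure without halting the remaining subscribers.
      console.error("[usage-refresh] listener failed:", err);
    }
  }
}

// Usage: the second listener throws, the third still runs.
const calls: number[] = [];
subscribeUsageRefresh(() => calls.push(1));
subscribeUsageRefresh(() => { throw new Error("boom"); });
subscribeUsageRefresh(() => calls.push(3));
requestUsageRefresh();
console.log(calls); // → [ 1, 3 ]
```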
---
Nitpick comments:
In `@app/src/hooks/usageRefresh.ts`:
- Around line 5-15: The subscribeUsageRefresh/requestUsageRefresh flow lacks
traceable logs; add namespaced debug checkpoints in subscribeUsageRefresh (on
add and on unsubscribe) and in requestUsageRefresh (on dispatch) that emit
grep-friendly prefixes like "[usage-refresh]" plus the current listeners.size
and an incrementing dispatch counter (no payload/PII), using a development-only
logger (e.g., debug or conditional process.env.NODE_ENV === 'development') to
avoid production noise; update references to listeners, subscribeUsageRefresh,
requestUsageRefresh, and UsageRefreshListener so each subscription/unsubscribe
logs the new count and each requestUsageRefresh logs the dispatch count and
current listeners.size.
In `@app/src/hooks/useUsageState.test.ts`:
- Around line 189-190: The test assertions on mock fetch counts
(mockGetTeamUsage and mockGetCurrentPlan) tie the spec to refetch
implementation; remove those expect(...) call-count checks from
useUsageState.test.ts and instead rely on the existing behavioral assertions
(the remainingUsd transition from 9 to 7) to validate correct state changes;
keep mocks but delete the two lines referencing mockGetTeamUsage and
mockGetCurrentPlan call counts so the test verifies behavior (via remainingUsd)
rather than internal fetch mechanics.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 09149535-7a9f-4ce6-8dfe-6b26cad07959
📒 Files selected for processing (4)
- app/src/hooks/usageRefresh.ts
- app/src/hooks/useUsageState.test.ts
- app/src/hooks/useUsageState.ts
- app/src/providers/ChatRuntimeProvider.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- app/src/providers/ChatRuntimeProvider.tsx
…le generation logic
- Removed the unused `generateThreadTitleIfNeeded` function call from the `Conversations` component, simplifying the message dispatch logic.
- Enhanced error handling during message dispatch by using `unwrap()` to catch and log errors, providing clearer feedback on send failures.
- Updated the `ChatRuntimeProvider` to ensure proper error logging when generating thread titles, improving observability of issues related to title generation.
These changes streamline the conversation handling process and improve the robustness of error management in the chat system.
…reamline test assertions
- Reformatted the `redact_title_for_log` function for better readability by adjusting the formatting of the output string.
- Simplified the assertion in the timezone test to enhance clarity and maintainability.
These changes contribute to cleaner code and improved test structure in the conversations module.
Actionable comments posted: 5
🧹 Nitpick comments (2)
src/openhuman/agent/agents/integrations_agent/prompt.md (1)
28-30: Consider making the extraction trigger explicit in this section.

To reduce ambiguity, add one sentence here that if a tool returns a `result_id` placeholder, the next step is `extract_from_result({ result_id, query })` with a narrow query.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/agents/integrations_agent/prompt.md` around lines 28 - 30, The "Handling large tool results" section is ambiguous about how to proceed when a tool returns a placeholder; update the text to explicitly state that when a tool returns a result_id placeholder the agent must call extract_from_result({ result_id, query }) next, and instruct to use a narrowly scoped query that targets only the caller's requested information; reference the section title "Handling large tool results" and the symbols result_id and extract_from_result({ result_id, query }) so readers can find and implement the behavior unambiguously.

src/openhuman/agent/agents/orchestrator/prompt.md (1)

74-74: Consider adding language specifiers to fenced code blocks.

The example conversation snippets use fenced code blocks without language identifiers, triggering markdownlint warnings. Adding `text` as the language (e.g., ```text) would satisfy the linter while maintaining readability.

📝 Proposed formatting fix

    -```
    +```text
     got it
     reminder set for 7:42pm

Apply the same change to the code blocks at lines 82 and 90.

Also applies to: 82-82, 90-90

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/agents/orchestrator/prompt.md` at line 74, The markdown example conversation fenced code blocks in prompt.md are missing language specifiers and trigger markdownlint warnings; update each of the example conversation code fences (the three short conversation blocks currently using just ```) by changing them to use a language tag of text (e.g., replace ``` with ```text) for the blocks referenced around the conversation snippets (the blocks at the three shown positions) so all three fenced blocks include the `text` specifier.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/pages/Conversations.tsx`:
- Around line 967-971: The UI uses the unfiltered messages length for render
gates causing threads with only hidden messages to appear as "messages exist"
but render no bubbles; compute a hasVisibleMessages boolean from visibleMessages
(e.g., const hasVisibleMessages = visibleMessages.length > 0) and replace any
render-branch checks that currently use messages.length with hasVisibleMessages
so the empty-state ("No messages yet" / suggested-questions) and other branches
rely on the filtered list; ensure latestVisibleMessage and
latestVisibleAgentMessage continue to be derived from visibleMessages.
In `@app/src/providers/ChatRuntimeProvider.tsx`:
- Around line 515-532: The current flow calls generateThreadTitleIfNeeded even
if addInferenceResponse(...).unwrap() failed, which can rename a thread from a
reply that was never persisted; modify the logic so title generation only runs
after a successful append — e.g., move the
dispatch(generateThreadTitleIfNeeded({...})) into the try block immediately
after the awaited addInferenceResponse(...) or set a success flag and return
early from the catch (while still calling rtLog on errors) so
generateThreadTitleIfNeeded is not invoked when addInferenceResponse fails;
reference addInferenceResponse, generateThreadTitleIfNeeded, and rtLog to locate
the change.
In `@app/src/store/threadSlice.ts`:
- Around line 151-179: The thunk generateThreadTitleIfNeeded currently returns
the updated Thread but when loadThreads().unwrap() fails the local state isn't
updated; add a local update so the UI reflects the new title: either (A) in the
threads slice extraReducers handle generateThreadTitleIfNeeded.fulfilled and
find/replace the thread in state.thread.threads by matching thread.id (update
its title/metadata), or (B) after thread = await
threadApi.generateTitleIfNeeded(...) and before swallowing the loadThreads
error, dispatch a local updater (e.g. an existing upsert/updateThread action or
create one) to patch the matching thread in state using the returned Thread;
reference generateThreadTitleIfNeeded, loadThreads, the threads array in the
slice, and the Thread id to locate and apply the change.
In `@src/openhuman/agent/agents/integrations_agent/prompt.md`:
- Line 9: The documentation for the runtime tool extract_from_result incorrectly
states its inputs as tool_name and raw content; update the prompt text in
prompt.md to state the correct argument contract: the tool expects a result_id
and a narrow query (result_id identifies the prior oversized result to inspect,
query selects the slice). Replace any example or descriptive text referencing
tool_name or content with result_id + query and ensure the wording makes clear
to the model to produce tool calls using result_id and query only.
In `@src/openhuman/threads/ops.rs`:
- Around line 107-109: The current is_auto_generated_thread_title(title: &str)
uses a loose starts_with("Chat ") and will misidentify real titles; update it to
match the exact placeholder format produced by thread_create_new (use the same
string construction/regex used there) or, preferably, add and persist an
explicit boolean placeholder flag on the Thread (or whatever struct is created
in thread_create_new) and use that flag instead of inferring from the title;
ensure the change touches is_auto_generated_thread_title and thread_create_new
so they share the identical detection/assignment logic (or switch callers to
check the new placeholder flag).
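The loose-prefix pitfall called out above can be illustrated in TypeScript. The `"Chat <date> <time>"` placeholder format used here is an assumption for the sketch; the real format lives in thread_create_new, and a persisted boolean flag would be more robust than any string check.

```typescript
// Loose check flagged by the review: any title starting with "Chat " matches.
const looseIsAutoTitle = (title: string): boolean => title.startsWith("Chat ");

// Stricter check: match the full assumed placeholder shape, e.g. "Chat 2024-01-01 09:30".
const PLACEHOLDER_RE = /^Chat \d{4}-\d{2}-\d{2} \d{2}:\d{2}$/;
const strictIsAutoTitle = (title: string): boolean => PLACEHOLDER_RE.test(title);

// A real, user-written title that happens to start with "Chat ":
const userTitle = "Chat about quarterly planning";
console.log(looseIsAutoTitle(userTitle));  // → true (false positive)
console.log(strictIsAutoTitle(userTitle)); // → false
console.log(strictIsAutoTitle("Chat 2024-01-01 09:30")); // → true
```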
---
Nitpick comments:
In `@src/openhuman/agent/agents/integrations_agent/prompt.md`:
- Around line 28-30: The "Handling large tool results" section is ambiguous
about how to proceed when a tool returns a placeholder; update the text to
explicitly state that when a tool returns a result_id placeholder the agent must
call extract_from_result({ result_id, query }) next, and instruct to use a
narrowly scoped query that targets only the caller's requested information;
reference the section title "Handling large tool results" and the symbols
result_id and extract_from_result({ result_id, query }) so readers can find and
implement the behavior unambiguously.
In `@src/openhuman/agent/agents/orchestrator/prompt.md`:
- Line 74: The markdown example conversation fenced code blocks in prompt.md are
missing language specifiers and trigger markdownlint warnings; update each of
the example conversation code fences (the three short conversation blocks
currently using just ```) by changing them to use a language tag of text (e.g.,
replace ``` with ```text) for the blocks referenced around the conversation
snippets (the blocks at the three shown positions) so all three fenced blocks
include the `text` specifier.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6d6fd2e5-281f-4c70-9416-a8bbd3c01678
📥 Commits
Reviewing files that changed from the base of the PR and between 37140cf1f51f426de327697bd6b7d957027a8ad4 and d0a024ea5fcaab67b8ddb4f8cbced0c124aace85.
📒 Files selected for processing (17)
- app/src/pages/Conversations.tsx
- app/src/providers/ChatRuntimeProvider.tsx
- app/src/services/api/threadApi.ts
- app/src/store/threadSlice.ts
- app/src/utils/__tests__/agentMessageBubbles.test.ts
- scripts/release/sign-and-notarize-macos.sh
- src/openhuman/agent/agents/integrations_agent/prompt.md
- src/openhuman/agent/agents/orchestrator/prompt.md
- src/openhuman/agent/harness/subagent_runner/extract_tool.rs
- src/openhuman/composio/ops.rs
- src/openhuman/composio/tools.rs
- src/openhuman/config/schema/context.rs
- src/openhuman/context/tool_result_budget.rs
- src/openhuman/memory/conversations/store.rs
- src/openhuman/threads/ops.rs
- src/openhuman/tools/impl/system/current_time.rs
- src/openhuman/tools/ops.rs
✅ Files skipped from review due to trivial changes (1)
- app/src/utils/__tests__/agentMessageBubbles.test.ts
🚧 Files skipped from review as they are similar to previous changes (9)
- src/openhuman/tools/ops.rs
- src/openhuman/config/schema/context.rs
- src/openhuman/composio/tools.rs
- src/openhuman/context/tool_result_budget.rs
- app/src/services/api/threadApi.ts
- src/openhuman/memory/conversations/store.rs
- src/openhuman/composio/ops.rs
- src/openhuman/agent/harness/subagent_runner/extract_tool.rs
- src/openhuman/tools/impl/system/current_time.rs
…ne summary
- Modified the `format_inline` function to sort the fact parts before generating the summary string. This change enhances the readability and consistency of the output by ensuring that facts are presented in a stable order.
These changes contribute to better formatted summaries in the token juice module.
- Updated error handling in prompt loading to use `std::io::Error::other` for better clarity.
- Simplified conditional checks using `is_none_or` and `is_some_and` for improved readability.
- Refactored string trimming logic to utilize array syntax for better clarity.
- Enhanced default implementations for several structs to reduce boilerplate code.
These changes contribute to cleaner code and improved maintainability across various modules.
- Adjusted formatting in `post_process.rs` for better readability by aligning the conditional block.
- Combined derive attributes in `tools.rs` for the `ComputerControlConfig` struct to reduce redundancy.
- Streamlined entry retrieval in `compatible.rs` by condensing multiple lines into a single line for clarity.
These changes enhance code readability and maintainability across the affected modules.
Summary
`current_time` tool for agents
Problem
Solution
`current_time` tool.
Submission Checklist
- (`app/`) and/or `cargo test` (core) for logic you add or change
- (`app/test/e2e`, mock backend, `tests/json_rpc_e2e.rs` as appropriate)
- `///` / `//!` (Rust), JSDoc or brief file/module headers (TS) on public APIs and non-obvious modules
Impact
`cargo check` for `app/src-tauri`.
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests