fix: keep chat processing alive across tab switches #587
M3gA-Mind merged 11 commits into tinyhumansai:main
Conversation
Made-with: Cursor
📝 Walkthrough

This PR centralizes realtime chat event handling into a new ChatRuntimeProvider and Redux chatRuntime slice, migrates runtime UI state from local components into Redux, adjusts App composition to wrap routes with the provider, updates related components, adds tests, and introduces server-side thinking-summary streaming.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as Chat UI / Conversations
    participant CRP as ChatRuntimeProvider
    participant Socket as Socket Server
    participant Redux as Redux Store
    Socket->>CRP: emit events (text_delta, thinking_delta, tool_call, tool_result, done, error)
    CRP->>CRP: dedupe event (TTL cache) & update internal refs
    CRP->>Redux: dispatch per-thread actions (beginInferenceTurn, setStreamingAssistantForThread, setToolTimelineForThread, markInferenceTurnStreaming, endInferenceTurn, clearRuntimeForThread)
    Redux->>UI: state.chatRuntime[threadId] updates (streaming text, thinking summary, tool timeline, lifecycle)
    UI->>UI: render streaming/thinking/tool UI based on chatRuntime slice
```
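The dedupe step in the diagram can be illustrated with a minimal TTL cache. `markChatEventSeen` is the name used elsewhere in this review; the Map-based storage and the 30-second TTL here are assumptions, not the provider's actual implementation:

```typescript
// Minimal sketch of a TTL dedupe cache, as assumed for markChatEventSeen.
// Returns true the first time a key is seen within the TTL window,
// false for duplicates arriving inside that window.
const seenEvents = new Map<string, number>();
const DEDUPE_TTL_MS = 30_000; // assumed TTL

function markChatEventSeen(key: string, now: number = Date.now()): boolean {
  // Evict expired entries so the map does not grow without bound.
  for (const [k, expiresAt] of seenEvents) {
    if (expiresAt <= now) seenEvents.delete(k);
  }
  const expiresAt = seenEvents.get(key);
  if (expiresAt !== undefined && expiresAt > now) {
    return false; // duplicate within TTL → drop the event
  }
  seenEvents.set(key, now + DEDUPE_TTL_MS);
  return true;
}
```

Handlers would call this first and bail out when it returns false, which is what makes replayed socket events safe across tab switches.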
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks | ✅ Passed checks (5 passed)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
app/src/components/intelligence/MemoryGraphMap.tsx (1)
199-207: ⚠️ Potential issue | 🟡 Minor

Prevent stale `selectedNode` from dimming the entire graph after data changes.

When `relations` updates and the previously selected node no longer exists, `selectedNode` stays set, `connectedIds` becomes empty, and all nodes render dimmed. Derive an active selection from current nodes before the highlight logic. Based on learnings: "In React components, do not perform synchronous `setState` directly inside `useEffect`... prefer deriving state from props/render."

💡 Suggested fix
```diff
- const connectedIds = selectedNode
+ const activeSelectedNode =
+   selectedNode && nodeMap.has(selectedNode) ? selectedNode : null;
+ const connectedIds = activeSelectedNode
    ? new Set(
        edges
-         .filter(e => e.source === selectedNode || e.target === selectedNode)
+         .filter(e => e.source === activeSelectedNode || e.target === activeSelectedNode)
          .flatMap(e => [e.source, e.target])
      )
    : null;
  ...
- const isHighlighted =
-   selectedNode === null || edge.source === selectedNode || edge.target === selectedNode;
+ const isHighlighted =
+   activeSelectedNode === null ||
+   edge.source === activeSelectedNode ||
+   edge.target === activeSelectedNode;
  ...
- const isSelected = selectedNode === node.id;
- const isDimmed = selectedNode !== null && !connectedIds?.has(node.id);
+ const isSelected = activeSelectedNode === node.id;
+ const isDimmed = activeSelectedNode !== null && !connectedIds?.has(node.id);
```

Also applies to: 261-263, 306-307
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/intelligence/MemoryGraphMap.tsx` around lines 199 - 207, selectedNode can become stale when relations/nodes change — instead of clearing it via state or letting it produce an empty connectedIds (which dims everything), derive an "active" selection from the current node list: compute a Set of current node ids (from nodes/relations) and only compute connectedIds when selectedNode exists in that set; otherwise treat connectedIds as null. Update the connectedIds calculation (the block referencing selectedNode, edges, and connectedIds) to check membership in the current nodeId set before filtering edges, and apply the same membership-guard pattern to the other similar blocks mentioned (around the symbols where connectedIds is recomputed at lines ~261-263 and ~306-307); do not call setState synchronously in useEffect — derive from props/render instead.

app/src/components/LocalAIDownloadSnackbar.tsx (1)
75-80: ⚠️ Potential issue | 🟠 Major

Don't make dismissal permanent for the whole app session.

Lines 78-80 no longer clear `dismissed`, so after one close every later local-AI download stays hidden until reload. That drops the only progress UI for subsequent installs. Please key dismissal to a specific download cycle instead of a process-wide boolean.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/LocalAIDownloadSnackbar.tsx` around lines 75 - 80, The current dismissal state (dismissed) is treated as app-session-wide, so after one close all later downloads stay hidden; update LocalAIDownloadSnackbar to reset dismissed on each new download cycle by detecting the transition edge (not-downloading → downloading) using wasDownloadingRef and isDownloading: when isDownloading becomes true and wasDownloadingRef.current is false, call the setter to clear dismissed for that cycle, then update wasDownloadingRef.current to true; also ensure when downloads finish you set wasDownloadingRef.current back to false so future transitions are detected.
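The per-cycle reset described here boils down to edge detection on the downloading flag. A sketch as a pure helper, under the assumption that the component drives it from `isDownloading` and a previous-value ref such as `wasDownloadingRef` (the helper and state names are hypothetical):

```typescript
// Hypothetical helper: given the previous cycle state and the current
// downloading flag, compute the next (dismissed, wasDownloading) pair.
interface SnackbarCycle {
  dismissed: boolean;
  wasDownloading: boolean;
}

function nextSnackbarCycle(prev: SnackbarCycle, isDownloading: boolean): SnackbarCycle {
  // not-downloading → downloading edge: a new download cycle starts,
  // so any earlier dismissal no longer applies.
  if (isDownloading && !prev.wasDownloading) {
    return { dismissed: false, wasDownloading: true };
  }
  // downloading → not-downloading edge: remember we are idle again so
  // the next download is detected as a fresh cycle.
  if (!isDownloading && prev.wasDownloading) {
    return { ...prev, wasDownloading: false };
  }
  return prev; // no edge: keep state (dismissal persists within a cycle)
}
```

Keeping the transition logic pure like this also makes it trivial to unit-test, independent of how the component wires it into refs or render-phase updates.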
🧹 Nitpick comments (1)
app/src/components/intelligence/MemoryGraphMap.tsx (1)
167-183: Add namespaced debug checkpoints for this new graph recomputation path.

This changed flow (`relations` → build/simulate/palette) has no trace logs. Please add dev-oriented, namespaced debug checkpoints (input size, capped sizes, simulation completion/error path) to make regressions easier to diagnose. As per coding guidelines: "Add substantial, development-oriented logs on new/changed flows in TypeScript/React app code; use namespaced debug logs and dev-only detail as needed."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/intelligence/MemoryGraphMap.tsx` around lines 167 - 183, Add dev-only, namespaced debug checkpoints around the new graph recompute flow in the useMemo block that handles relations → buildGraph → runSimulation: log the incoming relations length, the rawNodes/rawEdges sizes returned by buildGraph, any caps applied (e.g., if rawNodes.length or rawEdges.length exceeds NAMESPACE_COLORS or other thresholds), and log simulation start/completion or caught errors from runSimulation. Use a consistent namespace like "MemoryGraphMap:graph" or "MemoryGraphMap:simulate" and guard logs behind a development check (process.env.NODE_ENV !== 'production' or existing __DEV__ flag) so debug output shows only in dev builds; reference symbols: relations, buildGraph, rawNodes, rawEdges, NAMESPACE_COLORS, runSimulation, namespacePalette when composing messages. Ensure errors from runSimulation are caught and logged with the same namespace before rethrowing or falling back to empty nodes/edges.
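A minimal sketch of such a namespaced, dev-gated debug helper. The namespace string matches the comment's suggestion; the helper itself is an assumption, not existing app code:

```typescript
// Dev-only, namespaced debug logging helper (sketch).
// Logging is gated so output appears only outside production builds.
const isDev = (globalThis as any).process?.env?.NODE_ENV !== "production";

// Formatting is split out so the message shape is testable on its own.
function formatDebug(namespace: string, msg: string): string {
  return `[${namespace}] ${msg}`;
}

function makeDebug(namespace: string): (msg: string, detail?: unknown) => string {
  return (msg, detail) => {
    const line = formatDebug(namespace, msg);
    if (isDev) {
      console.debug(line, detail ?? "");
    }
    return line;
  };
}

// Usage in the recompute flow (identifiers taken from the review comment):
const debugGraph = makeDebug("MemoryGraphMap:graph");
debugGraph("simulation start");
```

Checkpoints like `debugGraph(`relations=${relations.length}`)` at build, cap, and simulation-complete/error points would cover what the comment asks for.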
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/components/OnboardingOverlay.tsx`:
- Line 32: The boolean userLoadTimedOut can be stale across sessions and makes
userReady true for a new token; replace the bare boolean with a value keyed to
the current token (e.g., store the timed-out token in a ref like
timedOutTokenRef or a map keyed by token) and compute userReady as !!user?._id
|| (token && timedOutTokenRef.current === token). When starting the timeout in
the existing useEffect, set/clear the ref (or map entry) for the specific token
and clear the timeout on token change/unmount; avoid doing synchronous setState
inside the effect by deriving readiness from props/refs in render or updating
state only via the timeout callback so you no longer reuse a stale
userLoadTimedOut between sessions (references: userReady, userLoadTimedOut,
token, currentUser, onboardingCompleted, useEffect).
In `@app/src/providers/ChatRuntimeProvider.tsx`:
- Around line 80-384: Add namespaced dev-only debug logs ("[chat-runtime]")
around the runtime pipeline: log when subscribeChatEvents is called and when its
cleanup/unsubscribe runs; log when markChatEventSeen returns false (dedupe
drops) including eventKey, thread_id and request_id; log per-thread inference
status transitions inside handlers that call setInferenceStatusForThread
(reference handlers onInferenceStart, onIterationStart, onToolCall,
onSubagentSpawned, onSubagentDone, onToolResult) including old vs new
phase/iteration/request_id; and log finalization in onDone and onError
(including event.error_type and thread_id/request_id). Make these logs gated to
development builds (NODE_ENV !== 'production') and use small, grep-friendly
messages like “[chat-runtime] thread=<id> request=<id> action=<...>”.
- Around line 228-239: The onSubagentDone handler is matching running subagent
timeline entries by tool_name only, which can close the wrong row when the same
tool is spawned multiple times; update the matching logic in onSubagentDone
(used with toolTimelineRef, dispatch and setToolTimelineForThread) to include
the spawned instance id (event.spawned_id) when comparing entry.name (e.g. match
`subagent:${event.tool_name}:${event.spawned_id}`) so you only flip the specific
running entry for that spawned subagent to 'success' or 'error'.
- Around line 258-324: Handlers onTextDelta/onThinkingDelta/onToolArgsDelta read
stale values from streamingAssistantRef.current and toolTimelineRef.current
which can cause lost deltas; fetch the latest state directly from the Redux
store inside each handler (e.g. const state = store.getState().chatRuntime and
read the appropriate streaming assistant map and tool timeline array for
event.thread_id) then compute the new StreamingAssistantState or
ToolTimelineEntry[] and dispatch setStreamingAssistantForThread /
setToolTimelineForThread; alternatively, if you prefer refs, synchronously
update streamingAssistantRef.current[event.thread_id] or
toolTimelineRef.current[event.thread_id] with the newly computed value before
dispatch to avoid race conditions.
In `@app/src/store/chatRuntimeSlice.ts`:
- Around line 27-31: The slice currently only models preview buckets and uses
thread.activeThreadId to proxy "sending", which breaks during streaming; update
ChatRuntimeState by adding a per-thread lifecycle map (e.g.,
inferenceStatusByThread: Record<string, 'started'|'done'|'error'|'cancelled'> or
an enum) and implement actions/reducers to set lifecycle states (startInference,
finishInference, failInference, cancelInference) that are invoked from
ChatRuntimeProvider and where addInferenceResponse/streaming assistant events
occur; ensure threadSlice no longer drives the UI enabling/disabling by
activeThreadId but reads the per-thread lifecycle in inferenceStatusByThread to
determine composer state so Cancel remains visible while streaming.
In `@app/test/e2e/specs/conversations-web-channel-flow.spec.ts`:
- Around line 167-210: Test reuses prior conversation so it can pass without a
new response; fix by starting from a fresh thread (or capturing a pre-send
assistant-message count) and asserting the assistant-bubble count increases
after the send. In the "continues in-flight chat when switching tabs" test,
before typing, either click the app's "new conversation" control or call the
existing helper that resets/creates a fresh conversation (e.g., invoke whatever
UI action creates a new thread via navigateToConversations() + new thread
button), then query the DOM for assistant message elements (e.g.,
document.querySelectorAll of the assistant-bubble selector used elsewhere) to
get an initialCount, perform the send and tab-switch logic as written, wait for
the POST via waitForRequest('/openai/v1/chat/completions'), then re-query the
assistant-bubble elements and assert newCount === initialCount + 1 (and keep the
existing waitForText/assertions for error text). Use the test helpers
waitForText, waitForRequest and textExists in the same test to ensure the new
message arrived for this specific send.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 302a91ae-45bc-485b-bbac-9d4330256aab
📒 Files selected for processing (11)
app/src/App.tsx
app/src/components/LocalAIDownloadSnackbar.tsx
app/src/components/OnboardingOverlay.tsx
app/src/components/channels/DiscordServerChannelPicker.tsx
app/src/components/intelligence/MemoryGraphMap.tsx
app/src/pages/Conversations.tsx
app/src/providers/ChatRuntimeProvider.tsx
app/src/store/__tests__/chatRuntimeSlice.test.ts
app/src/store/chatRuntimeSlice.ts
app/src/store/index.ts
app/test/e2e/specs/conversations-web-channel-flow.spec.ts
💤 Files with no reviewable changes (1)
- app/src/components/channels/DiscordServerChannelPicker.tsx
Made-with: Cursor
Added a new mechanism to accumulate and send model reasoning text as a separate message during chat interactions. This includes handling "thinking_delta" events to gather reasoning content, formatting it for clarity, and ensuring it is sent before the main response. Updated the StreamingState struct to include a thinking accumulator for this purpose. This enhancement improves user experience by providing insight into the model's reasoning process.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
app/src/pages/Conversations.tsx (1)
383-403: ⚠️ Potential issue | 🟠 Major

Global send lock and local disabled state are out of sync.

`handleSendMessage()` and `handleVoiceRecordToggle()` block on any `activeThreadId`, but `isSending` now only follows the selected thread. If thread A is running and the user switches to thread B, the composer looks enabled while every send/voice action becomes a silent no-op.

Suggested direction

```diff
+ const composerBlocked = Boolean(activeThreadId);
  const isSending = Boolean(
    selectedThreadId &&
      (inferenceTurnLifecycleByThread[selectedThreadId] === 'started' ||
        inferenceTurnLifecycleByThread[selectedThreadId] === 'streaming')
  );
```

Use `composerBlocked` for input / send / suggested-question / voice-control disabling, and keep `isSending` for selected-thread activity UI.

Also applies to: 509-509, 681-685
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/Conversations.tsx` around lines 383 - 403, The UI disabling logic is out of sync: replace uses of isSending for disabling composer controls with the global composerBlocked flag and ensure the send/voice handlers check composerBlocked (not activeThreadId or isSending tied to the selected thread). Update handleSendMessage and handleVoiceRecordToggle to early-return when composerBlocked is true, and update the composer input / send button / suggested-question / voice-control render logic to reference composerBlocked for disabled state while keeping isSending only for showing per-selected-thread sending activity; touch usages around activeThreadId, selectedThreadId, isSending, and composerBlocked to align behavior.

app/src/store/threadSlice.ts (1)
340-343: ⚠️ Potential issue | 🟠 Major

Keep the active-thread guard until the turn actually ends.

`addInferenceResponse` runs for intermediate streamed segments too. If one append fails here, line 342 clears `activeThreadId` even though `ChatRuntimeProvider` still treats the turn as in-flight until `chat_done`/`chat_error`, so a second send can slip in mid-turn.

Suggested fix

```diff
  .addCase(addInferenceResponse.rejected, (state, action) => {
    state.sendError = action.payload as string;
-   state.activeThreadId = null;
  })
```
Verify each finding against the current code and only fix it if needed. In `@app/src/store/threadSlice.ts` around lines 340 - 343, The rejected handler for addInferenceResponse currently clears state.activeThreadId (setting it to null) which prematurely ends the turn; change the addInferenceResponse.rejected case to only set state.sendError (keep state.activeThreadId untouched) and move any clearing of activeThreadId to the code-paths that actually mark the turn finished (e.g., the addInferenceResponse.fulfilled handler when the segment indicates final/completed, or the ChatRuntimeProvider handlers that process chat_done / chat_error); update references in threadSlice (addInferenceResponse.rejected/fulfilled) and ensure ChatRuntimeProvider’s chat_done/chat_error handlers are responsible for clearing state.activeThreadId.
♻️ Duplicate comments (1)
app/src/providers/ChatRuntimeProvider.tsx (1)
123-239: ⚠️ Potential issue | 🟠 Major

These merges can still overwrite back-to-back socket events.

`toolTimelineRef.current` and `inferenceStatusRef.current` only refresh after React commits. If multiple `tool_call`, `tool_result`, or `subagent_*` events land in the same render window, these handlers can all derive from the same stale snapshot and the later dispatch drops the earlier update. The delta handlers already switched to `store.getState()`; the same fix is still needed here.

Suggested direction

```diff
- const existing = toolTimelineRef.current[event.thread_id] ?? [];
+ const existing =
+   store.getState().chatRuntime.toolTimelineByThread[event.thread_id] ?? [];
```

Apply the same pattern anywhere this provider computes next state from `toolTimelineRef.current` or `inferenceStatusRef.current`.

Also applies to: 240-292, 390-427
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/providers/ChatRuntimeProvider.tsx` around lines 123 - 239, Handlers onToolCall and onToolResult (and any other handlers that compute next state from toolTimelineRef.current or inferenceStatusRef.current) are using potentially stale ref snapshots and can lose back-to-back socket events; change them to read the latest store state (via store.getState()) when deriving "existing"/"current" before computing the next entries/status and before dispatching setToolTimelineForThread or setInferenceStatusForThread. Specifically, in onToolCall and onToolResult replace uses of toolTimelineRef.current[event.thread_id] and inferenceStatusRef.current[event.thread_id] with the corresponding live values from store.getState().chat (or the slice that holds threads/tool timeline/inference status) so each handler merges against the latest store state before calling dispatch; apply the same pattern to any other handlers that compute state from those refs (e.g., subagent_* handlers).
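The "read latest state, then merge, then dispatch" pattern the comment asks for can be sketched with a minimal stand-in store. The store shape and handler flow here are illustrative assumptions, not the real provider code:

```typescript
// Sketch: derive "existing" from the live store instead of a ref snapshot,
// so back-to-back events in the same render window never clobber each other.
interface ToolTimelineEntry {
  name: string;
  status: "running" | "success" | "error";
}

interface RuntimeState {
  toolTimelineByThread: Record<string, ToolTimelineEntry[]>;
}

// Minimal stand-in for the Redux store's getState/dispatch pair.
function createMiniStore(initial: RuntimeState) {
  let state = initial;
  return {
    getState: () => state,
    dispatch: (updater: (s: RuntimeState) => RuntimeState) => {
      state = updater(state);
    },
  };
}

const store = createMiniStore({ toolTimelineByThread: {} });

function onToolCall(threadId: string, toolName: string): void {
  // Always merge against the latest committed state, never a cached ref.
  const existing = store.getState().toolTimelineByThread[threadId] ?? [];
  const next = [...existing, { name: toolName, status: "running" as const }];
  store.dispatch(s => ({
    ...s,
    toolTimelineByThread: { ...s.toolTimelineByThread, [threadId]: next },
  }));
}
```

Because each handler re-reads state synchronously before computing the merge, two events arriving in the same tick both see the other's contribution.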
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/pages/Conversations.tsx`:
- Around line 13-16: The timeout handler only clears activeThreadId but leaves
per-thread UI state (inferenceStatusByThread, streamingAssistantByThread, tool
timeline) causing stale “Thinking…” UI; update the timeout/cleanup code to fully
end the inference lifecycle by dispatching endInferenceTurn (or
beginInferenceTurn/end pairs as appropriate) for the timed-out thread and also
clear the per-thread entries: remove/clear inferenceStatusByThread and
streamingAssistantByThread for that thread and call
setToolTimelineForThread(threadId, []) to reset the tool timeline; apply the
same changes in both places noted (the import area referencing
beginInferenceTurn/endInferenceTurn/setToolTimelineForThread and the timeout
handling around lines 421–430) so the UI is fully reset after timeout.
In `@src/openhuman/channels/bus.rs`:
- Around line 611-618: The code slices trimmed by byte index
(&trimmed[..MAX_THINKING_CHARS]) which can cut a multi-byte UTF-8 char and
panic; update the logic around MAX_THINKING_CHARS and the temporary slice to
first ensure the byte index is a char boundary (use str::is_char_boundary) or
find the previous valid boundary via trimmed.char_indices() (or loop
decrementing the index until trimmed.is_char_boundary(index)), then compute
boundary with rfind on the valid slice and format the truncated body (keep
symbols: MAX_THINKING_CHARS, trimmed, slice, boundary, body) so no slicing
occurs at a mid-character byte offset.
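The boundary-safe truncation the bus.rs comment describes can be sketched as follows. `MAX_THINKING_CHARS` is named in the comment; the helper name and the limit value used here are assumptions:

```rust
// Sketch: truncate a &str at a byte budget without ever slicing inside a
// multi-byte UTF-8 character (which would panic). Assumed limit value.
const MAX_THINKING_CHARS: usize = 2000;

fn truncate_on_char_boundary(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    // Walk back until the index lands on a char boundary; index 0 is
    // always a boundary, so this loop cannot underflow.
    let mut idx = max_bytes;
    while !s.is_char_boundary(idx) {
        idx -= 1;
    }
    &s[..idx]
}

// Usage in the summary path would be roughly:
//   let body = truncate_on_char_boundary(trimmed, MAX_THINKING_CHARS);
```

For example, with `"héllo"` a naive `&s[..2]` panics because `'é'` spans bytes 1..3; the helper backs up to byte 1 and returns `"h"` instead.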
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 416d0819-7562-412d-a763-7edf807751a2
📒 Files selected for processing (8)
app/src/components/OnboardingOverlay.tsx
app/src/pages/Conversations.tsx
app/src/providers/ChatRuntimeProvider.tsx
app/src/store/__tests__/chatRuntimeSlice.test.ts
app/src/store/chatRuntimeSlice.ts
app/src/store/threadSlice.ts
app/test/e2e/specs/conversations-web-channel-flow.spec.ts
src/openhuman/channels/bus.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- app/test/e2e/specs/conversations-web-channel-flow.spec.ts
- app/src/store/__tests__/chatRuntimeSlice.test.ts
- app/src/store/chatRuntimeSlice.ts
…on environments Refactored the Telegram bot username resolution logic to differentiate between staging and production environments. Introduced constants for default usernames based on the application environment and updated the GitHub Actions workflow to set the appropriate environment variables. This change enhances the flexibility and clarity of bot username management in the application.
- bus.rs: fix UTF-8 char boundary panic in format_thinking_summary truncation
- threadSlice.ts: remove premature activeThreadId clear from addInferenceResponse.rejected
- Conversations.tsx: add composerBlocked global lock, clear tool timeline in safety timeout
- ChatRuntimeProvider.tsx: replace stale toolTimelineRef/inferenceStatusRef reads with live store.getState() calls in all event handlers
- LocalAIDownloadSnackbar.tsx: reset dismissed/collapsed on not-downloading → downloading transition edge
- MemoryGraphMap.tsx: derive activeSelectedNode to guard stale selectedNode after relations refresh; add debug logs to useMemo graph recompute

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…nloadSnackbar Replace effect-based setState with the React render-phase update pattern so the not-downloading → downloading transition resets dismissed/collapsed without triggering the react-hooks/set-state-in-effect lint warning. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Formatting changes applied by the pre-push hook during the previous commit. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ersations component Updated the Conversations component to replace the endInferenceTurn dispatch with clearRuntimeForThread. This change simplifies the handling of thread runtime state during error scenarios and timeout conditions, ensuring a cleaner state management approach.
Actionable comments posted: 2
🧹 Nitpick comments (2)
.github/workflows/release.yml (1)
267-268: Consolidate Telegram username env assignment to one source.

Lines 283-284 re-export values already defined at job scope (lines 267-268). Keeping both increases drift risk.
♻️ Suggested simplification
```diff
  - name: Configure staging app environment
    if: inputs.build_target == 'staging'
    shell: bash
    run: |
      echo "OPENHUMAN_APP_ENV=staging" >> "$GITHUB_ENV"
      echo "VITE_OPENHUMAN_APP_ENV=staging" >> "$GITHUB_ENV"
-     echo "OPENHUMAN_TELEGRAM_BOT_USERNAME=alphahumantest_bot" >> "$GITHUB_ENV"
-     echo "VITE_TELEGRAM_BOT_USERNAME=alphahumantest_bot" >> "$GITHUB_ENV"
```

Also applies to: 283-284
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release.yml around lines 267 - 268, The two Telegram username environment variables (OPENHUMAN_TELEGRAM_BOT_USERNAME and VITE_TELEGRAM_BOT_USERNAME) are being defined twice—once at the job scope and again later—causing duplication and drift; remove the later re-exports and rely on the single job-level definitions, updating any steps that referenced the duplicate definitions to consume the job-scoped env variables (OPENHUMAN_TELEGRAM_BOT_USERNAME / VITE_TELEGRAM_BOT_USERNAME) instead so the value originates from one place only.

app/src/providers/ChatRuntimeProvider.tsx (1)
122-139: Minor inconsistency: `onIterationStart` reads from a ref while other handlers use `store.getState()`.

For consistency with the other handlers that were updated to avoid stale-closure issues, consider reading from the store directly:

```diff
  onIterationStart: (event: ChatIterationStartEvent) => {
-   const prev = inferenceStatusRef.current[event.thread_id];
+   const prev = store.getState().chatRuntime.inferenceStatusByThread[event.thread_id];
```

This is not a bug since `maxIterations` isn't accumulated (it's just preserved), but aligning the pattern would make the code more consistent and future-proof.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/providers/ChatRuntimeProvider.tsx` around lines 122 - 139, The onIterationStart handler in ChatRuntimeProvider.tsx reads previous state from inferenceStatusRef.current (inferenceStatusRef.current[event.thread_id]) while other handlers read from the Redux store to avoid stale-closure issues; update onIterationStart to read the prior thread status via store.getState().inferenceStatus (or the same selector used elsewhere) to obtain maxIterations before dispatching setInferenceStatusForThread, preserving the existing logic for phase, iteration, and maxIterations.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/channels/controllers/ops.rs`:
- Around line 442-455: In telegram_bot_username(), add branch-decision debug
logs that record which source was used: log when OPENHUMAN_TELEGRAM_BOT_USERNAME
is present and its value, when VITE_TELEGRAM_BOT_USERNAME is used and its value,
and when the staging vs production default is chosen (log which default was
returned). Use the project's logging facility (e.g., tracing::debug! or
log::debug!) and include the environment variable names or the constant
identifiers DEFAULT_TELEGRAM_BOT_USERNAME_STAGING /
DEFAULT_TELEGRAM_BOT_USERNAME_PRODUCTION in the messages so callers can trace
the resolution path.
- Around line 445-450: The current env checks in ops.rs (the branches reading
OPENHUMAN_TELEGRAM_BOT_USERNAME and VITE_TELEGRAM_BOT_USERNAME) return values
even if they are empty or whitespace; update those branches to trim the
retrieved string and only return it if !trim().is_empty() so blank overrides are
ignored (i.e., change the two if let Ok(v) = std::env::var(...) checks to
validate v.trim().is_empty() before returning).
---
Nitpick comments:
In @.github/workflows/release.yml:
- Around line 267-268: The two Telegram username environment variables
(OPENHUMAN_TELEGRAM_BOT_USERNAME and VITE_TELEGRAM_BOT_USERNAME) are being
defined twice—once at the job scope and again later—causing duplication and
drift; remove the later re-exports and rely on the single job-level definitions,
updating any steps that referenced the duplicate definitions to consume the
job-scoped env variables (OPENHUMAN_TELEGRAM_BOT_USERNAME /
VITE_TELEGRAM_BOT_USERNAME) instead so the value originates from one place only.
In `@app/src/providers/ChatRuntimeProvider.tsx`:
- Around line 122-139: The onIterationStart handler in ChatRuntimeProvider.tsx
reads previous state from inferenceStatusRef.current
(inferenceStatusRef.current[event.thread_id]) while other handlers read from the
Redux store to avoid stale-closure issues; update onIterationStart to read the
prior thread status via store.getState().inferenceStatus (or the same selector
used elsewhere) to obtain maxIterations before dispatching
setInferenceStatusForThread, preserving the existing logic for phase, iteration,
and maxIterations.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3827c4c9-97cb-44c1-88ed-971ddd04d190
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `app/src-tauri/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- `.github/workflows/release.yml`
- `app/src/components/LocalAIDownloadSnackbar.tsx`
- `app/src/components/intelligence/MemoryGraphMap.tsx`
- `app/src/pages/Conversations.tsx`
- `app/src/providers/ChatRuntimeProvider.tsx`
- `app/src/store/threadSlice.ts`
- `src/openhuman/channels/bus.rs`
- `src/openhuman/channels/controllers/ops.rs`
🚧 Files skipped from review as they are similar to previous changes (2)
- app/src/store/threadSlice.ts
- app/src/components/intelligence/MemoryGraphMap.tsx
```diff
+/// Resolve the managed Telegram bot username from env, or from staging vs production defaults using
+/// `OPENHUMAN_APP_ENV` / `VITE_OPENHUMAN_APP_ENV` (via `app_env_from_env`).
 fn telegram_bot_username() -> String {
-    std::env::var("OPENHUMAN_TELEGRAM_BOT_USERNAME")
-        .or_else(|_| std::env::var("VITE_TELEGRAM_BOT_USERNAME"))
-        .unwrap_or_else(|_| DEFAULT_TELEGRAM_BOT_USERNAME.to_string())
+    if let Ok(v) = std::env::var("OPENHUMAN_TELEGRAM_BOT_USERNAME") {
+        return v;
+    }
+    if let Ok(v) = std::env::var("VITE_TELEGRAM_BOT_USERNAME") {
+        return v;
+    }
+    if is_staging_app_env(app_env_from_env().as_deref()) {
+        return DEFAULT_TELEGRAM_BOT_USERNAME_STAGING.to_string();
+    }
+    DEFAULT_TELEGRAM_BOT_USERNAME_PRODUCTION.to_string()
 }
```
Add branch-decision debug logs for bot username resolution.
This new staging/production resolution path has no direct breadcrumb for which source was chosen, which makes routing regressions harder to trace.
As per coding guidelines "Add substantial, development-oriented logs while implementing features or fixes so issues are easy to trace end-to-end; log critical checkpoints including ... branch decisions ...".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/channels/controllers/ops.rs` around lines 442 - 455, In
telegram_bot_username(), add branch-decision debug logs that record which source
was used: log when OPENHUMAN_TELEGRAM_BOT_USERNAME is present and its value,
when VITE_TELEGRAM_BOT_USERNAME is used and its value, and when the staging vs
production default is chosen (log which default was returned). Use the project's
logging facility (e.g., tracing::debug! or log::debug!) and include the
environment variable names or the constant identifiers
DEFAULT_TELEGRAM_BOT_USERNAME_STAGING / DEFAULT_TELEGRAM_BOT_USERNAME_PRODUCTION
in the messages so callers can trace the resolution path.
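The branch-decision logging suggested above can be sketched as a pure function, which makes the chosen branch observable to tests as well as to logs. This is an illustrative sketch, not the project's code: `eprintln!` stands in for the project's tracing facility, the default constants carry placeholder values, and the `Option` parameters stand in for the two environment-variable reads.

```rust
// Illustrative sketch only: `eprintln!` stands in for the project's tracing
// facility, and the default values below are placeholders.
const DEFAULT_TELEGRAM_BOT_USERNAME_STAGING: &str = "staging_bot";
const DEFAULT_TELEGRAM_BOT_USERNAME_PRODUCTION: &str = "prod_bot";

/// Resolve the bot username from two override slots (standing in for the
/// `OPENHUMAN_TELEGRAM_BOT_USERNAME` / `VITE_TELEGRAM_BOT_USERNAME` env reads),
/// logging which branch produced the value and returning that label alongside it.
fn resolve_bot_username(
    primary: Option<&str>,
    fallback: Option<&str>,
    is_staging: bool,
) -> (String, &'static str) {
    if let Some(v) = primary {
        eprintln!("bot username from OPENHUMAN_TELEGRAM_BOT_USERNAME: {}", v);
        return (v.to_string(), "OPENHUMAN_TELEGRAM_BOT_USERNAME");
    }
    if let Some(v) = fallback {
        eprintln!("bot username from VITE_TELEGRAM_BOT_USERNAME: {}", v);
        return (v.to_string(), "VITE_TELEGRAM_BOT_USERNAME");
    }
    if is_staging {
        eprintln!("bot username from DEFAULT_TELEGRAM_BOT_USERNAME_STAGING");
        return (
            DEFAULT_TELEGRAM_BOT_USERNAME_STAGING.to_string(),
            "staging default",
        );
    }
    eprintln!("bot username from DEFAULT_TELEGRAM_BOT_USERNAME_PRODUCTION");
    (
        DEFAULT_TELEGRAM_BOT_USERNAME_PRODUCTION.to_string(),
        "production default",
    )
}
```

Returning the source label alongside the value keeps the resolution path assertable in unit tests without parsing log output.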
```rust
if let Ok(v) = std::env::var("OPENHUMAN_TELEGRAM_BOT_USERNAME") {
    return v;
}
if let Ok(v) = std::env::var("VITE_TELEGRAM_BOT_USERNAME") {
    return v;
}
```
Guard against empty Telegram bot username overrides.
Line 445 and Line 448 currently return env values as-is; blank/whitespace overrides will generate invalid t.me links.
🔧 Proposed fix
```diff
 fn telegram_bot_username() -> String {
     if let Ok(v) = std::env::var("OPENHUMAN_TELEGRAM_BOT_USERNAME") {
-        return v;
+        let normalized = v.trim().trim_start_matches('@').trim();
+        if !normalized.is_empty() {
+            return normalized.to_string();
+        }
     }
     if let Ok(v) = std::env::var("VITE_TELEGRAM_BOT_USERNAME") {
-        return v;
+        let normalized = v.trim().trim_start_matches('@').trim();
+        if !normalized.is_empty() {
+            return normalized.to_string();
+        }
     }
     if is_staging_app_env(app_env_from_env().as_deref()) {
         return DEFAULT_TELEGRAM_BOT_USERNAME_STAGING.to_string();
     }
     DEFAULT_TELEGRAM_BOT_USERNAME_PRODUCTION.to_string()
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/channels/controllers/ops.rs` around lines 445 - 450, The
current env checks in ops.rs (the branches reading
OPENHUMAN_TELEGRAM_BOT_USERNAME and VITE_TELEGRAM_BOT_USERNAME) return values
even if they are empty or whitespace; update those branches to trim the
retrieved string and only return it if !trim().is_empty() so blank overrides are
ignored (i.e., change the two if let Ok(v) = std::env::var(...) checks to
validate v.trim().is_empty() before returning).
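The guard in the proposed fix can also be factored into a small helper so the trim/strip/empty logic is testable on its own. The helper name is hypothetical; the normalization steps mirror the diff above (trim whitespace, strip leading `@` characters, treat blank values as absent so defaults still apply).

```rust
/// Hypothetical helper mirroring the proposed guard: trim whitespace, strip
/// leading `@` characters, and treat blank overrides as absent.
fn normalize_bot_username_override(raw: &str) -> Option<String> {
    let normalized = raw.trim().trim_start_matches('@').trim();
    if normalized.is_empty() {
        None
    } else {
        Some(normalized.to_string())
    }
}
```

Each env branch could then read, e.g., `if let Some(name) = std::env::var("OPENHUMAN_TELEGRAM_BOT_USERNAME").ok().and_then(|v| normalize_bot_username_override(&v)) { return name; }`.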
Resolves conflicts in threadSlice.ts and Conversations.tsx:
- Drop obsolete subscribeChatEvents useEffect — now handled by ChatRuntimeProvider from upstream (tinyhumansai#587).
- Keep /new and /clear slash command handling from feat/threads.
- Use composerBlocked guard (upstream) in handleSendMessage.
Summary
- Centralize realtime chat event handling in `ChatRuntimeProvider` so route changes do not drop in-flight socket events
- Move runtime UI state into Redux (`chatRuntime` slice) and make `Conversations` render from selectors instead of route-local event state

Test plan

- `yarn --cwd app test:unit --run src/store/__tests__/chatRuntimeSlice.test.ts`
- `yarn --cwd app compile`
- `yarn --cwd app lint`
- Checks (`format:check`, `lint`, `compile`, `rust:check`) pass

Related issues
Made with Cursor
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes