cue: pre-production hardening — forwarded sources, lifecycle, correctness, per-project layout #850
reachrazamair merged 6 commits into rc from
Conversation
Pipeline save called `computeCommonAncestorPath` / `isDescendantOrEqual` from the renderer (`usePipelinePersistence`), but the module imported Node's `path`, which the renderer strips — every save threw "path.resolve is not a function". Rewritten with pure string manipulation so it runs in both main and renderer. Same public API; adds Windows path coverage and a regression guard against reintroducing a Node `path` import.
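A minimal sketch of what such a pure-string rewrite can look like — the function names match the PR, but the bodies here are illustrative assumptions, not the project's actual code:

```typescript
// Illustrative pure-string helpers: no Node `path` import, so the module can
// evaluate in both the main process and the sandboxed renderer.
function splitSegments(p: string): string[] {
  // Treat backslashes as separators and drop empties from duplicate slashes.
  return p.replace(/\\/g, '/').split('/').filter((s) => s.length > 0);
}

// True when `child` equals `parent` or lives anywhere beneath it.
function isDescendantOrEqual(parent: string, child: string): boolean {
  const a = splitSegments(parent);
  const b = splitSegments(child);
  return a.length <= b.length && a.every((seg, i) => seg === b[i]);
}

// Longest shared leading segment run, returned as a POSIX-style path.
function computeCommonAncestorPath(paths: string[]): string | null {
  if (paths.length === 0) return null;
  const split = paths.map(splitSegments);
  const common: string[] = [];
  for (let i = 0; i < split[0].length; i++) {
    const seg = split[0][i];
    if (split.every((segs) => segs[i] === seg)) common.push(seg);
    else break;
  }
  return common.length > 0 ? '/' + common.join('/') : null;
}
```

Segment-wise comparison sidesteps the classic `startsWith` trap where `/a/b` appears to contain `/a/bc`.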
Pipeline users couldn't see what data reached a downstream agent via forwarding — only direct upstream edges were listed on the Upstream Sources panel, so inputs transitively flowing through relays were invisible. Now every downstream agent's panel lists direct AND forwarded sources in one unified box, each with the per-source template-variable chip users reference in prompts.

Also fixes a latent bug where the single-source completion path bypassed `include_output_from` / `forward_output_from` — those filters only applied to fan-in, so any UI toggle silently no-op'd on 1-source chains. The shared filter is extracted into `cue-output-filter.ts` and both paths route through it.

- `UpstreamSourcesPanel` extracted from `AgentConfigPanel` (pure SRP split) with forwarded rows alongside direct rows.
- `transitiveUpstream` pure graph walk honors `forwardOutput` flags, dedupes shared sources, terminates on cycles.
- `cue-output-filter` consolidates include/forward semantics; merges upstream-forwarded through `mergeUpstreamForwarded` with filter-list awareness.

Tests: +57 across new modules; regression gates for the single-source bypass in `cue-completion-chains`. Full suite clean (25,414 passing).
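The described walk can be sketched roughly as follows — the edge shape and forwarding semantics here are assumptions inferred from the summary, not the real `transitiveUpstream` implementation:

```typescript
interface PipelineEdge {
  source: string;
  target: string;
  // When set, the target node relays this source's output downstream.
  forwardOutput?: boolean;
}

// Collects direct upstream sources of `targetId` plus sources relayed through
// forwarding edges; the visited set both dedupes shared sources and
// terminates cycles.
function transitiveUpstream(targetId: string, edges: PipelineEdge[]): Set<string> {
  const result = new Set<string>();
  const visited = new Set<string>([targetId]);
  const walk = (nodeId: string, forwardedOnly: boolean) => {
    for (const edge of edges) {
      if (edge.target !== nodeId) continue;
      // Beyond the first hop, only edges flagged for forwarding contribute.
      if (forwardedOnly && !edge.forwardOutput) continue;
      if (visited.has(edge.source)) continue;
      visited.add(edge.source);
      result.add(edge.source);
      walk(edge.source, true);
    }
  };
  walk(targetId, false);
  return result;
}
```

Seeding the visited set with the target itself keeps a node out of its own source list even on cyclic graphs.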
Pre-release hardening pass covering feature-flag lifecycle, engine correctness, per-project layout scoping, and UI consistency.

Feature flag & toggle lifecycle
- Add maestroCue (plus symphony, usageStats) to the encoreFeatures default so fresh installs don't spuriously call disable() on first render.
- Auto-close CueModal + CueYamlEditor when the flag toggles OFF.
- Serialize enable/disable IPC on a Promise chain so rapid ON→OFF→ON toggles apply in user-triggered order, not IPC-response order.
- Surface toggle failures as user-visible toasts (was console.error only).
- Emit engineStarted/engineStopped activity-update events so the renderer refreshes queue counters immediately instead of waiting for the 10s poll.

Engine correctness
- Finalize DB status when onCueRun resolves after engine.stop(): prior behavior left runs stuck at `running` in the activity log forever.
- Wrap the output-prompt phase in try/finally so the DB row is always updated under double-failure.
- Record stale queued events as `timeout` entries with a droppedFromQueue payload so users can see why their run never fired.
- UTF-8-safe slicing via new cue-text-utils.ts (sliceTailByChars / sliceHeadByChars); swapped in at all SOURCE_OUTPUT_MAX_CHARS call sites to avoid splitting surrogate pairs when forwarding agent output.
- Guard negative sleep gaps in the recovery service (clock moved backward).
- Torn flag on the YAML watcher so events slipping past watcher.close() are rejected rather than triggering refresh on a torn-down session.

Per-project pipeline layout
- PipelineLayoutState gains a `perProject` map keyed by project root (selectedPipelineId + viewport). Saves go under the selected pipeline's project root, preserving sibling projects' state.
- V1 → V2 migration folds legacy top-level fields into perProject under a __default__ key; saves always stamp version: 2.

UI consistency
- Reset selectedNodeId/selectedEdgeId when the owning pipeline disappears (prevents stale composite IDs rendering empty panels).
- Toast refreshSession failures in CueYamlEditor — the file IS on disk; the user needs to know the engine didn't reload.
- Harden prompt-file path validation: reject empty keys, normalize Windows backslashes on non-Windows, reject `..` segments before path.resolve.

Test coverage — 56 new tests, zero regressions
- cue-text-utils: 18 surrogate-pair boundary tests
- pipeline-layout-store: 8 v1 → v2 migration + roundtrip tests
- cue-yaml-roundtrip: 20 end-to-end YAML + prompt file persistence tests
- cue-yaml-error-paths: 8 watcher torn-flag, negative-gap, validation tests
- cue-run-manager: 2 engine-stop-mid-run DB finalization tests
- useCueAutoDiscovery: 2 rapid-toggle serialization + ordering tests
- cue-ipc-handlers: 7 path-traversal rejection tests

Validated with full lint (tsc across 3 configs) and the 25,483-test suite passing on the cue-polish branch.
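The surrogate-pair hazard those slicing helpers guard against can be sketched like this — only the function names come from the list above; the real `cue-text-utils.ts` bodies may differ:

```typescript
// A naive String.slice can cut between a high surrogate (0xD800-0xDBFF) and
// its low surrogate (0xDC00-0xDFFF), producing a lone half that renders as
// U+FFFD. These sketches nudge the boundary off such a split.
function sliceTailByChars(text: string, maxChars: number): string {
  if (text.length <= maxChars) return text;
  let start = text.length - maxChars;
  // Boundary landed on a low surrogate: its high half was cut off, so drop
  // the orphaned unit entirely.
  const code = text.charCodeAt(start);
  if (code >= 0xdc00 && code <= 0xdfff) start += 1;
  return text.slice(start);
}

function sliceHeadByChars(text: string, maxChars: number): string {
  if (text.length <= maxChars) return text;
  let end = maxChars;
  // Last kept unit is a high surrogate: its low half would be severed, so
  // back off one unit.
  const code = text.charCodeAt(end - 1);
  if (code >= 0xd800 && code <= 0xdbff) end -= 1;
  return text.slice(0, end);
}
```

Emoji and many CJK extension characters occupy two UTF-16 code units, which is why plain index math on `String.length` is not enough here.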
📝 Walkthrough

Unifies Cue output processing with new filtering and UTF‑16‑safe slicing, hardens config watching and IPC prompt handling, improves run finalization and structured logging, adds transitive upstream UI logic and pipeline-layout v2 migration, and expands tests across backend and renderer.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
🧹 Nitpick comments (9)
src/main/ipc/handlers/cue.ts (1)
234-275: Path validation is solid; one optional tightening.

The layered checks (non-empty, not absolute, normalize `\` → `/` on POSIX, explicit `..` segment rejection, containment check, `.md` enforcement) cover the traversal classes I could think of — including Windows-authored keys like `prompts\sub.md` being treated as real path separators on macOS/Linux.

Minor nit (optional): line 253's check only rejects segments exactly equal to `..`. Keys like `foo/.` (a single-dot segment) still pass to `path.resolve`, which normalizes them away — harmless but inconsistent. If you want strictness, also reject `.` segments. Not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/ipc/handlers/cue.ts` around lines 234 - 275: The current segment rejection only blocks '..' but allows '.' (e.g., "foo/."), which is harmless but inconsistent; update the validation after computing normalizedKey in the cue write flow (look for normalizedKey and its split(/[/\\]/).some(...) check) to reject segments that are '.' or '..' (e.g., change the predicate to segment === '.' || segment === '..') and throw a similar Error like the existing `"cue:writeYaml: promptFiles key \"${relativePath}\" contains parent-directory segment"` (adjust the message to mention "." / parent-directory segments if desired) so both single-dot and double-dot segments are refused before path.resolve is used.

src/__tests__/main/cue/cue-ipc-handlers.test.ts (1)
339-409: Assert config write is also blocked on invalid `promptFiles` keys.

Right now the negative cases only assert `writeCuePromptFile` is not called. Add `expect(writeCueConfigFile).not.toHaveBeenCalled()` in each rejection case so partial YAML writes are caught too.

Suggested test hardening

```diff
 await expect(
   handler(null, {
     projectRoot: '/projects/test',
     content: 'subscriptions: []',
     promptFiles: { '': 'x' },
   })
 ).rejects.toThrow(/must be a non-empty string/);
 expect(writeCuePromptFile).not.toHaveBeenCalled();
+expect(writeCueConfigFile).not.toHaveBeenCalled();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/main/cue/cue-ipc-handlers.test.ts` around lines 339 - 409: The tests for 'cue:writeYaml' currently check that writeCuePromptFile is not called on invalid promptFiles keys but miss asserting that writeCueConfigFile is also not invoked; update each negative test case (those using registerAndGetHandler('cue:writeYaml') and expecting rejects for empty string, absolute paths, parent-directory segments, resolving outside .maestro/prompts, mixed-in parent segments, and non-.md extensions) to add expect(writeCueConfigFile).not.toHaveBeenCalled() alongside the existing expect(writeCuePromptFile).not.toHaveBeenCalled() so partial config writes are also prevented.

src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx (1)
356-362: Optional: memoize the derived `pipeline` lookup.

`pipelines.find((p) => p.nodes.some((n) => n.id === node.id))` runs on every render and walks every pipeline/node pair. `agentPipelines` at line 106 already computes a related derivation (filter by sessionId). Consider extracting a single `useMemo` that derives both, or at minimum memoize this one. Not urgent — the dataset is typically small.

♻️ Possible refactor

```diff
+  const containingPipeline = useMemo(
+    () => pipelines.find((p) => p.nodes.some((n) => n.id === node.id)),
+    [pipelines, node.id]
+  );
 ...
   <UpstreamSourcesPanel
     theme={theme}
     incomingAgentEdges={incomingAgentEdges ?? []}
     onUpdateEdge={onUpdateEdge}
-    pipeline={pipelines.find((p) => p.nodes.some((n) => n.id === node.id))}
+    pipeline={containingPipeline}
     targetNodeId={node.id}
   />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx` around lines 356 - 362: The inline pipeline lookup passed to UpstreamSourcesPanel is recalculated on every render; wrap the computation pipelines.find((p) => p.nodes.some((n) => n.id === node.id)) in a useMemo (or extend the existing agentPipelines memo) to memoize the derived pipeline by depending on pipelines and node.id; update the prop to pass the memoized value instead of computing inline so UpstreamSourcesPanel receives a stable reference.

src/renderer/hooks/useCueAutoDiscovery.ts (1)
86-123: LGTM — serialized toggle chain correctly implements the ordering invariant.

`prevMaestroCueEnabledRef.current` is updated synchronously (line 80) before the chain step is queued, so each rerender captures its own transition on the chain regardless of IPC latency. The inner try/catch per branch keeps the chain alive across failures, which is the right call for a long-lived hook.

Optional: consider threading enable/disable failures through `captureException` from `src/utils/sentry.ts` in addition to the toast. As per coding guidelines, unexpected errors should flow to Sentry for production tracking — a failed IPC here is worth an observability breadcrumb even when the user-facing toast already handles recovery.

♻️ Possible addition

```diff
 } catch (err) {
   console.error('[CueAutoDiscovery] Failed to enable Cue:', err);
+  captureException(err instanceof Error ? err : new Error(String(err)), {
+    extra: { operation: 'cue.enable' },
+  });
   notifyToast({
```

As per coding guidelines: "Use Sentry utilities (`captureException`, `captureMessage`) from `src/utils/sentry.ts` for explicit error reporting with context."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/hooks/useCueAutoDiscovery.ts` around lines 86 - 123: Add Sentry reporting for enable/disable failures by calling captureException from src/utils/sentry.ts inside both catch blocks in the toggleChain step (the catches that currently console.error('[CueAutoDiscovery] Failed to enable Cue:', err) and console.error('[CueAutoDiscovery] Failed to disable Cue:', err)). After logging the error and before notifyToast, invoke captureException(err, { extra: { action: 'maestro.cue.enable' }}) for the enable branch and captureException(err, { extra: { action: 'maestro.cue.disable' }}) for the disable branch (or similar context) so IPC failures are sent to Sentry alongside the toast.

src/main/cue/pipeline-layout-store.ts (2)
82-88: Minor: defensive type check for parsed content.

If the on-disk JSON is valid but not an object (e.g., `null`, an array, or a string from a corrupted write), `migrateLegacyLayout` receives it and returns a malformed object (e.g., spreading a string yields index-keyed chars) rather than being treated as invalid. A guard would make the file-corruption path return `null` cleanly like other error paths do.

🛡️ Possible guard

```diff
 const parsed = JSON.parse(content) as PipelineLayoutState;
+if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
+  return null;
+}
 return migrateLegacyLayout(parsed);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/cue/pipeline-layout-store.ts` around lines 82 - 88: Guard the parsed JSON before calling migrateLegacyLayout: after const parsed = JSON.parse(content) as PipelineLayoutState, validate that parsed is a plain non-null object (e.g., typeof parsed === 'object' && parsed !== null && !Array.isArray(parsed)); if it fails, call captureException/new Error with context ({ extra: { filePath, operation: 'cue.loadPipelineLayout', reason: 'invalid parsed type' }}) and return null so corrupted non-object JSON (null/array/string) doesn't get passed into migrateLegacyLayout; keep the migrateLegacyLayout call only for valid object inputs.
44-61: Optional: strip legacy top-level fields after migration.

`{ ...layout, version: 2, perProject }` preserves the now-obsolete top-level `selectedPipelineId` and `viewport`, which then get re-written to disk on every subsequent save (they're in the spread at line 68). Functionally harmless — readers only consume `perProject` — but the file never naturally drops the legacy shape.

♻️ Possible cleanup

```diff
-  return { ...layout, version: 2, perProject };
+  const { selectedPipelineId: _legacySelected, viewport: _legacyViewport, ...rest } = layout;
+  return { ...rest, version: 2, perProject };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/cue/pipeline-layout-store.ts` around lines 44 - 61: The migration leaves obsolete top-level fields selectedPipelineId and viewport in the returned object, causing them to be re-saved; update migrateLegacyLayout to return the migrated state without those legacy top-level fields by constructing the result that spreads layout but omits selectedPipelineId and viewport (or delete those keys from a shallow copy) and then adds version: 2 and perProject, referencing the migrateLegacyLayout function and the selectedPipelineId/viewport properties so the legacy shape is not persisted after migration.

src/shared/cue-path-utils.ts (1)
18-22: Minor: `detectSeparator` treats mixed-slash paths as Windows.

Any input containing a single `\` is classified as backslash-separated, which then drives the entire normalize/split. Forward-slash paths on Windows (e.g. `C:/foo/bar`, which Node accepts and returns in some contexts) will instead take the `/` branch since the regex requires `\`. The current call sites feed os-native paths from `session.projectRoot`, so this is unlikely to bite in practice, but worth a comment noting the assumption about consistent separators within a single input set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shared/cue-path-utils.ts` around lines 18 - 22: The detectSeparator function currently treats any path containing a backslash as Windows-style; update detectSeparator to handle mixed separators by checking for both '/' and '\' and returning the separator that occurs first in the string (if both present), otherwise fall back to the existing checks (drive-letter/backslash or includes backslash → '\\', else '/'). Change the implementation of detectSeparator and add a short comment above it explaining the assumption about consistent separators within inputs and the chosen first-occurrence tie-breaker; reference the function name detectSeparator and any callers that rely on normalize/split behavior.

src/main/cue/cue-output-filter.ts (1)
28-53: Per-source map key collision on duplicate `sessionName`.

`perSourceOutputs[c.sessionName] = c.output` (line 39) and `forwardedOutputs[c.sessionName] = c.output` (line 47) silently overwrite when two completions share a sessionName (possible if the user renames or has multiple sessions with the same display name across projects). The fan-in tracker likely guarantees uniqueness in practice, but a brief JSDoc note — or deduping by sessionId and surfacing a collision warning — would make the contract explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/cue/cue-output-filter.ts` around lines 28 - 53: buildFilteredOutputs currently uses sessionName as the map key (perSourceOutputs and forwardedOutputs) which can silently overwrite entries when multiple completions share the same sessionName; update the function to use sessionId as the canonical map key (or a composite key like `${sessionName}:${sessionId}`) when populating perSourceOutputs and forwardedOutputs, and add a collision-detection path that logs or surfaces a warning if a sessionName is duplicated (check existing keys before assignment and include sessionId in the warning). Also add a short JSDoc on buildFilteredOutputs describing the keying contract (that maps are keyed by sessionId or composite key) so callers know uniqueness guarantees.

src/main/cue/cue-run-manager.ts (1)
140-154: Redundant DB write for dropped queue event.

`safeRecordCueEvent` already inserts the row with `status: 'timeout'`, so the immediate `safeUpdateCueEventStatus(droppedRunId, 'timeout')` on line 154 is a no-op update of a row that was just written with the same status. It's harmless but adds an extra SQL round-trip per stale drop. Either drop the update call, or (if you want to mirror the running→final status pattern used elsewhere) record as `'running'` first and then update to `'timeout'` for consistency.

♻️ Proposed simplification

```diff
 safeRecordCueEvent({
   id: droppedRunId,
   type: entry.event.type,
   triggerName: entry.event.triggerName,
   sessionId,
   subscriptionName: entry.subscriptionName,
   status: 'timeout',
   payload: JSON.stringify({
     ...entry.event.payload,
     droppedFromQueue: true,
     queuedForMs: ageMs,
   }),
 });
-safeUpdateCueEventStatus(droppedRunId, 'timeout');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/cue/cue-run-manager.ts` around lines 140 - 154, The code creates droppedRunId and calls safeRecordCueEvent(...) with status: 'timeout' then immediately calls safeUpdateCueEventStatus(droppedRunId, 'timeout'), which is a redundant DB write; remove the no-op by deleting the safeUpdateCueEventStatus(droppedRunId, 'timeout') call, or if you prefer to keep the running→final pattern used elsewhere, change the initial call to safeRecordCueEvent(...) to use status: 'running' and then keep safeUpdateCueEventStatus(droppedRunId, 'timeout') to flip it to final state; locate the calls by the symbols droppedRunId, safeRecordCueEvent, and safeUpdateCueEventStatus to make the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts`:
- Around line 71-85: The test currently calls cleanup() before any watched
change is emitted so it doesn't exercise the debounced path; update the test
that uses watchCueConfigFile(projectRoot, onChange) to first trigger a file
change in the watched config (e.g., touch or write the cue config under
projectRoot to cause the watcher to schedule the debounced callback), then
immediately call cleanup(), then await beyond the 1s debounce window and assert
onChange was not called; reference watchCueConfigFile, onChange, cleanup and
projectRoot when locating and changing the test.
In `@src/__tests__/shared/cue-path-utils.test.ts`:
- Around line 82-86: The current test only dynamically imports
'../../shared/cue-path-utils' which can pass even if the file contains Node
built-in imports; change the test to add a static source assertion by reading
the module source (fs.readFileSync) and assert it does not contain patterns for
importing the path builtin (e.g. /(?:require\(|from\s+['"])(?:node:)?path['"]/
or similar), then still import the module to ensure it evaluates; update the
test case in src/__tests__/shared/cue-path-utils.test.ts that uses
import('../../shared/cue-path-utils') to include this additional read-and-regex
assertion against the file contents.
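The static guard described above could be shaped like the following — the regex and helper name here are illustrative, not the project's actual test code:

```typescript
// Pattern matching `require('path')`, `from 'path'`, and the `node:path`
// specifier; tune to the project's import styles.
const NODE_PATH_IMPORT = /(?:require\(\s*['"]|from\s+['"])(?:node:)?path['"]/;

// Throws when the given module source reintroduces a Node `path` import.
function assertNoNodePathImport(source: string): void {
  if (NODE_PATH_IMPORT.test(source)) {
    throw new Error('shared path utils must not import Node `path`');
  }
}

// In the real test this would receive the file contents, e.g. the result of
// fs.readFileSync on the shared module's source path.
```

Pairing this with the dynamic import keeps both properties covered: the source never names the builtin, and the module still evaluates cleanly.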
In `@src/main/cue/cue-output-filter.ts`:
- Around line 62-75: mergeUpstreamForwarded only checks forward_output_from
names but buildFilteredOutputs matches either sessionName or sessionId; update
mergeUpstreamForwarded to accept the same matching semantics by testing both the
upstream map key (name) and the corresponding sessionId when deciding to include
an entry: resolve the sessionId for the upstream key (from the same session
registry/metadata used elsewhere or from the upstreamForwarded payload if you
add sessionId there) and change the inclusion check to forwardSet.has(name) ||
forwardSet.has(sessionId). Keep the rest of the function (merged copy) the same
so upstream-forwarded entries are not silently dropped when users configured
forward_output_from with session IDs.
In `@src/main/cue/cue-run-manager.ts`:
- Around line 245-256: The log emitted in the engine-stop branch (inside the if
(!activeRuns.has(runId)) block) currently calls deps.onLog with only a string;
change this to include the structured runFinished payload so live listeners see
the transition: call deps.onLog('cue', { type: 'runFinished', runId, sessionId,
subscriptionName, status: runResult.status }) after calling
safeUpdateCueEventStatus(runId, runResult.status) (keeping the human-readable
message if desired as a second argument or inside the same payload per project
conventions) so behavior matches the other run-finish cases.
In `@src/shared/cue-path-utils.ts`:
- Around line 34-42: The normalize function collapses runs of separators which
strips the leading double backslash of UNC paths; update normalize to detect and
preserve the UNC prefix (leading "\\\\") before applying the collapse regex or
special-case collapsing so it only collapses internal runs, and ensure
isWindowsRoot is extended or complemented to recognize UNC roots (e.g.,
"\\\\server\\share") so downstream functions like isDescendantOrEqual and
computeCommonAncestorPath get valid paths; add unit tests for UNC examples such
as "\\\\server\\share\\path" and "\\\\server\\share" to prevent regressions.
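The UNC-preserving normalization that prompt calls for could be sketched as follows — the function names mirror the review, but the bodies are hypothetical, not the repo's code:

```typescript
// Collapse runs of separators while keeping the UNC prefix (\\server\share)
// that a blanket collapse would otherwise destroy.
function normalizeSeparators(p: string): string {
  const isUnc = p.startsWith('\\\\');
  const collapsed = p.replace(/[\\/]+/g, '\\');
  // Re-prepend the backslash eaten by the collapse when the path was UNC.
  return isUnc ? '\\' + collapsed : collapsed;
}

// Recognizes both drive roots (C:\) and UNC roots (\\server\share).
function isWindowsRoot(p: string): boolean {
  return /^[a-zA-Z]:[\\/]?$/.test(p) || /^\\\\[^\\/]+\\[^\\/]+\\?$/.test(p);
}
```

A UNC root needs both a server and a share segment, which is why `\\server` alone is rejected.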
---
Nitpick comments:
In `@src/__tests__/main/cue/cue-ipc-handlers.test.ts`:
- Around line 339-409: The tests for 'cue:writeYaml' currently check that
writeCuePromptFile is not called on invalid promptFiles keys but miss asserting
that writeCueConfigFile is also not invoked; update each negative test case
(those using registerAndGetHandler('cue:writeYaml') and expecting rejects for
empty string, absolute paths, parent-directory segments, resolving outside
.maestro/prompts, mixed-in parent segments, and non-.md extensions) to add
expect(writeCueConfigFile).not.toHaveBeenCalled() alongside the existing
expect(writeCuePromptFile).not.toHaveBeenCalled() so partial config writes are
also prevented.
In `@src/main/cue/cue-output-filter.ts`:
- Around line 28-53: buildFilteredOutputs currently uses sessionName as the map
key (perSourceOutputs and forwardedOutputs) which can silently overwrite entries
when multiple completions share the same sessionName; update the function to use
sessionId as the canonical map key (or a composite key like
`${sessionName}:${sessionId}`) when populating perSourceOutputs and
forwardedOutputs, and add a collision-detection path that logs or surfaces a
warning if a sessionName is duplicated (check existing keys before assignment
and include sessionId in the warning). Also add a short JSDoc on
buildFilteredOutputs describing the keying contract (that maps are keyed by
sessionId or composite key) so callers know uniqueness guarantees.
In `@src/main/cue/cue-run-manager.ts`:
- Around line 140-154: The code creates droppedRunId and calls
safeRecordCueEvent(...) with status: 'timeout' then immediately calls
safeUpdateCueEventStatus(droppedRunId, 'timeout'), which is a redundant DB
write; remove the no-op by deleting the safeUpdateCueEventStatus(droppedRunId,
'timeout') call, or if you prefer to keep the running→final pattern used
elsewhere, change the initial call to safeRecordCueEvent(...) to use status:
'running' and then keep safeUpdateCueEventStatus(droppedRunId, 'timeout') to
flip it to final state; locate the calls by the symbols droppedRunId,
safeRecordCueEvent, and safeUpdateCueEventStatus to make the change.
In `@src/main/cue/pipeline-layout-store.ts`:
- Around line 82-88: Guard the parsed JSON before calling migrateLegacyLayout:
after const parsed = JSON.parse(content) as PipelineLayoutState, validate that
parsed is a plain non-null object (e.g., typeof parsed === 'object' && parsed
!== null && !Array.isArray(parsed)); if it fails, call captureException/new
Error with context ({ extra: { filePath, operation: 'cue.loadPipelineLayout',
reason: 'invalid parsed type' }}) and return null so corrupted non-object JSON
(null/array/string) doesn't get passed into migrateLegacyLayout; keep
migrateLegacyLayout call only for valid object inputs.
- Around line 44-61: The migration leaves obsolete top-level fields
selectedPipelineId and viewport in the returned object, causing them to be
re-saved; update migrateLegacyLayout to return the migrated state without those
legacy top-level fields by constructing the result that spreads layout but omits
selectedPipelineId and viewport (or delete those keys from a shallow copy) and
then adds version: 2 and perProject, referencing the migrateLegacyLayout
function and the selectedPipelineId/viewport properties so the legacy shape is
not persisted after migration.
In `@src/main/ipc/handlers/cue.ts`:
- Around line 234-275: The current segment rejection only blocks '..' but allows
'.' (e.g., "foo/.") which is harmless but inconsistent; update the validation
after computing normalizedKey in the cue write flow (look for normalizedKey and
its split(/[/\\]/).some(...) check) to reject segments that are '.' or '..'
(e.g., change the predicate to segment === '.' || segment === '..') and throw a
similar Error like the existing `"cue:writeYaml: promptFiles key
\"${relativePath}\" contains parent-directory segment"` (adjust message to
mention "."/parent-directory segment if desired) so both single-dot and
double-dot segments are refused before path.resolve is used.
In `@src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx`:
- Around line 356-362: The inline pipeline lookup passed to UpstreamSourcesPanel
is recalculated on every render; wrap the computation pipelines.find((p) =>
p.nodes.some((n) => n.id === node.id)) in a useMemo (or extend the existing
agentPipelines memo) to memoize the derived pipeline by depending on pipelines
and node.id; update the prop to pass the memoized value instead of computing
inline so UpstreamSourcesPanel receives a stable reference.
In `@src/renderer/hooks/useCueAutoDiscovery.ts`:
- Around line 86-123: Add Sentry reporting for enable/disable failures by
calling captureException from src/utils/sentry.ts inside both catch blocks in
the toggleChain step (the catches that currently
console.error('[CueAutoDiscovery] Failed to enable Cue:', err) and
console.error('[CueAutoDiscovery] Failed to disable Cue:', err)). After logging
the error and before notifyToast, invoke captureException(err, { extra: {
action: 'maestro.cue.enable' }}) for the enable branch and captureException(err,
{ extra: { action: 'maestro.cue.disable' }}) for the disable branch (or similar
context) so IPC failures are sent to Sentry alongside the toast.
In `@src/shared/cue-path-utils.ts`:
- Around line 18-22: The detectSeparator function currently treats any path
containing a backslash as Windows-style; update detectSeparator to handle mixed
separators by checking for both '/' and '\' and returning the separator that
occurs first in the string (if both present), otherwise fall back to the
existing checks (drive-letter/backslash or includes backslash → '\\', else '/').
Change the implementation of detectSeparator and add a short comment above it
explaining the assumption about consistent separators within inputs and the
chosen first-occurrence tie-breaker; reference the function name detectSeparator
and any callers that rely on normalize/split behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 6d95e1e3-1791-451a-936c-a7e09c197e66
📒 Files selected for processing (37)
- src/__tests__/main/cue/cue-completion-chains.test.ts
- src/__tests__/main/cue/cue-concurrency.test.ts
- src/__tests__/main/cue/cue-engine.test.ts
- src/__tests__/main/cue/cue-ipc-handlers.test.ts
- src/__tests__/main/cue/cue-output-filter.test.ts
- src/__tests__/main/cue/cue-run-manager.test.ts
- src/__tests__/main/cue/cue-text-utils.test.ts
- src/__tests__/main/cue/cue-yaml-error-paths.test.ts
- src/__tests__/main/cue/cue-yaml-roundtrip.test.ts
- src/__tests__/main/cue/pipeline-layout-store.test.ts
- src/__tests__/renderer/components/CuePipelineEditor/panels/UpstreamSourcesPanel.test.tsx
- src/__tests__/renderer/components/CuePipelineEditor/utils/pipelineToYaml.upstream.test.ts
- src/__tests__/renderer/components/CuePipelineEditor/utils/transitiveUpstream.test.ts
- src/__tests__/renderer/hooks/useCueAutoDiscovery.test.ts
- src/__tests__/shared/cue-path-utils.test.ts
- src/main/cue/config/cue-config-repository.ts
- src/main/cue/cue-completion-service.ts
- src/main/cue/cue-engine.ts
- src/main/cue/cue-executor.ts
- src/main/cue/cue-fan-in-tracker.ts
- src/main/cue/cue-output-filter.ts
- src/main/cue/cue-recovery-service.ts
- src/main/cue/cue-run-manager.ts
- src/main/cue/cue-text-utils.ts
- src/main/cue/pipeline-layout-store.ts
- src/main/ipc/handlers/cue.ts
- src/renderer/App.tsx
- src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx
- src/renderer/components/CuePipelineEditor/panels/UpstreamSourcesPanel.tsx
- src/renderer/components/CuePipelineEditor/utils/transitiveUpstream.ts
- src/renderer/components/CueYamlEditor/CueYamlEditor.tsx
- src/renderer/hooks/cue/usePipelineLayout.ts
- src/renderer/hooks/cue/usePipelineState.ts
- src/renderer/hooks/useCueAutoDiscovery.ts
- src/shared/cue-path-utils.ts
- src/shared/cue-pipeline-types.ts
- src/shared/settingsMetadata.ts
```typescript
export function mergeUpstreamForwarded(
  forwardedOutputs: Record<string, string>,
  upstreamForwarded: Record<string, string> | undefined,
  sub: CueSubscription
): Record<string, string> {
  if (!upstreamForwarded) return forwardedOutputs;
  const forwardSet = sub.forward_output_from ? new Set(sub.forward_output_from) : null;
  const merged = { ...forwardedOutputs };
  for (const [name, output] of Object.entries(upstreamForwarded)) {
    if (!forwardSet || forwardSet.has(name)) {
      merged[name] = output;
    }
  }
  return merged;
}
```
mergeUpstreamForwarded filters by name only — inconsistent with buildFilteredOutputs.

`buildFilteredOutputs` matches `include_output_from` / `forward_output_from` against either sessionName or sessionId (lines 34, 46). `mergeUpstreamForwarded` only checks `forwardSet.has(name)` where `name` is the upstream map key (sessionName). If a user configures `forward_output_from` with session IDs, direct sources still forward correctly but upstream-forwarded entries silently drop. This is a subtle behavioral split between the fan-in/single-source path and the chain-propagation path.
Since the upstream map is name-keyed by construction, the simplest fix is to also include the id-based key (or preserve session-id metadata in the upstream payload). A pragmatic interim fix:
🛠️ Option A — accept both keyings if the session registry is reachable

```typescript
export function mergeUpstreamForwarded(
  forwardedOutputs: Record<string, string>,
  upstreamForwarded: Record<string, string> | undefined,
  sub: CueSubscription
): Record<string, string> {
  if (!upstreamForwarded) return forwardedOutputs;
  const forwardSet = sub.forward_output_from ? new Set(sub.forward_output_from) : null;
  const merged = { ...forwardedOutputs };
  for (const [name, output] of Object.entries(upstreamForwarded)) {
    if (!forwardSet || forwardSet.has(name)) {
      merged[name] = output;
    }
  }
  return merged;
}
```

At minimum, document the name-only contract in the JSDoc so callers know session-id entries in `forward_output_from` only match on the direct-source path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/cue/cue-output-filter.ts` around lines 62 - 75,
mergeUpstreamForwarded only checks forward_output_from names but
buildFilteredOutputs matches either sessionName or sessionId; update
mergeUpstreamForwarded to accept the same matching semantics by testing both the
upstream map key (name) and the corresponding sessionId when deciding to include
an entry: resolve the sessionId for the upstream key (from the same session
registry/metadata used elsewhere or from the upstreamForwarded payload if you
add sessionId there) and change the inclusion check to forwardSet.has(name) ||
forwardSet.has(sessionId). Keep the rest of the function (merged copy) the same
so upstream-forwarded entries are not silently dropped when users configured
forward_output_from with session IDs.
Greptile Summary

This PR hardens Maestro Cue for production across two commits: surfacing transitive forwarded sources in the pipeline editor's …
Confidence Score: 4/5

Safe to merge after addressing the missing `sub.filter` check in fan-in dispatch paths; all other changes are well-implemented and tested. The PR is a thorough hardening pass with 113 new tests and clean TSC. The one P1 finding — `sub.filter` not applied by the fan-in tracker before dispatching, while the single-source path gained that check in this same PR — is a real present defect for anyone using `filter:` clauses on multi-source subscriptions. All other findings are P2 style or narrow edge cases.

- src/main/cue/cue-fan-in-tracker.ts (handleCompletion dispatch and timeout-continue path missing the `sub.filter` check)
Sequence Diagram

```mermaid
sequenceDiagram
    participant Renderer as Renderer (useCueAutoDiscovery)
    participant IPC as IPC (cue.ts handlers)
    participant Engine as CueEngine
    participant CompSvc as CueCompletionService
    participant FanIn as CueFanInTracker
    participant RunMgr as CueRunManager
    participant DB as CueDB
    Note over Renderer,Engine: Toggle ON serialized via toggleChainRef
    Renderer->>IPC: cue:enable
    IPC->>Engine: start()
    Engine-->>Renderer: engineStarted activity event
    Note over CompSvc,FanIn: agent.completed received
    Engine->>CompSvc: notifyAgentCompleted(sessionId, data)
    alt sources.length == 1
        CompSvc->>CompSvc: sliceTailByChars + buildFilteredOutputs
        CompSvc->>CompSvc: sub.filter check ✓
        CompSvc->>Engine: onDispatch to RunManager.execute
    else fan-in sources.length > 1
        CompSvc->>FanIn: handleCompletion
        FanIn->>FanIn: sliceTailByChars + buildFilteredOutputs
        FanIn->>FanIn: sub.filter check MISSING
        FanIn->>Engine: dispatchSubscription to RunManager.execute
    end
    RunMgr->>DB: safeRecordCueEvent(running)
    RunMgr->>RunMgr: onCueRun await
    alt engine.stop() during run
        RunMgr->>DB: safeUpdateCueEventStatus(finalStatus)
        RunMgr-->>RunMgr: return, no chain propagation
    else run completes normally
        RunMgr->>DB: updateCueEventStatus(status)
        RunMgr->>Engine: onRunCompleted to notifyAgentCompleted chain
    end
    Engine-->>Renderer: engineStopped activity event
```
```diff
@@ -15,6 +15,7 @@ import type { MainLogLevel } from '../../shared/logger-types';
 import type { CueEvent, CueRunResult, CueSettings, CueSubscription } from './cue-types';
 import { updateCueEventStatus, safeRecordCueEvent, safeUpdateCueEventStatus } from './cue-db';
+import { SOURCE_OUTPUT_MAX_CHARS } from './cue-fan-in-tracker';
```
Import via re-export shim instead of canonical source
SOURCE_OUTPUT_MAX_CHARS is imported from cue-fan-in-tracker, which is only a re-export shim for backward compatibility with existing callers. cue-completion-service.ts (added in the same PR) correctly imports it directly from the canonical source. New call sites introduced in this PR should do the same.
```diff
-import { SOURCE_OUTPUT_MAX_CHARS } from './cue-fan-in-tracker';
+import { SOURCE_OUTPUT_MAX_CHARS } from './cue-output-filter';
```
```typescript
if (projectView?.viewport) {
  pendingSavedViewportRef.current = projectView.viewport;
} else if (savedLayout.viewport) {
  pendingSavedViewportRef.current = savedLayout.viewport;
}
```
Fallback to top-level `viewport` may apply another project's viewport
When projectView?.viewport is falsy (a perProject entry exists for this project but has no viewport), the code falls back to savedLayout.viewport. The top-level savedLayout.viewport is written on every save to reflect the currently active project (line 170). If the user last saved while on project B, then reopens on project A, and project A's perProject entry somehow has no viewport, it would restore project B's camera position. In practice this edge case is narrow (saves always write viewport), but consider guarding: else if (!projectView && savedLayout.viewport) to make the intent explicit.
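The suggested guard can be sketched as a small pure resolver. The types and the `resolveViewport` name here are hypothetical illustrations of the reviewed logic, not the real hook code:

```typescript
interface Viewport {
  x: number;
  y: number;
  zoom: number;
}

interface PipelineLayoutState {
  viewport?: Viewport; // top-level: reflects whichever project was last saved
  perProject?: Record<string, { viewport?: Viewport }>;
}

// Restore this project's viewport; fall back to the top-level viewport only
// when the project has no perProject entry at all, so an existing entry
// without a viewport never inherits another project's camera position.
function resolveViewport(
  saved: PipelineLayoutState,
  projectRoot: string
): Viewport | undefined {
  const projectView = saved.perProject?.[projectRoot];
  if (projectView?.viewport) return projectView.viewport;
  if (!projectView && saved.viewport) return saved.viewport;
  return undefined;
}
```

With this shape, a project that has been saved without a viewport keeps its default camera instead of borrowing project B's.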
Batch of targeted fixes from code review of the Cue hardening PR.

Correctness
- cue-path-utils `normalize()` now preserves the UNC `\\` prefix and treats `\\server\share` as a non-strippable root. Previously `/\\+/g` collapsed the leading `\\` to `\`, breaking every isDescendantOrEqual / computeCommonAncestorPath call for network shares.
- cue-path-utils `detectSeparator` handles mixed-separator inputs by returning whichever separator occurs first — the prior check returned `\` whenever any backslash appeared, producing a wrong split on fixtures like `C:\proj/sub`.
- cue-run-manager: the engine-stopped-mid-run log now carries the structured `runFinished` payload (runId, sessionId, subscriptionName, status) so renderer listeners observe the transition identically to a normal completion. Previously it was a 2-arg log, silent on the wire.
- cue-run-manager: dropped the redundant `safeUpdateCueEventStatus` call after recording a stale queued event — the record already writes the final `timeout` status, so the update was a no-op DB write.
- IPC writeYaml now rejects both `.` and `..` segments (was only `..`), since `foo/.` and `foo` would otherwise produce two keep-set entries pointing at the same on-disk file.
- pipeline-layout-store guards against JSON that parses to non-object values (null, array, primitive) — feeding those to migrateLegacyLayout would crash on property access. Returns null so the app recovers with a fresh layout.
- pipeline-layout-store migration now strips the legacy top-level `selectedPipelineId` / `viewport` after folding them into `perProject`, so re-saves don't persist drifted duplicates.

Observability
- useCueAutoDiscovery captures enable/disable failures to Sentry with action context (maestro.cue.enable / maestro.cue.disable) alongside the existing toast + console.error.
Documentation
- cue-output-filter documents the load-bearing sessionName keying contract on `buildFilteredOutputs` (template-variable substitution depends on names) and the known name-only match limitation in `mergeUpstreamForwarded` so future callers understand the scope.

Perf
- AgentConfigPanel memoizes the owning-pipeline lookup passed to UpstreamSourcesPanel so downstream memoization isn't defeated by a fresh object reference on every render.

Tests
- cue-yaml-error-paths watcher test now writes a real cue.yaml to trigger the debounced callback before cleanup() — previously the test called cleanup immediately and never exercised the path the torn flag guards.
- cue-path-utils test adds a static source-regex assertion that the module does not import Node builtins, plus five UNC regression tests (`\\server\share` paths).
- pipeline-layout-store tests expanded: strip-legacy assertion, and a suite that rejects JSON parsing to null/[]/"string"/number/bool.
- cue-ipc-handlers negative tests now also assert writeCueConfigFile is not called when prompt-file validation rejects, so a partial save can't slip through. Added a `.` segment rejection test.
- cue-run-manager engine-stop-mid-run test asserts the structured runFinished payload.

Validated: lint clean, full suite 25,490 passing (7 new), 0 regressions.
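The first-occurrence separator heuristic described above can be sketched as follows. This is a hypothetical reconstruction for illustration, not the shipped `cue-path-utils` code:

```typescript
// Return whichever separator appears first in the input; a path with no
// backslash (or no separator at all) uses '/'. The prior logic returned
// '\' whenever any backslash was present, mis-splitting mixed inputs.
function detectSeparator(p: string): '/' | '\\' {
  const slash = p.indexOf('/');
  const backslash = p.indexOf('\\');
  if (backslash === -1) return '/';
  if (slash === -1) return '\\';
  return slash < backslash ? '/' : '\\';
}
```

Under this rule a mostly-POSIX path like `src/main\cue` splits on `/` (the slash occurs first), while a pure Windows path like `C:\proj\sub` still splits on `\`.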
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx (1)
190-190: ⚠️ Potential issue | 🟡 Minor — Forwarded-only upstream case not reflected in layout reservation or input-prompt hint.

`UpstreamSourcesPanel` now renders whenever there are direct or forwarded sources (the comment on lines 368–370 explicitly calls this out: "an agent with only forwarded upstream still sees the box"). However, two sibling UI decisions still gate purely on `hasIncomingAgentEdges` (direct edges only):

- Line 190: `minHeight: hasIncomingAgentEdges ? 100 : 0` — for an agent whose upstream is 100% forwarded (no direct incoming agent edges), the prompts row won't shrink to reserve space for the panel card, so the card can push below the visible area more aggressively than intended.
- Lines 251–262 / 268–275: the "(optional)" suffix and the "Use per-source variables from the card below" placeholder only appear when direct edges exist. A forwarded-only agent has `{{CUE_FORWARDED_*}}` chips in the card below but no hint pointing the user at them.

Consider deriving a combined "hasAnyUpstream" flag (e.g. from the same `computeTransitiveUpstream` result, or by lifting that check to the parent) and using it for the layout reservation and the placeholder hint, so the three surfaces stay in sync.

Also applies to: 250-275
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx` at line 190, The layout and hints should use a combined upstream flag instead of hasIncomingAgentEdges so forwarded-only upstreams are treated the same; derive hasAnyUpstream (e.g. from the existing computeTransitiveUpstream result or lift that computation up) to include either direct OR forwarded upstream sources, then replace usages of hasIncomingAgentEdges in the AgentConfigPanel: use hasAnyUpstream for the style minHeight decision (currently setting minHeight: hasIncomingAgentEdges ? 100 : 0) and for rendering the "(optional)" suffix and the "Use per-source variables from the card below" placeholder so the UpstreamSourcesPanel, layout reservation, and input prompt hint stay in sync with UpstreamSourcesPanel's visibility.
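A minimal sketch of the combined flag the review suggests. The `UpstreamSource` shape, its `forwarded` field, and `deriveUpstreamFlags` are assumptions made for illustration, not the real `computeTransitiveUpstream` types:

```typescript
interface UpstreamSource {
  nodeId: string;
  forwarded: boolean; // reached via a forward_output_from relay
}

// One flag drives all three surfaces: panel visibility, the prompts-row
// minHeight reservation, and the "per-source variables" placeholder hint.
function deriveUpstreamFlags(sources: UpstreamSource[]) {
  const hasDirect = sources.some((s) => !s.forwarded);
  const hasAnyUpstream = sources.length > 0;
  return {
    hasAnyUpstream,
    promptsRowMinHeight: hasAnyUpstream ? 100 : 0,
    showPerSourceHint: hasAnyUpstream,
    showOptionalSuffix: hasDirect, // auto-append applies to direct edges only
  };
}
```

Keeping the "(optional)" suffix tied to direct edges matches the auto-append semantic, while the layout and hint follow the broader flag.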
🧹 Nitpick comments (5)
src/main/cue/cue-output-filter.ts (1)
64-96: Known name-only asymmetry is now documented — acceptable interim.

The JSDoc at Lines 71-80 explicitly calls out that `mergeUpstreamForwarded` only matches by `sessionName` (unlike `buildFilteredOutputs`), explains the wire-format reason, and defers the proper fix. Given the UI generates YAML using names, this is a reasonable pre-production tradeoff.

One optional hardening: if `forwardSet` is set and a name in `upstreamForwarded` is filtered out, consider whether any entry already copied from `forwardedOutputs` via `{ ...forwardedOutputs }` (Line 89) should also be re-filtered for consistency. Currently `buildFilteredOutputs` already filters `forwardedOutputs` by the same `forwardSet`, so entries in the base map should already be allow-listed — so this is a no-op in practice, but worth a brief assertion in tests or a one-line comment noting the invariant relied upon here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/cue/cue-output-filter.ts` around lines 64 - 96, Add a one-line comment or an assertion in mergeUpstreamForwarded (around the forwardSet/merged handling) documenting/ensuring the invariant that forwardedOutputs has already been filtered by buildFilteredOutputs and therefore any entries copied into merged from forwardedOutputs are already allow-listed; reference the symbols mergeUpstreamForwarded, forwardSet, merged, forwardedOutputs and buildFilteredOutputs to locate where to add the comment/assertion.

src/__tests__/main/cue/cue-yaml-error-paths.test.ts (2)
99-106: Idempotent cleanup test doesn't assert on `watcher.close()` behavior.

`chokidar.close()` returns a Promise; calling it twice typically resolves without throwing synchronously, so this test passes trivially. It would be stronger to also assert no unhandled rejection is produced (e.g., `await` the second call if the cleanup is refactored to return a Promise, or attach an `unhandledrejection` guard). Non-blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts` around lines 99 - 106, The test currently calls cleanup() twice but doesn't assert on the async behavior of chokidar.close(); update the test to ensure no unhandled rejection occurs by either (A) refactoring watchCueConfigFile/cleanup to return a Promise that resolves after watcher.close() (reference watchCueConfigFile and its returned cleanup) and then awaiting the second call in the test (await cleanup(); await expect(cleanup()).resolves.not.toThrow()), or (B) without changing the API, attach a temporary unhandledrejection listener in the test around the second call (add/remove a globalThis.addEventListener('unhandledrejection') guard) and fail the test if any rejection is observed; in both cases assert that calling cleanup twice does not produce an unhandled rejection from chokidar.close().
80-88: Potential flakiness: fixed 50ms wait for chokidar watcher readiness.

Relying on a 50ms sleep before writing the file can be flaky on slow CI. Prefer awaiting chokidar's `ready` event to make the test deterministic.

Proposed refactor
```diff
-  it('cleanup stops further onChange invocations even if debounce would fire', async () => {
+  it('cleanup stops further onChange invocations even if debounce would fire', async () => {
     // ...
     const onChange = vi.fn();
-    const cleanup = watchCueConfigFile(projectRoot, onChange);
-
-    // Give chokidar a moment to register the watcher paths.
-    await new Promise((resolve) => setTimeout(resolve, 50));
+    // Pre-create the watched path so chokidar attaches deterministically,
+    // then start the watcher. (Alternatively, expose the chokidar instance
+    // so the test can await its 'ready' event.)
+    const maestroDir = path.join(projectRoot, '.maestro');
+    fs.mkdirSync(maestroDir, { recursive: true });
+    const cuePath = path.join(maestroDir, 'cue.yaml');
+    fs.writeFileSync(cuePath, 'subscriptions: []', 'utf-8');
+    const cleanup = watchCueConfigFile(projectRoot, onChange);
+    await new Promise((resolve) => setTimeout(resolve, 100));

-    const maestroDir = path.join(projectRoot, '.maestro');
-    fs.mkdirSync(maestroDir, { recursive: true });
-    fs.writeFileSync(path.join(maestroDir, 'cue.yaml'), 'subscriptions: []', 'utf-8');
+    fs.writeFileSync(cuePath, 'subscriptions:\n  - name: changed', 'utf-8');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts` around lines 80 - 88, The test uses a fixed 50ms sleep to wait for chokidar to register watchers which can cause flakiness; replace the setTimeout wait with awaiting the chokidar watcher's 'ready' event (e.g., await new Promise(resolve => watcher.once('ready', resolve'))) before creating .maestro and calling fs.writeFileSync; update the test around the code that currently does setTimeout, ensure you reference the existing watcher instance used to watch projectRoot so the write happens only after the watcher's ready event has fired.src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx (1)
105-125: Optional: fold `owningPipeline` and `agentPipelines` into a single pass.

Both memos walk `pipelines` independently; they could share one traversal (find `owningPipeline` by `node.id`, then filter `agentPipelines` by `sessionId` in the same `useMemo`, or derive `agentPipelines` without re-scanning once `owningPipeline` is known for the common single-pipeline case). Minor — only worth it if `pipelines` ever gets large.

Also, nit: the rationale comment on lines 105–108 attributes the memoization to `UpstreamSourcesPanel`'s `pipeline` prop stability, but `agentPipelines` isn't passed to that panel — only `owningPipeline` is. The comment would fit better above `owningPipeline`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx` around lines 105 - 125, Combine the two passes over pipelines into one useMemo that computes both owningPipeline (find by node.id) and agentPipelines (filter by data.sessionId) in a single traversal to avoid double-scanning; update the comment so the UpstreamSourcesPanel rationale sits above owningPipeline only. Locate the existing useMemo calls for owningPipeline and agentPipelines and replace them with a single useMemo returning { owningPipeline, agentPipelines } (or separate consts destructured from that result), using pipelines, node.id and data.sessionId as dependencies.

src/main/cue/cue-run-manager.ts (1)
312-334: Redundant inner try/catch around `safeUpdateCueEventStatus`.

`safeUpdateCueEventStatus` is already the non-throwing wrapper — it internally catches DB errors, logs a warning, and calls `captureException` (see cue-db.ts). The `try { safeUpdateCueEventStatus(...) } catch (finalizeErr) { captureException(...) }` here is unreachable and just duplicates Sentry context. Either call `safeUpdateCueEventStatus` bare, or call `updateCueEventStatus` directly if you want the extra `cue:finalizeOutputRunStatus` context.

Proposed simplification

```diff
 } finally {
-  try {
-    safeUpdateCueEventStatus(outputRunId, outputStatus);
-  } catch (finalizeErr) {
-    captureException(finalizeErr, {
-      operation: 'cue:finalizeOutputRunStatus',
-      outputRunId,
-      outputStatus,
-    });
-  }
+  safeUpdateCueEventStatus(outputRunId, outputStatus);
 }
```

Or, if the extra operation tag is valuable, swap to `updateCueEventStatus` inside the try/catch so the catch can actually fire.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/cue/cue-run-manager.ts` around lines 312 - 334, The inner try/catch around safeUpdateCueEventStatus is redundant because safeUpdateCueEventStatus already handles errors and calls captureException; remove the inner try/catch and call safeUpdateCueEventStatus(outputRunId, outputStatus) directly. If you need the extra Sentry operation tag 'cue:finalizeOutputRunStatus', replace the call with updateCueEventStatus(outputRunId, outputStatus) inside a try/catch that calls captureException(finalizeErr, { operation: 'cue:finalizeOutputRunStatus', outputRunId, outputStatus }).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx`:
- Line 190: The layout and hints should use a combined upstream flag instead of
hasIncomingAgentEdges so forwarded-only upstreams are treated the same; derive
hasAnyUpstream (e.g. from the existing computeTransitiveUpstream result or lift
that computation up) to include either direct OR forwarded upstream sources,
then replace usages of hasIncomingAgentEdges in the AgentConfigPanel: use
hasAnyUpstream for the style minHeight decision (currently setting minHeight:
hasIncomingAgentEdges ? 100 : 0) and for rendering the "(optional)" suffix and
the "Use per-source variables from the card below" placeholder so the
UpstreamSourcesPanel, layout reservation, and input prompt hint stay in sync
with UpstreamSourcesPanel's visibility.
---
Nitpick comments:
In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts`:
- Around line 99-106: The test currently calls cleanup() twice but doesn't
assert on the async behavior of chokidar.close(); update the test to ensure no
unhandled rejection occurs by either (A) refactoring watchCueConfigFile/cleanup
to return a Promise that resolves after watcher.close() (reference
watchCueConfigFile and its returned cleanup) and then awaiting the second call
in the test (await cleanup(); await expect(cleanup()).resolves.not.toThrow()),
or (B) without changing the API, attach a temporary unhandledrejection listener
in the test around the second call (add/remove a
globalThis.addEventListener('unhandledrejection') guard) and fail the test if
any rejection is observed; in both cases assert that calling cleanup twice does
not produce an unhandled rejection from chokidar.close().
- Around line 80-88: The test uses a fixed 50ms sleep to wait for chokidar to
register watchers which can cause flakiness; replace the setTimeout wait with
awaiting the chokidar watcher's 'ready' event (e.g., await new Promise(resolve
=> watcher.once('ready', resolve'))) before creating .maestro and calling
fs.writeFileSync; update the test around the code that currently does
setTimeout, ensure you reference the existing watcher instance used to watch
projectRoot so the write happens only after the watcher's ready event has fired.
In `@src/main/cue/cue-output-filter.ts`:
- Around line 64-96: Add a one-line comment or an assertion in
mergeUpstreamForwarded (around the forwardSet/merged handling)
documenting/ensuring the invariant that forwardedOutputs has already been
filtered by buildFilteredOutputs and therefore any entries copied into merged
from forwardedOutputs are already allow-listed; reference the symbols
mergeUpstreamForwarded, forwardSet, merged, forwardedOutputs and
buildFilteredOutputs to locate where to add the comment/assertion.
In `@src/main/cue/cue-run-manager.ts`:
- Around line 312-334: The inner try/catch around safeUpdateCueEventStatus is
redundant because safeUpdateCueEventStatus already handles errors and calls
captureException; remove the inner try/catch and call
safeUpdateCueEventStatus(outputRunId, outputStatus) directly. If you need the
extra Sentry operation tag 'cue:finalizeOutputRunStatus', replace the call with
updateCueEventStatus(outputRunId, outputStatus) inside a try/catch that calls
captureException(finalizeErr, { operation: 'cue:finalizeOutputRunStatus',
outputRunId, outputStatus }).
In `@src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx`:
- Around line 105-125: Combine the two passes over pipelines into one useMemo
that computes both owningPipeline (find by node.id) and agentPipelines (filter
by data.sessionId) in a single traversal to avoid double-scanning; update the
comment so the UpstreamSourcesPanel rationale sits above owningPipeline only.
Locate the existing useMemo calls for owningPipeline and agentPipelines and
replace them with a single useMemo returning { owningPipeline, agentPipelines }
(or separate consts destructured from that result), using pipelines, node.id and
data.sessionId as dependencies.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0fd34b6e-9285-4b4a-bca6-299405dd1670
📒 Files selected for processing (12)
- src/__tests__/main/cue/cue-ipc-handlers.test.ts
- src/__tests__/main/cue/cue-run-manager.test.ts
- src/__tests__/main/cue/cue-yaml-error-paths.test.ts
- src/__tests__/main/cue/pipeline-layout-store.test.ts
- src/__tests__/shared/cue-path-utils.test.ts
- src/main/cue/cue-output-filter.ts
- src/main/cue/cue-run-manager.ts
- src/main/cue/pipeline-layout-store.ts
- src/main/ipc/handlers/cue.ts
- src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx
- src/renderer/hooks/useCueAutoDiscovery.ts
- src/shared/cue-path-utils.ts
✅ Files skipped from review due to trivial changes (1)
- src/__tests__/main/cue/pipeline-layout-store.test.ts
🚧 Files skipped from review as they are similar to previous changes (7)
- src/main/ipc/handlers/cue.ts
- src/renderer/hooks/useCueAutoDiscovery.ts
- src/__tests__/main/cue/cue-ipc-handlers.test.ts
- src/main/cue/pipeline-layout-store.ts
- src/shared/cue-path-utils.ts
- src/__tests__/main/cue/cue-run-manager.test.ts
- src/__tests__/shared/cue-path-utils.test.ts
Smaller batch of follow-up fixes from the second review round.

AgentConfigPanel
- Combine the two separate `useMemo` scans of `pipelines` into a single pass that returns both `owningPipeline` and `agentPipelines`, halving the work per render.
- Derive `hasAnyUpstream` (direct OR forwarded) from a memoized `computeTransitiveUpstream` result so the layout `minHeight` and the input-prompt "use per-source variables" hint stay in sync with UpstreamSourcesPanel's visibility. Previously an agent with only forwarded upstreams reserved `minHeight: 0` and showed the "no upstream" placeholder, even though the upstream card was visible below. The `(optional)` suffix stays tied to direct-with-includeUpstreamOutput — that semantic (auto-append) only applies to direct edges.

cue-run-manager
- Replace the `safeUpdateCueEventStatus` + inner try/catch in the output-prompt finalize path with `updateCueEventStatus` + our own catch that tags the Sentry `operation` as `cue:finalizeOutputRunStatus`. The prior code double-handled errors (the `safe*` wrapper already reports to Sentry as `safeUpdateCueEventStatus`); using the raw function here keeps a single, specific operation tag so this failure mode is distinguishable in Sentry from normal status updates.

cue-output-filter
- Document the invariant in `mergeUpstreamForwarded`: entries copied from `forwardedOutputs` into `merged` are already allow-listed by `buildFilteredOutputs`, so only the upstream entries need re-filtering.

cue-config-repository
- Add an opt-in `onReady` hook to `watchCueConfigFile` that fires when chokidar completes its initial scan. Production callers ignore it; tests use it to eliminate the 50ms sleep that raced with chokidar's watcher registration on slow CI runners (which made the torn-flag assertion trivially pass when no change event ever fired).
Tests
- watcher tests: rewrite the debounce-race assertion to await the `onReady` hook instead of sleeping, so the subsequent write deterministically schedules a debounced callback before cleanup.
- cleanup-idempotence test now also attaches an `unhandledRejection` listener around the second call and fails on any async rejection from chokidar.close() — catching failures the prior sync-only `not.toThrow()` assertion would miss.

Validated: lint clean, full suite 25,490 passing, 0 regressions.
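The torn-flag plus `onReady` pattern described above, sketched against an abstract watcher interface. Chokidar really does emit `ready` after its initial scan and `close()` really returns a Promise, but `WatcherLike` and `watchWithTornFlag` are illustrative shapes, not the shipped `watchCueConfigFile`:

```typescript
type WatcherLike = {
  on(event: 'ready' | 'change', cb: () => void): void;
  close(): Promise<void>;
};

// Returns an idempotent async cleanup. The torn flag is set before close()
// so change events racing the teardown are dropped rather than refreshing a
// torn-down session; onReady lets tests await the initial scan instead of
// sleeping for a fixed interval.
function watchWithTornFlag(
  watcher: WatcherLike,
  onChange: () => void,
  onReady?: () => void
): () => Promise<void> {
  let torn = false;
  if (onReady) watcher.on('ready', onReady);
  watcher.on('change', () => {
    if (!torn) onChange();
  });
  return async () => {
    if (torn) return; // second cleanup call is a no-op
    torn = true;
    await watcher.close();
  };
}
```

A test can then drive a fake watcher: fire `change` once before cleanup (delivered) and once after (dropped), and assert `close()` ran exactly once.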
Actionable comments posted: 2
🧹 Nitpick comments (4)
src/__tests__/main/cue/cue-yaml-error-paths.test.ts (2)
176-179: Strengthen the negative-gap assertion.

The test only asserts a log was emitted, but the intent (per the comment at Line 131-132) is that reconciliation is skipped on a backward clock. Also assert that `reconcileSession` (and any side effects like heartbeat re-recording) was not invoked, so a future regression that logs-and-still-reconciles is caught.

```diff
 expect(onLog).toHaveBeenCalledWith('cue', expect.stringContaining('Clock moved backward'));
+// Intent is skip-on-backward-clock, not log-and-continue.
+expect(reconcileSession).not.toHaveBeenCalled();
```

(where `reconcileSession` is hoisted as a `vi.fn()` rather than inlined at Line 166.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts` around lines 176 - 179, The test should also assert that no reconciliation or heartbeat side-effects occur when the clock moves backward: after calling service.detectSleepAndReconcile(), add negative assertions that the hoisted mock reconcileSession (reconcileSession) was not called and any heartbeat re-recording mocks were not invoked, in addition to the existing onLog check for 'Clock moved backward', so regressions that log but still reconcile are caught.
140-146: Prefer `vi.spyOn(Date, 'now')` over direct reassignment.

Reassigning `Date.now` works but bypasses Vitest's mock tracking and leaves cleanup manual. `vi.spyOn(Date, 'now').mockReturnValue(1000)` with `vi.restoreAllMocks()` in `afterEach` is idiomatic and safer against an early test failure between save and restore.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts` around lines 140 - 146, Replace the manual save/restore of Date.now (originalDateNow variable and direct Date.now reassignment in beforeEach/afterEach) with Vitest's spy API: use vi.spyOn(Date, 'now').mockReturnValue(...) where the tests set the fake time, and call vi.restoreAllMocks() in afterEach to restore Date.now; remove the originalDateNow saving/restoring logic and vi.resetModules() if not needed for other mocks.

src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx (1)
140-147: Minor: `computeTransitiveUpstream` is computed twice per render.

`forwardedUpstream` here calls `computeTransitiveUpstream(owningPipeline, node.id)`, and `UpstreamSourcesPanel` calls it again internally on the same `(pipeline, targetNodeId)` pair. The function is pure and graphs are small, so this isn't a correctness or meaningful perf concern — but if you want a single source of truth you could lift the full result here and pass it (or a memoized `forwardedSources`) into the panel, letting the panel keep its internal fallback for the storybook/no-pipeline case. Safe to defer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx` around lines 140 - 147, You are calling computeTransitiveUpstream twice for the same (owningPipeline, node.id) — once here to build forwardedUpstream and again inside UpstreamSourcesPanel; lift the full transitive result into this component (e.g., compute once as forwardedFull = useMemo(() => owningPipeline ? computeTransitiveUpstream(owningPipeline, node.id) : [], [owningPipeline, node.id])) and then pass a filtered/memoized version (forwardedUpstream or forwardedSources) into UpstreamSourcesPanel so the panel uses the passed-in data instead of recomputing; keep the panel's internal fallback for the storybook/no-pipeline case.

src/main/cue/cue-run-manager.ts (1)
286-307: Minor: hoist the shared slice to avoid computing it twice.
`sliceHeadByChars(result.stdout, SOURCE_OUTPUT_MAX_CHARS)` is evaluated at both line 288 and line 307 with identical inputs. Both produce the same string; caching once keeps them in sync if the cap ever changes and shaves a second scan over `result.stdout`.
Proposed refactor
+ const truncatedSourceOutput = sliceHeadByChars(
+   result.stdout,
+   SOURCE_OUTPUT_MAX_CHARS
+ );
  const outputRunId = crypto.randomUUID();
  const outputEvent: CueEvent = {
    ...event,
    id: crypto.randomUUID(),
    payload: {
      ...event.payload,
-     sourceOutput: sliceHeadByChars(result.stdout, SOURCE_OUTPUT_MAX_CHARS),
+     sourceOutput: truncatedSourceOutput,
      outputPromptPhase: true,
    },
  };
  ...
- const contextPrompt = `${outputPrompt}\n\n---\n\nContext from completed task:\n${sliceHeadByChars(result.stdout, SOURCE_OUTPUT_MAX_CHARS)}`;
+ const contextPrompt = `${outputPrompt}\n\n---\n\nContext from completed task:\n${truncatedSourceOutput}`;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/cue/cue-run-manager.ts` around lines 286 - 307, Hoist the repeated call to sliceHeadByChars(result.stdout, SOURCE_OUTPUT_MAX_CHARS) into a single local variable (e.g., const slicedOutput = sliceHeadByChars(result.stdout, SOURCE_OUTPUT_MAX_CHARS)) before constructing outputEvent and contextPrompt, then use slicedOutput when building outputEvent.payload (the JSON.stringify payload passed to safeRecordCueEvent) and when composing contextPrompt; keep all other logic (setting run.processRunId, outputRunId, etc.) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts`:
- Around line 169-179: The test calls service.init() which starts a recurring
heartbeat timer but never stops it; to prevent leaking the interval add a
deterministic shutdown of the service after the test. Modify the test to call
service.shutdown() in a teardown (e.g., an afterEach that calls
service.shutdown()) or wrap the invoke/assert in a try/finally and call
service.shutdown() in the finally block so the heartbeat interval started by
service.init() is always cleared.
In `@src/main/cue/cue-run-manager.ts`:
- Around line 343-345: The early-return that checks activeRuns.has(runId) causes
the parent run row to remain `running` when the engine is reset mid-output;
mimic the earlier fix at lines 247–269 by capturing whether the parent was
previously active before returning and ensure the outer cleanup still executes
updateCueEventStatus(runId, result.status) for the parent. Concretely, in the
function handling runId, record a flag like parentWasActive =
activeRuns.has(runId) before the early return, avoid skipping outer finally
logic when parentWasActive was true (so the outer finally still performs cleanup
and calls updateCueEventStatus(runId, result.status)), and keep the existing
inner try/finally that finalizes outputRunId unchanged; reference activeRuns,
runId, outputRunId, the inner try/finally (314–341) and the outer finally (427)
plus updateCueEventStatus(runId, result.status) when applying this change.
---
Nitpick comments:
In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts`:
- Around line 176-179: The test should also assert that no reconciliation or
heartbeat side-effects occur when the clock moves backward: after calling
service.detectSleepAndReconcile(), add negative assertions that the hoisted mock
reconcileSession (reconcileSession) was not called and any heartbeat
re-recording mocks were not invoked, in addition to the existing onLog check for
'Clock moved backward', so regressions that log but still reconcile are caught.
- Around line 140-146: Replace the manual save/restore of Date.now
(originalDateNow variable and direct Date.now reassignment in
beforeEach/afterEach) with Vitest's spy API: use vi.spyOn(Date,
'now').mockReturnValue(...) where the tests set the fake time, and call
vi.restoreAllMocks() in afterEach to restore Date.now; remove the
originalDateNow saving/restoring logic and vi.resetModules() if not needed for
other mocks.
In `@src/main/cue/cue-run-manager.ts`:
- Around line 286-307: Hoist the repeated call to
sliceHeadByChars(result.stdout, SOURCE_OUTPUT_MAX_CHARS) into a single local
variable (e.g., const slicedOutput = sliceHeadByChars(result.stdout,
SOURCE_OUTPUT_MAX_CHARS)) before constructing outputEvent and contextPrompt,
then use slicedOutput when building outputEvent.payload (the JSON.stringify
payload passed to safeRecordCueEvent) and when composing contextPrompt; keep all
other logic (setting run.processRunId, outputRunId, etc.) unchanged.
In `@src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx`:
- Around line 140-147: You are calling computeTransitiveUpstream twice for the
same (owningPipeline, node.id) — once here to build forwardedUpstream and again
inside UpstreamSourcesPanel; lift the full transitive result into this component
(e.g., compute once as forwardedFull = useMemo(() => owningPipeline ?
computeTransitiveUpstream(owningPipeline, node.id) : [], [owningPipeline,
node.id])) and then pass a filtered/memoized version (forwardedUpstream or
forwardedSources) into UpstreamSourcesPanel so the panel uses the passed-in data
instead of recomputing; keep the panel’s internal fallback for the
storybook/no-pipeline case.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: aa0eefa9-7de6-44dd-a9ff-bcdc3683137f
📒 Files selected for processing (5)
- src/__tests__/main/cue/cue-yaml-error-paths.test.ts
- src/main/cue/config/cue-config-repository.ts
- src/main/cue/cue-output-filter.ts
- src/main/cue/cue-run-manager.ts
- src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- src/main/cue/config/cue-config-repository.ts
// Init to open the DB path.
service.init();

// Heartbeat is 10 seconds in the future — gapMs will be negative.
Date.now = vi.fn(() => 1000);
mockGetLastHeartbeat.mockReturnValue(11000);

service.detectSleepAndReconcile();

expect(onLog).toHaveBeenCalledWith('cue', expect.stringContaining('Clock moved backward'));
});
Call service.shutdown() to avoid leaking the heartbeat interval.
service.init() starts a recurring heartbeat timer (every 5s per the recovery service). The test never calls shutdown(), so the interval outlives the test and can fire against a closed DB mock in later tests, producing flaky noise. Add a teardown — ideally in afterEach or a try/finally around the assertion.
Proposed fix
service.init();
// Heartbeat is 10 seconds in the future — gapMs will be negative.
Date.now = vi.fn(() => 1000);
mockGetLastHeartbeat.mockReturnValue(11000);
- service.detectSleepAndReconcile();
-
- expect(onLog).toHaveBeenCalledWith('cue', expect.stringContaining('Clock moved backward'));
+ try {
+ service.detectSleepAndReconcile();
+ expect(onLog).toHaveBeenCalledWith('cue', expect.stringContaining('Clock moved backward'));
+ } finally {
+ service.shutdown();
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Init to open the DB path.
service.init();
// Heartbeat is 10 seconds in the future — gapMs will be negative.
Date.now = vi.fn(() => 1000);
mockGetLastHeartbeat.mockReturnValue(11000);
try {
  service.detectSleepAndReconcile();
  expect(onLog).toHaveBeenCalledWith('cue', expect.stringContaining('Clock moved backward'));
} finally {
  service.shutdown();
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts` around lines 169 - 179,
The test calls service.init() which starts a recurring heartbeat timer but never
stops it; to prevent leaking the interval add a deterministic shutdown of the
service after the test. Modify the test to call service.shutdown() in a teardown
(e.g., an afterEach that calls service.shutdown()) or wrap the invoke/assert in
a try/finally and call service.shutdown() in the finally block so the heartbeat
interval started by service.init() is always cleared.
…se reset
Engine correctness
- cue-run-manager: the second early-return in the output-prompt path
(when activeRuns no longer has the runId after the inner finally
finalized the OUTPUT row) was leaving the PARENT runId stuck at
`running` in the DB. Mirrors the earlier fix for the pre-output-
prompt case: now calls safeUpdateCueEventStatus on the parent with
result.status and emits the structured runFinished payload so the
renderer observes the transition identically to a normal completion.
Micro-cleanups
- Hoist the duplicate sliceHeadByChars(result.stdout, SOURCE_OUTPUT_MAX_CHARS)
call into a single `slicedOutput` local used by both the recorded
payload and the context prompt (was computed twice with identical
args each output-prompt run).
- AgentConfigPanel now passes the pre-computed `forwardedUpstream`
list into UpstreamSourcesPanel so computeTransitiveUpstream runs
once per render instead of twice. UpstreamSourcesPanel keeps its
internal pipeline+targetNodeId fallback so storybook/test callers
without the list still render correctly.
Tests
- cue-yaml-error-paths:
- Add negative assertions to the clock-moved-backward test so a
regression that logs the warning but still reconciles or rewrites
the heartbeat is caught (reconcileSession + mockRecordHeartbeat
must NOT be called).
- Replace manual Date.now save/restore with vi.spyOn(Date, 'now')
+ vi.restoreAllMocks() in afterEach. Drops the bookkeeping
variable and aligns with the Vitest pattern used elsewhere.
- cue-run-manager: new regression gate "reset during output-prompt
phase: finalizes parent run status" exercising the new branch.
Deliberately skipped: one review finding claimed service.init() leaks
a heartbeat interval. Verified against cue-recovery-service.ts — init()
only calls initCueDb + pruneCueEvents, no timer is started there. The
heartbeat timer lives in cue-heartbeat.ts (separate service the test
doesn't touch). No leak to fix; the test's mocks make shutdown() a
no-op anyway.
Validated: lint clean, full suite 25,491 passing, +1 new test, 0 regressions.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/main/cue/cue-run-manager.ts (1)
446-449: ⚠️ Potential issue | 🟡 Minor: Catch block skips DB finalization when `onCueRun` throws after engine reset.
The explicit `!activeRuns.has(runId)` branches at L247–269 and L348–370 correctly finalize the parent DB row when `onCueRun` returns a failed result after reset, but the symmetric case where `onCueRun` throws hits the early return at L447 and skips the outer `finally` cleanup (gated on `activeRuns.has(runId)` at L455), leaving the parent `cue_events` row stuck at `running`.
This occurs when:
- Main `onCueRun` rejects (e.g., AbortError) while a concurrent `reset()` has already cleared `activeRuns` → L447 returns → outer finally skipped.
- Output-phase `onCueRun` rejects after reset → inner finally finalizes `outputRunId`, exception propagates to this catch → same leak for the parent.
`stopRun()` is safe because it writes `'stopped'` itself; `reset()` does not touch the DB, so the error-path guard is load-bearing. Add the same DB finalization logic used at L247–252 and L357–362 to handle the throw case:
Proposed fix
  } catch (error) {
    if (!activeRuns.has(runId)) {
+     result.status = 'failed';
+     result.stderr = error instanceof Error ? error.message : String(error);
+     safeUpdateCueEventStatus(runId, result.status);
+     deps.onLog(
+       'cue',
+       `[CUE] Run "${subscriptionName}" threw after engine stop — status recorded (${result.status}), error discarded`,
+       {
+         type: 'runFinished',
+         runId,
+         sessionId,
+         subscriptionName,
+         status: result.status,
+       }
+     );
      return;
    }
    result.status = 'failed';
    result.stderr = error instanceof Error ? error.message : String(error);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/cue/cue-run-manager.ts` around lines 446 - 449, The catch(error) path currently returns early when !activeRuns.has(runId), which skips the outer cleanup and leaves the parent cue_events row in 'running'; instead, when catching an exception from onCueRun (including AbortError after reset), reuse the same DB finalization logic used in the earlier branches that handle a failed result: update the parent cue_events row for runId (set status to failed/stopped and write the error/ended timestamp) before returning. Locate the catch block that checks activeRuns.has(runId) and mirror the update/cleanup code found in the earlier branches around the explicit onCueRun failure handling (the blocks that finalize the parent DB row after reset) so the parent row is always finalized even when onCueRun throws.
♻️ Duplicate comments (1)
src/__tests__/main/cue/cue-yaml-error-paths.test.ts (1)
172-186: ⚠️ Potential issue | 🟡 Minor: Heartbeat interval still leaks — call `service.shutdown()` in teardown.
`service.init()` starts a recurring heartbeat timer that is never cleared by this test. It can fire later against the mocked `cue-db` (or after `vi.resetModules()` swaps it out), causing flaky noise in adjacent tests. Wrap the assertion in `try/finally` (or add an `afterEach`) that calls `service.shutdown()`.
Proposed fix
- service.init();
-
- // Heartbeat is 10 seconds in the future — gapMs will be negative.
- vi.spyOn(Date, 'now').mockReturnValue(1000);
- mockGetLastHeartbeat.mockReturnValue(11000);
-
- service.detectSleepAndReconcile();
-
- expect(onLog).toHaveBeenCalledWith('cue', expect.stringContaining('Clock moved backward'));
- expect(reconcileSession).not.toHaveBeenCalled();
- expect(mockRecordHeartbeat).not.toHaveBeenCalled();
+ service.init();
+ try {
+   vi.spyOn(Date, 'now').mockReturnValue(1000);
+   mockGetLastHeartbeat.mockReturnValue(11000);
+
+   service.detectSleepAndReconcile();
+
+   expect(onLog).toHaveBeenCalledWith('cue', expect.stringContaining('Clock moved backward'));
+   expect(reconcileSession).not.toHaveBeenCalled();
+   expect(mockRecordHeartbeat).not.toHaveBeenCalled();
+ } finally {
+   service.shutdown();
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts` around lines 172 - 186, The test calls service.init() which starts a recurring heartbeat timer and never clears it, causing flaky cross-test interference; update the test to ensure the timer is stopped by calling service.shutdown() in a finally block or an afterEach teardown so the heartbeat can't fire after assertions — locate the test using service.init(), the call to service.detectSleepAndReconcile(), and add a try/finally (or afterEach) that calls service.shutdown() to guarantee cleanup (ensuring reconcileSession and mockRecordHeartbeat assertions run before shutdown in the finally).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/main/cue/cue-run-manager.ts`:
- Around line 446-449: The catch(error) path currently returns early when
!activeRuns.has(runId), which skips the outer cleanup and leaves the parent
cue_events row in 'running'; instead, when catching an exception from onCueRun
(including AbortError after reset), reuse the same DB finalization logic used in
the earlier branches that handle a failed result: update the parent cue_events
row for runId (set status to failed/stopped and write the error/ended timestamp)
before returning. Locate the catch block that checks activeRuns.has(runId) and
mirror the update/cleanup code found in the earlier branches around the explicit
onCueRun failure handling (the blocks that finalize the parent DB row after
reset) so the parent row is always finalized even when onCueRun throws.
---
Duplicate comments:
In `@src/__tests__/main/cue/cue-yaml-error-paths.test.ts`:
- Around line 172-186: The test calls service.init() which starts a recurring
heartbeat timer and never clears it, causing flaky cross-test interference;
update the test to ensure the timer is stopped by calling service.shutdown() in
a finally block or an afterEach teardown so the heartbeat can't fire after
assertions — locate the test using service.init(), the call to
service.detectSleepAndReconcile(), and add a try/finally (or afterEach) that
calls service.shutdown() to guarantee cleanup (ensuring reconcileSession and
mockRecordHeartbeat assertions run before shutdown in the finally).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 24327905-d794-410f-b4dc-551f6e782a60
📒 Files selected for processing (5)
- src/__tests__/main/cue/cue-run-manager.test.ts
- src/__tests__/main/cue/cue-yaml-error-paths.test.ts
- src/main/cue/cue-run-manager.ts
- src/renderer/components/CuePipelineEditor/panels/AgentConfigPanel.tsx
- src/renderer/components/CuePipelineEditor/panels/UpstreamSourcesPanel.tsx
✅ Files skipped from review due to trivial changes (1)
- src/renderer/components/CuePipelineEditor/panels/UpstreamSourcesPanel.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- src/__tests__/main/cue/cue-run-manager.test.ts
Summary
Two-part pre-release hardening of the Maestro Cue feature:
Forwarded sources on downstream agents — the Upstream Sources panel now lists direct AND transitively-forwarded inputs in one unified view, so users can see every variable flowing into an agent. Also fixes a latent bug where the single-source completion path silently ignored `include_output_from`/`forward_output_from` (filters only applied to fan-in, so 1-source chain toggles no-op'd).
Production hardening pass — covers the failure modes that would have surfaced in user hands but don't in dev: feature-flag toggle races, engine/DB state when runs complete after `stop()`, UTF-8 corruption when forwarding agent output, per-project layout memory, and prompt-file path-traversal hardening.
What changed
Forwarded sources (commit df96e989)
- `UpstreamSourcesPanel` extracted from `AgentConfigPanel`; renders direct and forwarded rows with per-source template-variable chips.
- `transitiveUpstream` pure graph walk honors `forwardOutput` flags, dedupes shared sources, terminates on cycles.
- `cue-output-filter` consolidates include/forward semantics; both fan-in and 1-source completion paths now route through `buildFilteredOutputs` + `mergeUpstreamForwarded`.
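The forwarded-source walk can be sketched roughly as below. This is a hedged illustration only: the `Edge` shape, the edge-list pipeline model, and the exact `forwardOutput` semantics are assumptions, not the real module's API.

```typescript
// Illustrative edge shape; the real pipeline model may differ.
interface Edge {
  from: string;
  to: string;
  forwardOutput?: boolean;
}

// Walk upstream from `target`: direct sources are always included; deeper
// sources are included only when each hop on the way forwards its output.
// `seen` both dedupes shared sources and guarantees termination on cycles.
export function transitiveUpstream(edges: Edge[], target: string): string[] {
  const seen = new Set<string>();
  const result: string[] = [];
  const visit = (node: string, viaForward: boolean): void => {
    for (const edge of edges) {
      if (edge.to !== node) continue;
      // Beyond the first hop, only follow edges flagged to forward output.
      if (viaForward && !edge.forwardOutput) continue;
      if (seen.has(edge.from)) continue;
      seen.add(edge.from);
      result.push(edge.from);
      visit(edge.from, true);
    }
  };
  visit(target, false);
  return result;
}
```

With edges A→R (forwarding), B→R (not forwarding), R→T, the walk from T yields R and A but not B, and revisiting a seen node is skipped, so cyclic graphs terminate.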
Feature flag & toggle lifecycle (commit bbca8aa3)
- Add `maestroCue` (plus `symphony`, `usageStats`) to `encoreFeatures` default so fresh installs don't spuriously call `disable()` on first render.
- Serialize `enable`/`disable` IPC on a single Promise chain — rapid ON→OFF→ON toggles now apply in user-triggered order, not IPC-response order.
- Surface toggle failures as user-visible toasts (was `console.error` only).
- Emit `engineStarted`/`engineStopped` activity-update events so the renderer refreshes queue counters immediately instead of waiting for the 10s poll.
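The Promise-chain serialization can be sketched as follows. Names here (`makeSerializedToggle`, the `send` callback) are hypothetical stand-ins for the real IPC wiring:

```typescript
type ToggleFn = (enabled: boolean) => Promise<void>;

// Queue every enable/disable request behind the previous one so requests
// apply in the order the user clicked, even if responses arrive out of order.
export function makeSerializedToggle(send: ToggleFn): ToggleFn {
  let chain: Promise<void> = Promise.resolve();
  return (enabled: boolean) => {
    // `next` is what the caller awaits (so it still observes its own
    // failure); the stored `chain` swallows rejection so one failed toggle
    // doesn't wedge every later request.
    const next = chain.then(() => send(enabled));
    chain = next.catch(() => {});
    return next;
  };
}
```

The key property: even when an "enable" round-trip is slower than the "disable" that follows it, the second request does not start until the first settles.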
Engine correctness
- Finalize DB status when `onCueRun` resolves after `engine.stop()` — prior behavior left runs stuck at `running` in the activity log forever.
- Record stale queued events as `timeout` entries with `droppedFromQueue` payload so users can see why their run never fired.
- UTF-8-safe slicing via new `cue-text-utils.ts` (`sliceTailByChars`/`sliceHeadByChars`); swapped in at all `SOURCE_OUTPUT_MAX_CHARS` call sites to avoid splitting surrogate pairs when forwarding agent output.
- `torn` flag on YAML watcher so events slipping past `watcher.close()` can't re-trigger refresh on a torn-down session.
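A surrogate-pair-safe head slice can be sketched like this — a minimal illustration, not the shipped `cue-text-utils.ts` implementation, whose signature may differ:

```typescript
// Cap `text` at `maxChars` UTF-16 code units without cutting a surrogate
// pair in half (a lone surrogate would reach downstream agents as garbage).
export function sliceHeadByChars(text: string, maxChars: number): string {
  if (maxChars <= 0) return "";
  if (text.length <= maxChars) return text;
  let end = maxChars;
  const last = text.charCodeAt(end - 1);
  // High surrogate at the boundary: back off one so the astral character
  // (emoji, CJK extension, etc.) is dropped whole rather than split.
  if (last >= 0xd800 && last <= 0xdbff) end -= 1;
  return text.slice(0, end);
}
```

A plain `text.slice(0, max)` would split `"a😀b"` between the two halves of the emoji at `max = 2`; this variant returns `"a"` instead.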
Per-project pipeline layout
- `PipelineLayoutState` gains a `perProject` map keyed by project root (`selectedPipelineId` + `viewport`). Save under the selected pipeline's project root, preserving sibling projects' state.
- V1 → V2 migration folds legacy top-level fields into `perProject` under a `__default__` key; saves always stamp `version: 2`.
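The V1 → V2 fold might look roughly like the sketch below. Field names beyond `perProject`, `__default__`, and `version` are assumptions for illustration:

```typescript
interface ProjectLayout {
  selectedPipelineId?: string;
  viewport?: unknown;
}

interface LayoutV2 {
  version: 2;
  perProject: Record<string, ProjectLayout>;
}

// Fold legacy top-level fields under "__default__"; already-migrated state
// passes through untouched, and every result is stamped version: 2.
export function migrateLayout(raw: Record<string, unknown>): LayoutV2 {
  if (raw.version === 2 && typeof raw.perProject === "object" && raw.perProject !== null) {
    return raw as unknown as LayoutV2;
  }
  return {
    version: 2,
    perProject: {
      __default__: {
        selectedPipelineId: raw.selectedPipelineId as string | undefined,
        viewport: raw.viewport,
      },
    },
  };
}
```

Because migration is idempotent, it can run unconditionally on every load: a V1 file is folded once, and a V2 file is returned as-is.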
UI consistency
- Reset `selectedNodeId`/`selectedEdgeId` when the owning pipeline disappears (prevents stale composite IDs rendering empty panels).
- Surface `refreshSession` failures in CueYamlEditor — file IS on disk, user needs to know the engine didn't reload.
- … `..` segments before `path.resolve`.
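A renderer-safe descendant check built on pure string handling (no Node `path`, so it runs in both main and renderer) can be sketched as below. This is an illustration under those assumptions, not the shipped helpers:

```typescript
// Split into path segments without Node's `path` module; normalizing
// backslashes first covers Windows-style separators.
function segmentsOf(p: string): string[] {
  return p
    .replace(/\\/g, "/")
    .split("/")
    .filter((s) => s !== "" && s !== ".");
}

// True when `child` equals `parent` or sits underneath it. Any `..` segment
// is rejected outright instead of being resolved, closing traversal tricks.
export function isDescendantOrEqual(parent: string, child: string): boolean {
  const p = segmentsOf(parent);
  const c = segmentsOf(child);
  if (p.includes("..") || c.includes("..")) return false;
  return p.length <= c.length && p.every((seg, i) => seg === c[i]);
}
```

Comparing whole segments (not string prefixes) is what keeps `/a/proj` from accepting `/a/project2`, and rejecting `..` up front avoids needing `path.resolve` at all.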
Test plan
Full lint clean (tsc across 3 configs); full suite 25,483 passing, 0 regressions. 113 new tests across both commits covering:
- … (`forwardOutput` flag)
- `include_output_from`/`forward_output_from` (regression gate)
- … (`..`, non-`.md`, Windows separators, outside-prompts)
Manual verification to run before merge
- `npm run dev` — enable maestroCue, create a pipeline with 3+ agents and a fan-out relay; confirm downstream agent panel shows forwarded sources.
- … (`running`).
- … `agent.completed` chain → confirm downstream receives intact UTF-8.