feat: store AI response output on ai.request and search.delegate spans (#529) #530
Conversation
Adds an `onResult` callback parameter to `withSpan()` that enriches spans with result data before they close. Captures `ai.output` and `ai.output_length` on `ai.request` spans, and `search.delegate.output` and `search.delegate.output_length` on `search.delegate` spans. Adds a `truncateForSpan()` helper that preserves the head and tail of long text (first ~2K chars and last ~2K chars, with an omitted-char count) instead of truncating from the front only, giving better context in traces. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
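The `onResult` hook described above might look roughly like this. This is a minimal sketch based on the PR description only; the span object shape and error handling are assumptions, not the actual `simpleTelemetry.js` implementation.

```javascript
// Hypothetical sketch of withSpan() with an onResult callback.
// The real simpleTelemetry.js implementation may differ.
async function withSpan(name, fn, { attributes = {}, onResult } = {}) {
  const span = { name, attributes: { ...attributes }, startTime: Date.now() };
  try {
    const result = await fn(span);
    if (onResult) {
      try {
        // Let the caller enrich the span with result data before it closes.
        onResult(span, result);
      } catch {
        // A broken callback must never fail the traced operation.
      }
    }
    return result;
  } finally {
    span.endTime = Date.now(); // the span "closes" here
  }
}
```

A call site could then attach the output attribute in the callback, e.g. `withSpan('ai.request', run, { onResult: (span, res) => { span.attributes['ai.output'] = res; } })`.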
Summary

This PR enhances telemetry spans by capturing AI response output on `ai.request` and `search.delegate` spans.

Files Changed Analysis
Total: +155 additions, -8 deletions across 5 files

Architecture & Impact Assessment

What This PR Accomplishes
Key Technical Changes
Affected System Components

```mermaid
flowchart TD
  subgraph Telemetry Layer
    ST[simpleTelemetry.js]
    TFS[truncateForSpan helper]
    WS[withSpan API]
  end
  subgraph AI Layer
    PA[ProbeAgent.js]
    AIR[ai.request spans]
  end
  subgraph Search Layer
    VJ[vercel.js]
    SD[search.delegate spans]
  end
  ST --> TFS
  ST --> WS
  PA --> TFS
  PA --> WS
  PA --> AIR
  VJ --> TFS
  VJ --> WS
  VJ --> SD
  WS -->|onResult callback| AIR
  WS -->|onResult callback| SD
```
Scope Discovery & Context Expansion

Direct Impact
Related Areas to Consider
Test Coverage
References
Metadata
Powered by Visor from Probelabs | Last updated: 2026-03-18T16:10:41.454Z | Triggered by: pr_opened | Commit: d521d3c
Security Issues (1)
Performance Issues (1)
Quality Issues (1)
Summary
- Adds an `onResult` callback parameter to `withSpan()` in `simpleTelemetry.js`: enriches spans with result data before they close
- `ai.request` spans now capture `ai.output` (the AI's text response) and `ai.output_length`
- `search.delegate` spans now capture `search.delegate.output` (the delegate's synthesized answer) and `search.delegate.output_length`
- Adds a `truncateForSpan()` helper that preserves head + tail of long text (first ~2K and last ~2K chars) instead of front-only truncation, giving better context in traces
- Applies the same truncation to `ai.input` on `ai.request` spans

Fixes #529
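The head + tail behavior described above could be sketched as follows. This is a hypothetical reconstruction from the bullet points; the real helper's default limit and omitted-count wording may differ.

```javascript
// Hypothetical sketch of head+tail truncation for span attributes.
// The real truncateForSpan() in simpleTelemetry.js may differ in details.
function truncateForSpan(text, maxLen = 4000) {
  if (!text) return text;                 // falsy input passes through unchanged
  if (text.length <= maxLen) return text; // short text is untouched
  const half = Math.floor(maxLen / 2);
  const omitted = text.length - half * 2;
  // Keep the first and last `half` chars and report how much was dropped.
  return `${text.slice(0, half)}\n... [${omitted} chars omitted] ...\n${text.slice(-half)}`;
}
```

Keeping the tail matters for traces because model responses often put the conclusion or final answer at the end, which front-only truncation would discard.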
Test plan
- `truncateForSpan` unit tests (short text, falsy input, head+tail preservation, omitted count accuracy, custom maxLen)
- `withSpan` onResult tests (callback invocation, error resilience, no-call-on-error, truncation behavior)
- Updated `search-delegate.test.js` to expect the new onResult parameter

🤖 Generated with Claude Code