
fix: Genie plugin — handle in-progress messages on reload and fix message overflow#196

Open

calvarjorge wants to merge 2 commits into main from jorge.calvar/genie_improvements

Conversation

@calvarjorge (Contributor) commented Mar 18, 2026

Summary

  • Bug fix: in-progress messages on page reload — When reloading a page while a Genie message is still loading, the history endpoint returned the in-progress message which the frontend silently dropped (no attachments yet), leaving the UI broken with no way to recover. Added a new single-message polling endpoint (GET /:alias/conversations/:conversationId/messages/:messageId) that SSE-streams status updates until the message completes. The frontend now detects pending messages after history load and polls via this endpoint, reusing the existing processStreamEvent pipeline.

  • Bug fix: message bubbles overflowing the chat viewport — Response bubbles extended past the visible area.

Before/After screenshots attached (images omitted).

When reloading a page while a Genie message is still loading, the
history endpoint returned the in-progress message which the frontend
silently dropped (no attachments yet). This left the UI broken with
no way to recover.

Add a single-message polling endpoint (GET /:alias/conversations/
:conversationId/messages/:messageId) that SSE-streams status updates
until the message completes. The frontend now detects pending messages
after history load and polls via this endpoint, reusing the existing
processStreamEvent pipeline.
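As a rough sketch of the detection step, assuming a hypothetical message shape (the status values are borrowed from the TERMINAL_STATUSES set shown in the review below; this is not the plugin's actual code):

```typescript
// Hypothetical message shape for illustration only.
interface GenieMessage {
  id: string;
  status: "PENDING" | "EXECUTING_QUERY" | "COMPLETED" | "FAILED";
}

// Statuses after which no further polling is needed.
const TERMINAL_STATUSES = new Set(["COMPLETED", "FAILED"]);

// After loading history, pick out messages that are still in flight
// so the frontend can poll the single-message endpoint for each one.
function findPendingMessages(history: GenieMessage[]): GenieMessage[] {
  return history.filter((m) => !TERMINAL_STATUSES.has(m.status));
}
```

Each pending message would then be fed through the same processStreamEvent pipeline as live streaming events.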

Also fix wide query results overflowing beyond the message bubble by
switching to overflow-x-auto and adding min-w-0 constraints.

Signed-off-by: Jorge Calvar <jorge.calvar@databricks.com>

Message bubbles extended past the visible area for two reasons:

1. The content column used items-start/items-end for alignment, which
   caused Cards to size based on their content width instead of
   stretching to the column. Wide content (tables, long text) pushed
   Cards wider than the 80% column, with overflow clipped visually
   but text wrapping at the wider intrinsic width.

2. Radix ScrollArea inserts a wrapper div with display:table that
   grows to fit content. This made the entire scroll container wider
   than the viewport, so percentage-based widths resolved against
   the wider container.

Fix:
- Remove items-start/items-end from the content column
- Add w-full to all Cards so they always match the column width
- Override Radix's table wrapper to display:block via targeted
  selector on the scroll area viewport
- Add break-words to markdown content and make markdown tables
  scrollable within the bubble

Signed-off-by: Jorge Calvar <jorge.calvar@databricks.com>
@pkosiec self-assigned this Mar 31, 2026
@pkosiec (Member) left a comment:

LGTM, works well

(screenshot attached)

COMPLETED: "Done",
};

const TERMINAL_STATUSES = new Set(["COMPLETED", "FAILED"]);
Review comment (Member):

we have duplication in two files for those statuses

Review comment (Member):

Here are some P2 items from the agentic review:

1. hasAttachments guard causes FAILED messages to leave a phantom placeholder
   use-genie-chat.ts:236 — In processStreamEvent, the condition last.id === "" && hasAttachments means that if the API returns a message_result for a FAILED message (no attachments), the empty placeholder (id === "") is never replaced. The user is stuck with a phantom empty bubble. Fix: remove the hasAttachments guard — when last.id === "", always replace the placeholder:

   if (last.id === "") {
     return [...prev.slice(0, -1), item];
   }

2. No abort signal propagation in streamGetMessage polling loop
   client.ts:389-447 — The while(true) polling loop doesn't accept or check an AbortSignal. When the SSE client disconnects, the server keeps polling the Databricks API for up to 120s (~40 API calls). The executeStream framework passes a signal via handler(combinedSignal), but the genie.ts handler doesn't forward it to streamGetMessage. Fix: accept signal?: AbortSignal in options, check signal?.aborted before each poll, and use a signal-aware sleep.

3. Double timeout: executeStream interceptor vs. polling loop deadline
   genie.ts:213-214 + client.ts:396-398 — Both the interceptor and the generator have independent 120s timeouts racing each other. If the interceptor fires first, the generator gets an unclean termination. Fix: either remove the internal deadline from streamGetMessage (relying on the signal from the interceptor), or set the interceptor timeout slightly longer.

4. pollPendingMessage Promise has no .catch()
   use-genie-chat.ts:402-425 — The connectSSE call is fire-and-forget with only .then(). If connectSSE rejects for an uncaught reason, the status stays stuck at "streaming" forever with an unhandled rejection. Fix: add .catch() to set the error state.

5. useCallback dependency chain is fragile
   use-genie-chat.ts — The chain processStreamEvent → pollPendingMessage → loadHistory → useEffect is stable today only because processStreamEvent closes over stable props. Adding any state dependency (like conversationId) would cause the entire chain to collapse, triggering effect re-fires and history reloads. Fix: store processStreamEvent in a ref to break the identity cascade, making pollPendingMessage and loadHistory stable.

Please double-check whether these make sense and, if so, create a plan to fix them 👍
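For item 2, the signal-aware sleep could look like this minimal sketch (a hypothetical helper, not the actual fix in this PR):

```typescript
// Hypothetical signal-aware sleep: resolves after `ms`, or rejects as
// soon as the AbortSignal fires, so the polling loop in streamGetMessage
// could stop immediately when the SSE client disconnects.
function sleep(ms: number, signal?: AbortSignal): Promise<void> {
  return new Promise((resolve, reject) => {
    // Already aborted: fail fast without scheduling a timer.
    if (signal?.aborted) {
      reject(new Error("aborted"));
      return;
    }
    const onAbort = () => {
      clearTimeout(timer);
      reject(new Error("aborted"));
    };
    const timer = setTimeout(() => {
      signal?.removeEventListener("abort", onAbort);
      resolve();
    }, ms);
    signal?.addEventListener("abort", onAbort, { once: true });
  });
}
```

The polling loop would then `await sleep(intervalMs, signal)` between iterations and treat the rejection as a clean shutdown, instead of sleeping unconditionally until the 120s deadline.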

