Migrate chat context and RAG logic to Drizzle ORM#603
AndlerRL merged 9 commits into vibe/threadrefactor from
Conversation
Reviewer's Guide

Refactors Masterbot chat/workspace RAG plumbing away from Hasura GraphQL into Drizzle/pg-based helpers, centralizes MB chat context and operation metadata, and tightens message typing across chat, workspace, and popup flows while updating AI/RAG dependencies and migrations.

Sequence diagram for RAG-enhanced chat append flow

```mermaid
sequenceDiagram
actor User
participant ChatUI as ChatPanelPro
participant MBChat as MBChatProvider
participant ChatCtx as chat_context_utils
participant RagOps as rag_operations
participant ScopeWin as get_scope_conversation_window
participant Retrieve as retrieve_assistant_answer_memories
participant DB as mb_drizzle_db
User->>ChatUI: Submit prompt
ChatUI->>ChatUI: toActionType(activeAction)
ChatUI->>ChatUI: toScope(isWorkspaceActive, activeSectionId, activeDocumentName, docRootId)
ChatUI->>MBChat: appendWithMbContextPrompts(userMessage, chatRequestOptions, workspacePrompts, scope)
MBChat->>ChatCtx: prepareMbChatContext(userMessage, chatRequestOptions, workspacePrompts, scope, session, chatbot, activeThread, allMessages, randomThreadId, threadId, isWorkspaceActive)
ChatCtx->>ChatCtx: normalizeScopeId(scopeType, scopeId, docNodeId)
ChatCtx->>ChatCtx: buildHeadingContextBlocks(fullMarkdown, sections, activeSectionId)
ChatCtx->>ChatCtx: blocksToSystemPrompts(blocks)
ChatCtx-->>MBChat: enrichedChatOptions, optimisticThread, nextOpMeta, effectiveWorkspacePrompts
MBChat->>MBChat: set opMetaRef.nextOpMeta
MBChat->>MBChat: updateActiveThread(optimisticThread)
MBChat->>ChatCtx: getRagContext(threadId, userPrompt, mbOpMeta, isRagEnabled, shouldUseQChain, allMessages, previousAiUserMessages, clickedContentId)
ChatCtx->>ScopeWin: selectScopeConversationWindow(threadId, scopeType, scopeId, docNodeId)
ScopeWin->>DB: query message table with scope filter
DB-->>ScopeWin: userQuestions, lastAssistant, lastUser
ChatCtx->>Retrieve: retrieveAssistantAnswerMemories(threadId, query, scopeType, scopeId, docNodeId, lastAssistantMessageId, k, minCosine, tokenBudget)
Retrieve->>DB: select assistantAnswers with vector similarity
DB-->>Retrieve: matching memories
Retrieve-->>ChatCtx: memories
ChatCtx-->>MBChat: finalUserPrompt, scopedHistory
MBChat->>MBChat: append(userMessage with finalUserPrompt, enrichedChatOptions)
MBChat-->>User: Streamed assistant response
MBChat->>RagOps: maybeIngestAssistantAnswer(threadId, messageId, content, scopeType, docNodeId, scopeId, ragEligible)
RagOps->>DB: insert assistantAnswers (on conflict do nothing)
DB-->>RagOps: ack
```
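The append flow above can be sketched as a thin orchestration layer. The sketch below is illustrative only: the function names mirror the diagram, but every signature and return shape is an assumption, with the real helpers replaced by stubs.

```typescript
// Hypothetical, simplified orchestration of the appendWithMbContextPrompts flow.
// All helpers are stubs standing in for the real chat_context_utils functions.
interface Scope {
  scopeType: string
  scopeId: string | null
  docNodeId: string | null
}

const calls: string[] = []

function prepareMbChatContext(prompt: string, scope: Scope) {
  calls.push('prepareMbChatContext')
  return { isValid: true, nextOpMeta: { scopeType: scope.scopeType } }
}

function getRagContext(threadId: string, prompt: string) {
  calls.push('getRagContext')
  // The real code queries the scoped conversation window plus vector memories here.
  return { finalUserPrompt: `${prompt} [with RAG context]`, scopedHistory: [] }
}

function append(finalPrompt: string): string {
  calls.push('append')
  return `assistant reply to: ${finalPrompt}`
}

function maybeIngestAssistantAnswer(_answer: string): void {
  calls.push('maybeIngestAssistantAnswer')
}

function appendWithMbContextPrompts(
  threadId: string,
  prompt: string,
  scope: Scope,
): string {
  const ctx = prepareMbChatContext(prompt, scope)
  if (!ctx.isValid) throw new Error('invalid scope')
  const rag = getRagContext(threadId, prompt)
  const reply = append(rag.finalUserPrompt)
  maybeIngestAssistantAnswer(reply) // fire-and-forget in the real flow
  return reply
}
```

The point of the shape is the ordering: context preparation and RAG retrieval happen before `append`, and ingestion of the assistant answer happens after the response exists.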
Class diagram for OperationMeta, MbScopeInput and chat/RAG context helpers

```mermaid
classDiagram
class OperationMeta {
+UiMode uiMode
+ScopeType scopeType
+string docNodeId
+WorkspaceActionType actionType
+boolean mutationAllowed
+boolean mutationApplied
+IntentSource intentSource
+boolean excludeFromQChain
+boolean ragEligible
+boolean docContextIncluded
+string headingContentVersion
+string scopeId
+DocContextPolicy docContextPolicy
}
class OperationMetaPatch {
<<type alias>>
+Partial~OperationMeta~
}
class MbScopeInput {
+ScopeType scopeType
+string docNodeId
+string scopeId
+MarkdownSection[] sections
+WorkspaceActionType actionType
+string fullMarkdown
+boolean excludeFromQChain
+boolean ragEligible
+boolean mutationAllowed
+IntentSource intentSource
}
class PrepareChatContextParams {
+AiMessage userMessage
+ChatRequestOptions chatRequestOptions
+AiMessage[] workspacePrompts
+MbScopeInput scope
+Session session
+Chatbot chatbot
+Thread activeThread
+AiMessage[] allMessages
+string randomThreadId
+string threadId
+boolean isWorkspaceActive
}
class PrepareChatContextResult {
+boolean isValid
+OperationMetaPatch error
+ChatRequestOptions enrichedChatOptions
+Thread optimisticThread
+OperationMeta nextOpMeta
+AiMessage[] effectiveWorkspacePrompts
+boolean docContextIncluded
+string headingContentVersion
+string scopeId
}
class GetRagContextParams {
+string threadId
+string userPrompt
+OperationMetaPatch mbOpMeta
+boolean isRagEnabled
+boolean shouldUseQChain
+AiMessage[] allMessages
+AiMessage[] previousAiUserMessages
+string clickedContentId
}
class GetRagContextResult {
+string finalUserPrompt
+AiMessage[] scopedHistory
}
class chat_context_utils {
+PrepareChatContextResult prepareMbChatContext(userMessage, chatRequestOptions, workspacePrompts, scope, session, chatbot, activeThread, allMessages, randomThreadId, threadId, isWorkspaceActive)
+GetRagContextResult getRagContext(threadId, userPrompt, mbOpMeta, isRagEnabled, shouldUseQChain, allMessages, previousAiUserMessages, clickedContentId)
}
class rag_operations {
+void maybeIngestAssistantAnswer(threadId, messageId, content, scopeType, docNodeId, scopeId, ragEligible)
+string normalizeScopeId(scopeType, scopeId, docNodeId)
+HeadingContextResult buildHeadingContextBlocks(fullMarkdown, sections, activeSectionId)
+Message[] blocksToSystemPrompts(blocks)
}
class get_scope_conversation_window {
+ScopeConversationWindow selectScopeConversationWindow(threadId, scopeType, scopeId, docNodeId)
}
class retrieve_assistant_answer_memories {
+AssistantAnswerMemory[] retrieveAssistantAnswerMemories(args)
+AssistantAnswerMemory[] enforceTokenBudget(items, tokenBudget)
+AssistantAnswerMemory[] runSearch(scopeType, scopeId, docNodeId, threadId, embedding, minCosine, excludeMessageId, k)
}
class ingest_assistant_answer {
+void ingestAssistantAnswer(args)
}
class WorkspaceActionType {
<<enum>>
ask
rewrite
expand
edit
unknown
}
class ScopeType {
<<enum>>
chat_thread
doc_root
heading_thread
}
class IntentSource {
<<enum>>
button
freeform
ai_action
unknown
}
class DocContextPolicy {
<<enum>>
always_include_latest_heading
}
OperationMetaPatch --> OperationMeta
MbScopeInput --> ScopeType
MbScopeInput --> WorkspaceActionType
MbScopeInput --> IntentSource
OperationMeta --> ScopeType
OperationMeta --> WorkspaceActionType
OperationMeta --> IntentSource
OperationMeta --> DocContextPolicy
chat_context_utils --> OperationMeta
chat_context_utils --> OperationMetaPatch
chat_context_utils --> MbScopeInput
rag_operations --> ScopeType
rag_operations --> MbScopeInput
get_scope_conversation_window --> ScopeType
retrieve_assistant_answer_memories --> ScopeType
ingest_assistant_answer --> ScopeType
```
File-Level Changes
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.
📝 Walkthrough

This PR consolidates chat context and RAG logic by migrating from Hasura-based queries to server-side Drizzle ORM functions, refactoring hook-based state management, introducing new type-safe utilities, and standardizing operation metadata to camelCase while improving type safety across chat, workspace, and document management systems.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
@coderabbitai review

✅ Actions performed: review triggered.
Hey - I've found 1 issue, and left some high level feedback:
- In `saveNewMessage` the retry branch now re-sends the same `insertObject` (including any unsupported fields) instead of stripping the optional op-meta fields as before, which means a schema lag on Hasura will no longer gracefully degrade and will just keep failing; consider restoring the field-stripping behavior on retry so writes remain robust during migrations.
- `prepareMbChatContext` currently treats any non-`chat_thread` scope without a `docNodeId` as invalid and surfaces an error; this is stricter than the previous behavior and may break callers that don't yet pass `docNodeId` (e.g. older workspace or thread flows). You may want to either default/fallback the `docNodeId` or only enforce it for the specific scopes where RAG actually relies on it.
- There are now many ad-hoc casts and duplicated `AiMessage`/`Message` handling (e.g. in `use-mb-chat`, `ChatMessageActions`, `ThreadPopup`, prompt helpers); introducing small, well-typed conversion helpers between DB messages and UI messages would simplify these code paths and reduce the risk of subtle role/id mismatches.
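A minimal version of the conversion helpers suggested in the last point could look like this. The field names (`messageId` on DB rows, `id` on UI messages) follow the review comments; the exact shapes of the project's `Message` and `AiMessage` types are assumptions.

```typescript
// Hypothetical shapes; the real types live in the app's type definitions.
interface DbMessage {
  messageId: string
  content: string
  role: 'user' | 'assistant' | 'system'
  createdAt: string
}

interface UiMessage {
  id: string
  content: string
  role: 'user' | 'assistant' | 'system'
}

// Type guard: DB rows carry `messageId`, UI messages carry `id`.
function isDbMessage(msg: DbMessage | UiMessage): msg is DbMessage {
  return 'messageId' in msg
}

// One canonical mapping instead of scattered `as` casts at call sites.
function toUiMessage(msg: DbMessage | UiMessage): UiMessage {
  if (isDbMessage(msg)) {
    return { id: msg.messageId, content: msg.content, role: msg.role }
  }
  return msg
}

function getMessageId(msg: DbMessage | UiMessage): string {
  return isDbMessage(msg) ? msg.messageId : msg.id
}
```

Centralizing the guard means a future rename of either id field touches one function rather than every component.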
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `saveNewMessage` the retry branch now re-sends the same `insertObject` (including any unsupported fields) instead of stripping the optional op-meta fields as before, which means a schema lag on Hasura will no longer gracefully degrade and will just keep failing—consider restoring the field-stripping behavior on retry so writes remain robust during migrations.
- `prepareMbChatContext` currently treats any non-`chat_thread` scope without a `docNodeId` as invalid and surfaces an error; this is stricter than the previous behavior and may break callers that don’t yet pass `docNodeId` (e.g. older workspace or thread flows) — you may want to either default/fallback the `docNodeId` or only enforce it for the specific scopes where RAG actually relies on it.
- There are now many ad-hoc casts and duplicated `AiMessage`/`Message` handling (e.g. in `use-mb-chat`, `ChatMessageActions`, `ThreadPopup`, prompt helpers); introducing small, well-typed conversion helpers between DB messages and UI messages would simplify these code paths and reduce the risk of subtle role/id mismatches.
## Individual Comments
### Comment 1
<location> `apps/pro-web/lib/chat/chat-context-utils.ts:64-61` </location>
<code_context>
+ }
+ }
+
+ if (!scope?.docNodeId) {
+ return {
+ isValid: scope?.scopeType === 'chat_thread',
+ error: {
+ type: 'error',
+ text: 'No document node ID provided for scope',
+ },
+ enrichedChatOptions: chatRequestOptions || {},
+ optimisticThread: {} as Thread,
+ nextOpMeta: DEFAULT_OP_META,
+ effectiveWorkspacePrompts: workspacePrompts || [],
+ docContextIncluded: false,
+ headingContentVersion: 0,
+ scopeId: null,
+ }
+ }
+
</code_context>
<issue_to_address>
**issue:** `prepareMbChatContext` rejects calls without `scope.docNodeId`, which can break pure chat flows that previously didn't use scoped doc context.
This early return makes `isValid` false and surfaces an error whenever `scope` is missing or has no `docNodeId`, except when `scope.scopeType === 'chat_thread'`. Previously, callers could omit `scope` for standard chat, so any remaining `appendWithMbContextPrompts` usages without `scope` will now error instead of running as unscoped chat.
You could either:
- Treat an omitted `scope` as a valid pure chat (skip the `docNodeId` check when `scope` is undefined), or
- Default `scopeType` to `'chat_thread'` when `scope` is missing and allow missing `docNodeId` for that scope.
</issue_to_address>
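The second option can be sketched as a small normalizer run before validation. The `ScopeType` values come from the PR's class diagram; everything else is illustrative, not the project's actual code.

```typescript
type ScopeType = 'chat_thread' | 'doc_root' | 'heading_thread'

interface MbScopeInputSketch {
  scopeType: ScopeType
  docNodeId: string | null
}

// Sketch: treat a missing scope as plain chat, and only require docNodeId
// for the document-bound scopes where RAG actually depends on it.
function normalizeScope(scope?: Partial<MbScopeInputSketch>): {
  scope: MbScopeInputSketch
  isValid: boolean
} {
  const scopeType: ScopeType = scope?.scopeType ?? 'chat_thread'
  const docNodeId = scope?.docNodeId ?? null
  const needsDocNode = scopeType !== 'chat_thread'
  return {
    scope: { scopeType, docNodeId },
    isValid: !needsDocNode || Boolean(docNodeId),
  }
}
```

With this shape, legacy callers that omit `scope` entirely keep working as unscoped chat, while document-bound scopes still fail fast without a `docNodeId`.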
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
apps/pro-web/services/hasura/hasura.service.ts (1)
738-768: ⚠️ Potential issue | 🟠 Major

Fix non-idempotent retry to prevent false errors when the first insert succeeds.

Lines 738–768 retry the same insert unconditionally with an identical `insertObject` (same `messageId`). If the first insert succeeds but the response is lost (timeout/network failure), the retry will fail with a unique constraint violation, causing the caller to receive a "Failed to save new message" error even though the message was persisted. The schema enforces `unique('message_id_key').on(table.messageId)`, so this pattern masks successful inserts as failures.

Recommendation: either (1) remove the retry and rely on idempotency elsewhere, (2) make the insert idempotent with upsert logic (`onConflict` + `doUpdateSet`), or (3) make the retry conditional on transport errors (not DB constraint violations).

apps/pro-web/lib/constants/prompts.ts (1)
145-170: ⚠️ Potential issue | 🟡 Minor

Handle `Message.parts` when `examples` is absent. When `messagesWithImagePart` contains `Message` (non-DB), image parts are currently ignored, so the prompt can miss images.

🛠️ Suggested fix

```diff
-const parts = (msgImg as MessageDB).examples as
-  | Message['parts']
-  | undefined
-return parts || []
+const parts =
+  ((msgImg as MessageDB).examples as Message['parts'] | undefined) ??
+  (msgImg as Message).parts
+return parts ?? []
```

apps/pro-web/lib/hooks/use-workspace-document-content-hook.ts (1)
329-349: ⚠️ Potential issue | 🟡 Minor

Avoid passing stale selection ranges into expand/rewrite actions. `selectionRange` now feeds `handleWorkspaceEdit`. If a user previously selected text, `inferOperationType` can flip expand/rewrite into edit. Also, `selectionRange` isn't in the callback deps, so a stale range may be used. Consider clearing the selection for expand/rewrite and adding `selectionRange`/`handleWorkspaceEdit` to the deps.

✅ Possible adjustment

```diff
-  await handleWorkspaceEdit(prompt, metaPrompt, selectionRange, actionType)
+  await handleWorkspaceEdit(prompt, metaPrompt, null, actionType)
```

```diff
-  }, [sections, activeSection, projectName, documentName, documentType])
+  }, [sections, activeSection, projectName, documentName, documentType, handleWorkspaceEdit])
```
apps/pro-web/lib/hooks/use-workspace-chat.tsx (1)
862-979: ⚠️ Potential issue | 🟡 Minor

Respect explicit `actionType` when deriving operation meta. `inferOperationType` can override the user-selected action when a selection range exists, but `ragEligible`/`excludeFromQChain` still use the passed `actionType`. This can produce inconsistent op-meta. Prefer the explicit `actionType` when provided and use the same value for flags.

🔧 Align op meta

```diff
-  const operationType = inferOperationType(metaPrompt, selectionRange || null)
+  const operationType =
+    actionType ?? inferOperationType(metaPrompt, selectionRange || null)
 ...
-  actionType: operationType,
-  ragEligible: actionType === 'ask',
-  excludeFromQChain: actionType !== 'ask',
+  actionType: operationType,
+  ragEligible: operationType === 'ask',
+  excludeFromQChain: operationType !== 'ask',
```
🤖 Fix all issues with AI agents
In `@apps/pro-web/app/actions/chat-memory/get-scope-conversation-window.ts`:
- Around line 112-185: selectScopeConversationWindow currently treats a
whitespace-only threadId as truthy and passes null scopeId/docNodeId into DB
queries, and returns undefined for missing lastAssistant/lastUser; to fix, trim
threadId at the top and short-circuit (return empty userQuestions and null
lastAssistant/lastUser) if trimmed threadId is empty, update the function
signature/usage to accept nullable scopeId/docNodeId and pass them through into
buildScopeFilter unchanged, and after the Promise.all convert userQuestionsRows
to Message[] and explicitly set lastAssistant and lastUser to null when
lastAssistantRows?.[0] or lastUserRows?.[0] are missing (instead of leaving
undefined) so types and legacy behavior match.
In `@apps/pro-web/app/actions/chat-memory/ingest-assistant-answer.ts`:
- Around line 64-88: Wrap the read-then-insert for turnIndex allocation inside a
Drizzle transaction and take a per-thread advisory lock to serialize
allocations: move the query of assistantAnswers.findFirst and the subsequent
db.insert(...) into a single await db.transaction(async (tx) => { ... }) block,
run a tx-level advisory lock keyed by threadId (e.g., pg_advisory_xact_lock
using tx.execute or tx.raw SQL) before computing nextTurnIndex, then perform the
insert on tx (keeping the .onConflictDoNothing on assistantAnswers.messageId) so
concurrent requests for the same thread cannot compute the same nextTurnIndex;
update references to use tx.query/tx.insert instead of db.query/db.insert and
default scopeType logic remains unchanged.
- Around line 34-85: The code casts scopeType to ScopeType without validating
its value before calling db.insert(assistantAnswers), which can cause DB insert
failures; add a type guard that checks scopeType against the allowed ScopeType
values (or enum/union) and normalize it: if scopeType is null/undefined use
'chat_thread', if it is one of the permitted ScopeType strings use it, otherwise
bail/skip or map to a safe default before the insert; implement this check near
where scopeType is read (the variables destructure and before computing
nextTurnIndex/values) and replace the direct cast (scopeType as ScopeType) with
the validated/normalized variable.
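The validation asked for above boils down to a type guard plus a normalizer. A sketch using the `ScopeType` values from the PR's class diagram; the default choice (`'chat_thread'`) follows the instruction:

```typescript
type ScopeType = 'chat_thread' | 'doc_root' | 'heading_thread'

const SCOPE_TYPES: readonly ScopeType[] = [
  'chat_thread',
  'doc_root',
  'heading_thread',
]

// Type guard narrowing an arbitrary value to ScopeType.
function isScopeType(value: unknown): value is ScopeType {
  return (
    typeof value === 'string' &&
    (SCOPE_TYPES as readonly string[]).includes(value)
  )
}

// Normalize before the insert: missing -> default, unknown -> safe default,
// so the DB never sees a value outside the allowed set.
function normalizeScopeType(value: unknown): ScopeType {
  if (value == null) return 'chat_thread'
  return isScopeType(value) ? value : 'chat_thread'
}
```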
In `@apps/pro-web/app/actions/chat-memory/retrieve-assistant-answer-memories.ts`:
- Around line 39-83: The code currently allows searching with scopeType
'heading_thread'/'doc_root' even when docNodeId is missing, which can cause
cross‑scope matches; update the logic around runSearch and the doc_root fallback
to first guard that docNodeId is present (non-empty) before performing the
primary scoped search or the doc_root fallback: check scopeType (e.g.,
'heading_thread' / 'doc_root') and require docNodeId before calling
runSearch/creating runSearchArgs and before executing the enableDocRootFallback
fallback path that builds fallbackMemories; ensure you only spread runSearchArgs
and set scopeType: 'doc_root' when docNodeId exists, and keep subsequent
deduplication (uniqueMemories) and enforceTokenBudget unchanged.
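The guard described above reduces to one predicate deciding whether a scoped search may run at all; a sketch (type names assumed):

```typescript
type ScopeType = 'chat_thread' | 'doc_root' | 'heading_thread'

// Document-bound scopes are meaningless without a docNodeId; searching
// without one risks matching memories from unrelated documents.
function canRunScopedSearch(
  scopeType: ScopeType,
  docNodeId: string | null | undefined,
): boolean {
  if (scopeType === 'chat_thread') return true
  return typeof docNodeId === 'string' && docNodeId.trim().length > 0
}
```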
In `@apps/pro-web/lib/chat/chat-context-utils.ts`:
- Around line 102-205: The code is converting the heading content hash to a
Number which yields NaN and drops the version; remove the Number(...) cast and
keep headingContentVersion as the original string (assigned from
built.headingContentVersion), then update places that assumed a numeric default
(mbOpMeta.headingContentVersion and nextOpMeta.headingContentVersion) to accept
the string (use the string value or an empty string/default string instead of 0)
so the computeHeadingContentVersion hash (e.g., fnv1a32:...) propagates
correctly; relevant symbols: headingContentVersion, built.headingContentVersion,
computeHeadingContentVersion, mbOpMeta, nextOpMeta.
- Around line 64-79: The function currently accepts an optional parameter named
scope but then branches on scope?.docNodeId; make the parameter non-optional to
match callers by updating the function signature to require scope (and its
required fields like docNodeId/scopeType) and adjust any type/interface (e.g.,
Scope type) used by this function, or if you intentionally allow missing scope,
add a JSDoc above this function documenting that scope may be omitted for
chat_thread flows and explaining the behavior when docNodeId is absent;
reference the parameter name scope, the docNodeId property, and the chat_thread
branch in your changes so callers and type definitions are updated accordingly.
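The first chat-context-utils item above mentions version hashes shaped like `fnv1a32:...`. A self-contained sketch of producing and keeping such a version as a string (the exact format the project uses is an assumption):

```typescript
// 32-bit FNV-1a over the UTF-16 code units of the input string.
function fnv1a32(input: string): number {
  let hash = 0x811c9dc5 // FNV offset basis
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i)
    hash = Math.imul(hash, 0x01000193) >>> 0 // FNV prime, keep unsigned
  }
  return hash >>> 0
}

// Keep the version as a string; a Number(...) cast of "fnv1a32:..." is NaN
// and silently drops the version, which is exactly the bug described above.
function computeHeadingContentVersion(content: string): string {
  return `fnv1a32:${fnv1a32(content).toString(16).padStart(8, '0')}`
}
```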
In `@apps/pro-web/lib/constants/prompts.ts`:
- Around line 103-108: The console.log that prints "—————— lastQuestionString
roles ——————" (which maps over questions) is unguarded and will run in
production; wrap that logging in a development-only check (e.g., if
(process.env.NODE_ENV !== 'production') { ... } or use an existing
isDev/isDevelopment flag) so the console.log call and the questions.map(...)
only execute when running in dev mode to avoid noisy production output.
In `@apps/pro-web/lib/constants/rag/operations.ts`:
- Around line 4-18: DEFAULT_OP_META currently contains a module-evaluated
docNodeId (generated by nanoid at import time), causing a shared static value;
replace this constant with a factory that returns a fresh OperationMeta object
each time (e.g., createDefaultOpMeta or makeDefaultOperationMeta) that sets
docNodeId to a new `${nanoid(4)}:${nanoid(5)}` on each call, keep other fields
identical, and update call sites to use the factory instead of importing
DEFAULT_OP_META directly so callers get unique docNodeId values.
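A factory shaped like the one requested might look as follows. `nanoid` is replaced by a tiny stand-in generator so the sketch stays self-contained; the real code would import `nanoid`, and the field set here is abbreviated.

```typescript
// Stand-in for nanoid(n): NOT the real library, just enough for the sketch.
function pseudoId(length: number): string {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789'
  let out = ''
  for (let i = 0; i < length; i++) {
    out += alphabet[Math.floor(Math.random() * alphabet.length)]
  }
  return out
}

interface OperationMetaSketch {
  uiMode: string
  scopeType: string
  docNodeId: string
  ragEligible: boolean
}

// Fresh object (and fresh docNodeId) per call, instead of one value
// frozen at module evaluation time.
function createDefaultOpMeta(): OperationMetaSketch {
  return {
    uiMode: 'chat',
    scopeType: 'chat_thread',
    docNodeId: `${pseudoId(4)}:${pseudoId(5)}`,
    ragEligible: true,
  }
}
```

The factory also prevents accidental shared-mutation bugs: two callers mutating a module-level `DEFAULT_OP_META` would otherwise see each other's changes.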
In `@apps/pro-web/lib/hooks/use-mb-chat.tsx`:
- Around line 604-612: When building the merged thread in handleFinish (the
newThread object created from activeThread), filter out system prompts from both
messages and newThreadMessages before concatenation so thread.messages contains
only user/assistant content; update the creation of newThread.messages (in
handleFinish) to use filtered arrays (e.g., filter by role !== 'system' or the
equivalent message type check) and then concatenate those filtered arrays into
Thread['messages'].
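The filtering step above can be sketched as a small helper; only the `role` field matters here, and the message shape is assumed.

```typescript
interface ChatMessage {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
}

// Drop system prompts before merging, so the persisted thread
// carries only user/assistant content.
function withoutSystemPrompts(messages: ChatMessage[]): ChatMessage[] {
  return messages.filter((m) => m.role !== 'system')
}

function mergeThreadMessages(
  existing: ChatMessage[],
  incoming: ChatMessage[],
): ChatMessage[] {
  return [...withoutSystemPrompts(existing), ...withoutSystemPrompts(incoming)]
}
```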
In `@apps/pro-web/lib/queries/use-workspace-structure.ts`:
- Around line 42-49: The function makeHeadingDocNodeId currently returns an
empty string when inputs are missing which can bypass nullish coalescing; change
its signature to return string | null and return null instead of '' when root or
sec are falsy, keeping the `${root}::${sec}` return for valid inputs, and update
any callers that rely on an empty-string fallback to handle null (or use `||`
instead of `??`) so fallbacks continue to work.
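The `??` vs `||` distinction is the whole point of this change: `'' ?? fallback` keeps the empty string, while `null ?? fallback` falls through. A sketch of the nullable version:

```typescript
// Returning null (not '') lets callers use `??` safely.
function makeHeadingDocNodeId(
  root: string | null | undefined,
  sec: string | null | undefined,
): string | null {
  if (!root || !sec) return null
  return `${root}::${sec}`
}
```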
In `@apps/pro-web/lib/select-scope-conversation-window.test.ts`:
- Line 22: The test still mocks mb-genql and inspects GraphQL where clauses but
the implementation now uses the Drizzle-backed action
selectScopeConversationWindow; update the test to mock the mb-drizzle export
(the db chain) used by that action instead of mb-genql: create a mock db object
with the chainable methods your action calls (e.g., db.select / db.from /
db.where / db.orderBy / db.limit or the exact method names used in
selectScopeConversationWindow) and assert those methods are called with the
expected arguments, or convert the test to an integration test; ensure the mock
is injected via jest.mock('mb-drizzle', ...) and that your assertions reference
the selectScopeConversationWindow behavior rather than GraphQL where clauses.
In `@apps/pro-web/services/hasura/hasura.service.ts`:
- Around line 726-736: In saveNewMessage, canonicalDocNodeId is derived only
from docNodeId so callers passing snake_case doc_node_id get dropped; update the
logic that computes canonicalDocNodeId to consider both docNodeId and
doc_node_id (e.g., use docNodeId ?? doc_node_id ?? null) and then build
insertObject the same way (include docNodeId only when canonicalDocNodeId is
non-null) so the snake_case value is preserved during the migration; update
references to canonicalDocNodeId, the params destructuring, and the insertObject
creation accordingly.
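The coalescing described above is essentially a one-liner; the parameter names follow the instruction, and the surrounding type is an assumption:

```typescript
interface SaveNewMessageParamsSketch {
  docNodeId?: string | null
  doc_node_id?: string | null
}

// Accept both casings during the migration window; camelCase wins
// when both are present.
function canonicalDocNodeId(params: SaveNewMessageParamsSketch): string | null {
  return params.docNodeId ?? params.doc_node_id ?? null
}
```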
🧹 Nitpick comments (12)
apps/hasura/migrations/masterbots/1767498310896_add_public_document_operations/up.sql (1)
26-27: Consider enforcing `operation_type` values with a CHECK (or enum). The allowed values are only documented in a comment, so invalid values can slip into the table. Since this is the table creation migration, it's a good spot to enforce the allowed set at the DB layer.
♻️ Proposed change
```diff
-    operation_type text NOT NULL, -- 'ask' | 'rewrite' | 'expand' | 'edit' | 'unknown'
+    operation_type text NOT NULL
+      CHECK (operation_type IN ('ask', 'rewrite', 'expand', 'edit', 'unknown')),
```

apps/pro-web/components/routes/chat/chat-list/message-pair-accordion.tsx (1)
500-509: Consider stabilizing map keys with `getMessageStableId`. If `message.id`/`message.slug` can be missing during optimistic or streaming states, falling back to the index can cause remounts when IDs appear. You already have a helper that checks `messageId` too, so reusing it keeps keys stable.

♻️ Suggested tweak for stable keys

```diff
-  {pair.chatGptMessage.map((message, i) => (
-    <Fragment key={message.id || message.slug || `msg-${i}`}>
+  {pair.chatGptMessage.map((message, i) => {
+    const stableId = getMessageStableId(message, `msg-${i}`)
+    return (
+      <Fragment key={stableId}>
 ...
-    </Fragment>
-  ))}
+      </Fragment>
+    )
+  })}
```

apps/pro-web/app/actions/chat-memory/ingest-assistant-answer.ts (1)
8-23: Prefer an interface for the args object. This is an object shape; using an interface aligns with the repo's TypeScript guidelines.

♻️ Suggested refactor

```diff
-export type IngestAssistantAnswerArgs = {
+export interface IngestAssistantAnswerArgs {
   threadId: string
   messageId: string
   content: string
 @@
   docNodeId?: string | null
-}
+}
```

As per coding guidelines: "Use interface for objects/classes; type for unions/tuples/aliases."
apps/pro-web/services/hasura/hasura.service.ts (1)
726-726: Add an explicit return type for `saveNewMessage`. Line 726 is missing a return type. Please annotate it (likely `Promise<Message>`) to match codebase conventions.

✍️ Suggested change

```diff
-export async function saveNewMessage(params: Partial<SaveNewMessageParams>) {
+export async function saveNewMessage(
+  params: Partial<SaveNewMessageParams>,
+): Promise<Message> {
```

As per coding guidelines: Add explicit type annotations for function parameters and return values.
apps/pro-web/components/routes/chat/chat-message-actions.tsx (1)
30-32: Consolidate message id/content access with a type-guard helper. The repeated casts to `AiMessage`/`Message` are easy to drift. A small helper + `in` type guard keeps the logic consistent and safer to evolve.

♻️ Suggested helper extraction

```diff
+function getMessageId(msg: AiMessage | Message): string {
+  return 'messageId' in msg ? msg.messageId : msg.id
+}
+
+function getMessageContent(msg: AiMessage | Message): string {
+  return msg.content ?? ''
+}
 ...
-const messageId =
-  (message as AiMessage)?.id || (message as Message)?.messageId
+const messageId = getMessageId(message)
 ...
-copyToClipboard(message.content)
+copyToClipboard(getMessageContent(message))
```

Apply the helper pattern across the remaining `messageId`/`message.content` call sites in this component. As per coding guidelines: Use type guards to narrow types within conditionals.
Also applies to: 72-83, 109-123, 225-245, 274-316, 343-345, 364-365
apps/pro-web/app/actions/chat-memory/retrieve-assistant-answer-memories.ts (1)
194-205: Use an interface (and nullable IDs) for the args shape. This is an object shape, so `interface` is preferred, and allowing `scopeId`/`docNodeId` to be nullable matches the null-aware logic above.

♻️ Suggested refactor

```diff
-export type RetrieveAssistantAnswerMemoriesArgs = {
+export interface RetrieveAssistantAnswerMemoriesArgs {
   threadId: string
   query: string
   scopeType: ScopeType
-  scopeId: string
-  docNodeId: string
+  scopeId: string | null
+  docNodeId: string | null
   lastAssistantMessageId: string | null
   k?: number
   minCosine?: number
   tokenBudget?: number
   enableDocRootFallback?: boolean
-}
+}
```

As per coding guidelines: Use interface for objects/classes; type for unions/tuples/aliases; enum for fixed sets.
apps/pro-web/app/actions/chat-memory/get-scope-conversation-window.ts (1)
12-23: Prefer interfaces for object shapes (and allow nullable results). This keeps types aligned with project guidelines and lets the return shape express "no result" explicitly.

♻️ Suggested refactor

```diff
-export type ScopeConversationWindow = {
-  lastUser?: Message | UIMessage
-  lastAssistant?: Message | UIMessage
-  userQuestions: Array<Message | UIMessage>
-}
+export interface ScopeConversationWindow {
+  lastUser: Message | UIMessage | null
+  lastAssistant: Message | UIMessage | null
+  userQuestions: Array<Message | UIMessage>
+}

-type ScopeWhereArgs = {
+interface ScopeWhereArgs {
   threadId: string
   scopeType: ScopeType
   scopeId: string | null
   docNodeId: string | null
-}
+}
```

As per coding guidelines: Use interface for objects/classes; type for unions/tuples/aliases; enum for fixed sets.
apps/pro-web/types/operation-meta.ts (2)
31-42: Use an interface for `MbScopeInput`. Keep object-shape types consistent. As per coding guidelines: Use interface for objects/classes; type for unions/tuples/aliases; enum for fixed sets; const for literals.

♻️ Suggested refactor

```diff
-export type MbScopeInput = {
+export interface MbScopeInput {
   scopeType: ScopeType
   docNodeId: string
   scopeId: string
   sections: MarkdownSection[]
   actionType: WorkspaceActionType
   fullMarkdown: string
   excludeFromQChain: boolean
   ragEligible: boolean
   mutationAllowed: boolean
   intentSource: IntentSource
-}
+}
```

13-26: Convert `MbScopeInput` from a type alias to an interface for consistency with object definitions. Per coding guidelines, use `interface` for object shapes. `MbScopeInput` (line 31) is currently defined as a `type` but should be an `interface` to align with the project's type definition practices. The camelCase fields in `OperationMeta` are correctly mapped to snake_case in the persisted `metadata.mb_op` field at the serialization layer; no conversion issue exists at the read/write boundaries.

apps/pro-web/types/index.ts (1)
222-234: Consider converting `JSONResponseStream` to an interface. The `mbOpMeta` addition looks good; switching to an interface keeps object-shape types consistent. As per coding guidelines: Use interface for objects/classes; type for unions/tuples/aliases; enum for fixed sets; const for literals.

♻️ Suggested refactor

```diff
-export type JSONResponseStream = {
+export interface JSONResponseStream {
   model: string
   id?: string
   messages?: OpenAI.ChatCompletionMessageParam[]
   previewToken?: string
   isPowerUp?: boolean
   webSearch?: boolean
   stream?: boolean
   temperature?: number
   maxTokens?: number
   chatbot?: Pick<Chatbot, 'categories' | 'chatbotId'>
   mbOpMeta?: OperationMetaPatch
-}
+}
```

apps/pro-web/lib/constants/prompts.ts (1)
375-387: Use an interface for `WorkspaceMetaPromptProps`. Object-shape types should use interfaces. As per coding guidelines: Use interface for objects/classes; type for unions/tuples/aliases; enum for fixed sets; const for literals.

♻️ Suggested refactor

```diff
-export type WorkspaceMetaPromptProps = {
+export interface WorkspaceMetaPromptProps {
   documentName: string
   sections: MarkdownSection[]
   activeSectionTitle: string
   actionType?: WorkspaceActionType
   highlightedContent?: string
-}
+}
```

apps/pro-web/lib/chat/chat-input-utils.ts (1)
4-16: Return `'unknown'` for unrecognized actions. This avoids silently treating unsupported actions as `ask` and lets callers decide the fallback.

♻️ Suggested refactor

```diff
   default:
-    return 'ask'
+    return 'unknown'
```
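With the suggested default applied, the mapper could read as follows. The action values mirror `WorkspaceActionType` from the PR's class diagram; the exact function shape is an assumption based on the `toActionType` helper referenced in the sequence diagram.

```typescript
type WorkspaceActionType = 'ask' | 'rewrite' | 'expand' | 'edit' | 'unknown'

// Map a raw UI action string to a WorkspaceActionType, surfacing
// unsupported actions as 'unknown' instead of silently treating them as 'ask'.
function toActionType(
  activeAction: string | null | undefined,
): WorkspaceActionType {
  switch (activeAction) {
    case 'ask':
    case 'rewrite':
    case 'expand':
    case 'edit':
      return activeAction
    default:
      return 'unknown'
  }
}
```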
```ts
export async function selectScopeConversationWindow(args: {
  threadId: string
  scopeType: ScopeType
  scopeId: string
  docNodeId: string
}): Promise<ScopeConversationWindow> {
  const { threadId, scopeType, scopeId, docNodeId } = args
  if (!threadId)
    return { userQuestions: [], lastAssistant: undefined, lastUser: undefined }

  try {
    const scopeFilter = buildScopeFilter({
      threadId,
      scopeType,
      scopeId,
      docNodeId,
    })

    const columnsToSelect = {
      messageId: message.messageId,
      content: message.content,
      createdAt: message.createdAt,
      role: message.role,
      excludeFromQChain: message.excludeFromQChain,
    }

    // Compose queries
    const userQuestionsPromise = db
      .select(columnsToSelect)
      .from(message)
      .where(
        and(
          scopeFilter,
          eq(message.role, 'user'),
          // ? Preserving behavior: include rows where excludeFromQChain is false OR null.
          or(
            eq(message.excludeFromQChain, false),
            isNull(message.excludeFromQChain),
          ),
        ),
      )
      .orderBy(asc(message.createdAt))

    const lastAssistantPromise = db
      .select(columnsToSelect)
      .from(message)
      .where(and(scopeFilter, eq(message.role, 'assistant')))
      .orderBy(desc(message.createdAt))
      .limit(1)

    const lastUserPromise = db
      .select(columnsToSelect)
      .from(message)
      .where(and(scopeFilter, eq(message.role, 'user')))
      .orderBy(desc(message.createdAt))
      .limit(1)

    const [userQuestionsRows, lastAssistantRows, lastUserRows] =
      await Promise.all([
        userQuestionsPromise,
        lastAssistantPromise,
        lastUserPromise,
      ])

    return {
      userQuestions: userQuestionsRows as Message[],
      lastAssistant: lastAssistantRows?.[0] as Message,
      lastUser: lastUserRows?.[0] as Message,
    }
  } catch (error) {
    console.error('[selectScopeConversationWindow] Error:', error)
    // Fail-open: never block prompt assembly.
    return { userQuestions: [], lastAssistant: undefined, lastUser: undefined }
  }
}
```
Handle whitespace thread IDs and nullable scope inputs to preserve legacy behavior.
if (!threadId) won’t catch ' ', and callers/tests pass scopeId/docNodeId: null. This currently triggers DB queries and type mismatches. Trim threadId, accept nullable scopeId/docNodeId, and return explicit nulls when no rows.
🛠️ Suggested fix
```diff
-export async function selectScopeConversationWindow(args: {
-  threadId: string
-  scopeType: ScopeType
-  scopeId: string
-  docNodeId: string
-}): Promise<ScopeConversationWindow> {
-  const { threadId, scopeType, scopeId, docNodeId } = args
-  if (!threadId)
-    return { userQuestions: [], lastAssistant: undefined, lastUser: undefined }
+export async function selectScopeConversationWindow(args: {
+  threadId: string
+  scopeType: ScopeType
+  scopeId: string | null
+  docNodeId: string | null
+}): Promise<ScopeConversationWindow> {
+  const { threadId, scopeType, scopeId, docNodeId } = args
+  const normalizedThreadId = threadId.trim()
+  if (!normalizedThreadId) {
+    return { userQuestions: [], lastAssistant: null, lastUser: null }
+  }
   ...
-    const scopeFilter = buildScopeFilter({
-      threadId,
+    const scopeFilter = buildScopeFilter({
+      threadId: normalizedThreadId,
       scopeType,
       scopeId,
       docNodeId,
     })
   ...
-    return {
-      userQuestions: userQuestionsRows as Message[],
-      lastAssistant: lastAssistantRows?.[0] as Message,
-      lastUser: lastUserRows?.[0] as Message,
-    }
+    return {
+      userQuestions: userQuestionsRows as Message[],
+      lastAssistant: (lastAssistantRows?.[0] ?? null) as Message | null,
+      lastUser: (lastUserRows?.[0] ?? null) as Message | null,
+    }
   } catch (error) {
     console.error('[selectScopeConversationWindow] Error:', error)
     // Fail-open: never block prompt assembly.
-    return { userQuestions: [], lastAssistant: undefined, lastUser: undefined }
+    return { userQuestions: [], lastAssistant: null, lastUser: null }
   }
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
export async function selectScopeConversationWindow(args: {
  threadId: string
  scopeType: ScopeType
  scopeId: string | null
  docNodeId: string | null
}): Promise<ScopeConversationWindow> {
  const { threadId, scopeType, scopeId, docNodeId } = args
  const normalizedThreadId = threadId.trim()
  if (!normalizedThreadId) {
    return { userQuestions: [], lastAssistant: null, lastUser: null }
  }
  try {
    const scopeFilter = buildScopeFilter({
      threadId: normalizedThreadId,
      scopeType,
      scopeId,
      docNodeId,
    })
    const columnsToSelect = {
      messageId: message.messageId,
      content: message.content,
      createdAt: message.createdAt,
      role: message.role,
      excludeFromQChain: message.excludeFromQChain,
    }
    // Compose queries
    const userQuestionsPromise = db
      .select(columnsToSelect)
      .from(message)
      .where(
        and(
          scopeFilter,
          eq(message.role, 'user'),
          // ? Preserving behavior: include rows where excludeFromQChain is false OR null.
          or(
            eq(message.excludeFromQChain, false),
            isNull(message.excludeFromQChain),
          ),
        ),
      )
      .orderBy(asc(message.createdAt))
    const lastAssistantPromise = db
      .select(columnsToSelect)
      .from(message)
      .where(and(scopeFilter, eq(message.role, 'assistant')))
      .orderBy(desc(message.createdAt))
      .limit(1)
    const lastUserPromise = db
      .select(columnsToSelect)
      .from(message)
      .where(and(scopeFilter, eq(message.role, 'user')))
      .orderBy(desc(message.createdAt))
      .limit(1)
    const [userQuestionsRows, lastAssistantRows, lastUserRows] =
      await Promise.all([
        userQuestionsPromise,
        lastAssistantPromise,
        lastUserPromise,
      ])
    return {
      userQuestions: userQuestionsRows as Message[],
      lastAssistant: (lastAssistantRows?.[0] ?? null) as Message | null,
      lastUser: (lastUserRows?.[0] ?? null) as Message | null,
    }
  } catch (error) {
    console.error('[selectScopeConversationWindow] Error:', error)
    // Fail-open: never block prompt assembly.
    return { userQuestions: [], lastAssistant: null, lastUser: null }
  }
}
```
🤖 Prompt for AI Agents
In `@apps/pro-web/app/actions/chat-memory/get-scope-conversation-window.ts` around
lines 112 - 185, selectScopeConversationWindow currently treats a
whitespace-only threadId as truthy and passes null scopeId/docNodeId into DB
queries, and returns undefined for missing lastAssistant/lastUser; to fix, trim
threadId at the top and short-circuit (return empty userQuestions and null
lastAssistant/lastUser) if trimmed threadId is empty, update the function
signature/usage to accept nullable scopeId/docNodeId and pass them through into
buildScopeFilter unchanged, and after the Promise.all convert userQuestionsRows
to Message[] and explicitly set lastAssistant and lastUser to null when
lastAssistantRows?.[0] or lastUserRows?.[0] are missing (instead of leaving
undefined) so types and legacy behavior match.
```diff
 const { threadId, messageId, content, scopeType, scopeId, docNodeId } = args

 if (
   !threadId ||
   !messageId ||
   !content ||
   (scopeType === 'heading_thread' && !scopeId)
 )
   return

 // 1) embed content (fail-closed)
 let embedding: number[]
 try {
   embedding = await embedText1536(content)
-} catch {
+} catch (e) {
+  console.error('[ingestAssistantAnswer] Embedding failed:', e)
   return
 }
 if (embedding.length !== 1536) return

-// 2) content hash
-const contentHash = sha256(content)
-
-// 3) vector text literal for pgvector
-const embeddingText = vectorToPgText(embedding)
-
-// 4) compute next turn index per thread (keep existing behavior; do NOT scope-partition here)
-const queryMaxTurnIndex = /* GraphQL */ `
-  query MaxTurnIndex($threadId: uuid!) {
-    assistant_answers(
-      where: { thread_id: { _eq: $threadId } }
-      order_by: { turn_index: desc }
-      limit: 1
-    ) {
-      turn_index
-    }
-  }
-`
-
-type MaxTurnIndexData = {
-  assistant_answers: Array<{ turn_index: number | null }>
-}
-
-let nextTurnIndex = 1
-try {
-  const maxRes = await hasuraGql<MaxTurnIndexData>(queryMaxTurnIndex, {
-    threadId,
-  })
-  const maxTurn = maxRes.assistant_answers?.[0]?.turn_index ?? 0
-  nextTurnIndex = (typeof maxTurn === 'number' ? maxTurn : 0) + 1
-} catch {
-  return
-}
-
-// 5) insert (idempotent by message_id)
-const insert = /* GraphQL */ `
-  mutation InsertAssistantAnswer($object: assistant_answers_insert_input!) {
-    insert_assistant_answers_one(
-      object: $object
-      on_conflict: {
-        constraint: assistant_answers_message_id_uq
-        update_columns: []
-      }
-    ) {
-      id
-    }
-  }
-`
-
-// Build the base insert object with only known-stable fields.
-const baseObject: Record<string, unknown> = {
-  thread_id: threadId,
-  message_id: messageId,
-  turn_index: nextTurnIndex,
-  content,
-  content_hash: contentHash,
-  embedding: embeddingText,
-}
-
-// Add scope fields only when provided (undefined will be omitted by JSON.stringify)
-if (scopeType !== undefined) baseObject.scope_type = scopeType
-if (scopeType !== undefined) baseObject.scope_id = scopeId ?? null
-
-// Add doc_node_id only when present (new column during rollout)
-if (docNodeId) baseObject.doc_node_id = docNodeId
-
-/**
- * Backward-compatible write strategy:
- * - Prefer writing doc_node_id when available.
- * - If Hasura schema is not updated yet (unknown field), retry once without doc_node_id.
- * - Fail-closed otherwise.
- */
-try {
-  await hasuraGql(insert, { object: baseObject })
+// 2) content hash
+const contentHash = sha256(content)
+
+// 3) vector text literal for pgvector
+// Drizzle `vector` type often needs raw string or array depending on driver.
+// Using array usually works with pg + pgvector.
+// If Drizzle schema defines it as `vector(...)`, providing number[] usually works.
+
+// 4) compute next turn index per thread
+const lastEntry = await db.query.assistantAnswers.findFirst({
+  where: eq(assistantAnswers.threadId, threadId),
+  orderBy: [desc(assistantAnswers.turnIndex)],
+  columns: { turnIndex: true },
+})
+const nextTurnIndex = (lastEntry?.turnIndex ?? 0) + 1
+
+// 5) insert (idempotent by message_id)
+await db
+  .insert(assistantAnswers)
+  .values({
+    content,
+    scopeId,
+    threadId,
+    docNodeId,
+    embedding, // Pass raw number[] array, Drizzle/pg handles vector formatting
+    messageId,
+    contentHash,
+    turnIndex: nextTurnIndex,
+    scopeType: (scopeType || 'chat_thread') as ScopeType, // Default to chat_thread if undefined
+  })
```
🧩 Analysis chain

🏁 Script executed:

```shell
cat -n apps/pro-web/app/actions/chat-memory/ingest-assistant-answer.ts
```

Repository: bitcashorg/masterbots
Length of output: 3281

🏁 Script executed:

```shell
# Search for ScopeType definition
rg -n "type ScopeType|enum ScopeType" -A 5 apps
```

Repository: bitcashorg/masterbots
Length of output: 519

🏁 Script executed:

```shell
# Find where ingestAssistantAnswer is called from
rg -n "ingestAssistantAnswer" -B 2 -A 2 apps --type ts --type tsx
```

Repository: bitcashorg/masterbots
Length of output: 92

🏁 Script executed:

```shell
# Search for call sites of ingestAssistantAnswer
rg -n "ingestAssistantAnswer" -B 2 -A 2 apps
```

Repository: bitcashorg/masterbots
Length of output: 3239

🏁 Script executed:

```shell
# Search for where scopeType values are passed to ingestAssistantAnswer
rg -n "scopeType\s*:" -B 3 -A 1 apps --type ts
```

Repository: bitcashorg/masterbots
Length of output: 26172
Add type guard to validate scopeType before insert.
scopeType is string | null but is cast to ScopeType without validation. Unexpected values will fail the database insert and silently skip ingestion due to the try-catch wrapper, degrading RAG memory. Validate the value explicitly before use.
✅ Suggested guard + normalization
const { threadId, messageId, content, scopeType, scopeId, docNodeId } = args
+
+ const normalizedScopeType: ScopeType =
+ scopeType === 'chat_thread' ||
+ scopeType === 'doc_root' ||
+ scopeType === 'heading_thread'
+ ? scopeType
+ : 'chat_thread'
if (
!threadId ||
!messageId ||
!content ||
- (scopeType === 'heading_thread' && !scopeId)
+ (normalizedScopeType === 'heading_thread' && !scopeId)
)
return
...
- scopeType: (scopeType || 'chat_thread') as ScopeType,
+ scopeType: normalizedScopeType,

🤖 Prompt for AI Agents
+ scopeType: normalizedScopeType,🤖 Prompt for AI Agents
In `@apps/pro-web/app/actions/chat-memory/ingest-assistant-answer.ts` around lines
34 - 85, The code casts scopeType to ScopeType without validating its value
before calling db.insert(assistantAnswers), which can cause DB insert failures;
add a type guard that checks scopeType against the allowed ScopeType values (or
enum/union) and normalize it: if scopeType is null/undefined use 'chat_thread',
if it is one of the permitted ScopeType strings use it, otherwise bail/skip or
map to a safe default before the insert; implement this check near where
scopeType is read (the variables destructure and before computing
nextTurnIndex/values) and replace the direct cast (scopeType as ScopeType) with
the validated/normalized variable.
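One way to express the validation this comment asks for is a reusable type guard plus a normalizer. This is a hedged, standalone sketch: the three `ScopeType` members are taken from the scope values mentioned throughout this review, not from the project's actual type file.

```typescript
// Hypothetical local copy; the real ScopeType union lives in the app's types.
const SCOPE_TYPES = ['chat_thread', 'doc_root', 'heading_thread'] as const
type ScopeType = (typeof SCOPE_TYPES)[number]

// Narrowing guard: true only for known scope values.
function isScopeType(value: unknown): value is ScopeType {
  return (
    typeof value === 'string' &&
    (SCOPE_TYPES as readonly string[]).includes(value)
  )
}

// Normalization used before the insert: null/undefined/unexpected values
// fall back to 'chat_thread' instead of being cast blindly.
function normalizeScopeType(value: string | null | undefined): ScopeType {
  return isScopeType(value) ? value : 'chat_thread'
}
```

With this in place, the insert can use `scopeType: normalizeScopeType(scopeType)` and drop the `as ScopeType` cast entirely.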
```diff
+const lastEntry = await db.query.assistantAnswers.findFirst({
+  where: eq(assistantAnswers.threadId, threadId),
+  orderBy: [desc(assistantAnswers.turnIndex)],
+  columns: { turnIndex: true },
+})
-const maxTurn = maxRes.assistant_answers?.[0]?.turn_index ?? 0
-nextTurnIndex = (typeof maxTurn === 'number' ? maxTurn : 0) + 1
-} catch {
-  return
-}
-
-// 5) insert (idempotent by message_id)
-const insert = /* GraphQL */ `
-  mutation InsertAssistantAnswer($object: assistant_answers_insert_input!) {
-    insert_assistant_answers_one(
-      object: $object
-      on_conflict: {
-        constraint: assistant_answers_message_id_uq
-        update_columns: []
-      }
-    ) {
-      id
-    }
-  }
-`
-
-// Build the base insert object with only known-stable fields.
-const baseObject: Record<string, unknown> = {
-  thread_id: threadId,
-  message_id: messageId,
-  turn_index: nextTurnIndex,
-  content,
-  content_hash: contentHash,
-  embedding: embeddingText,
-}
-
-// Add scope fields only when provided (undefined will be omitted by JSON.stringify)
-if (scopeType !== undefined) baseObject.scope_type = scopeType
-if (scopeType !== undefined) baseObject.scope_id = scopeId ?? null
-
-// Add doc_node_id only when present (new column during rollout)
-if (docNodeId) baseObject.doc_node_id = docNodeId
-
-/**
- * Backward-compatible write strategy:
- * - Prefer writing doc_node_id when available.
- * - If Hasura schema is not updated yet (unknown field), retry once without doc_node_id.
- * - Fail-closed otherwise.
- */
-try {
-  await hasuraGql(insert, { object: baseObject })
+const nextTurnIndex = (lastEntry?.turnIndex ?? 0) + 1
+
+// 5) insert (idempotent by message_id)
+await db
+  .insert(assistantAnswers)
+  .values({
+    content,
+    scopeId,
+    threadId,
+    docNodeId,
+    embedding, // Pass raw number[] array, Drizzle/pg handles vector formatting
+    messageId,
+    contentHash,
+    turnIndex: nextTurnIndex,
+    scopeType: (scopeType || 'chat_thread') as ScopeType, // Default to chat_thread if undefined
+  })
+  .onConflictDoNothing({
+    target: assistantAnswers.messageId,
+  })
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Check schema constraints for assistantAnswers table
fd -e ts -e tsx "schema" packages/mb-drizzle --max-depth 3 | head -20
```

Repository: bitcashorg/masterbots
Length of output: 106

🏁 Script executed:

```shell
# Look for assistantAnswers schema definition
rg -n "assistantAnswers|assistant_answers" packages/mb-drizzle/src/drizzle/schema.ts -A 20 -B 2
```

Repository: bitcashorg/masterbots
Length of output: 3406

🏁 Script executed:

```shell
# Check transaction patterns in existing server actions
rg -n "db\.transaction|transaction\(" apps/pro-web/app/actions --max-count 10
```

Repository: bitcashorg/masterbots
Length of output: 149

🏁 Script executed:

```shell
# Check how turnIndex is used across the codebase
rg -n "turnIndex" apps/pro-web -g '!**/*.test.*' -B 2 -A 2
```

Repository: bitcashorg/masterbots
Length of output: 5724

🏁 Script executed:

```shell
# Look at the full ingest-assistant-answer.ts file to understand context
cat -n apps/pro-web/app/actions/chat-memory/ingest-assistant-answer.ts | head -100
```

Repository: bitcashorg/masterbots
Length of output: 3281
Wrap turnIndex allocation in a transaction to prevent duplicate ordering indices under concurrent writes.
The read-then-insert sequence at lines 64–88 can assign the same turnIndex to multiple answers in the same thread under concurrent calls:
- Thread A reads the max `turnIndex`, computes `nextTurnIndex`
- Thread B reads the max `turnIndex` before A inserts, computes the same `nextTurnIndex`
- Both inserts succeed (because `onConflictDoNothing` only guards `messageId`, not `turnIndex`)
- Result: two rows with identical `turnIndex` for the same thread, breaking ordering semantics used in RAG memory retrieval
Since no unique constraint exists on (threadId, turnIndex), wrap the query and insert in a Drizzle transaction and use a per-thread lock (e.g., advisory lock) to ensure serial allocation:
```typescript
await db.transaction(async (tx) => {
  // Lock to enforce serial turnIndex allocation per thread
  // Insert with transaction guarantee
})
```
This pattern is already used in admin.actions.ts (line 150) for similar multi-step operations requiring atomicity.
🤖 Prompt for AI Agents
In `@apps/pro-web/app/actions/chat-memory/ingest-assistant-answer.ts` around lines
64 - 88, Wrap the read-then-insert for turnIndex allocation inside a Drizzle
transaction and take a per-thread advisory lock to serialize allocations: move
the query of assistantAnswers.findFirst and the subsequent db.insert(...) into a
single await db.transaction(async (tx) => { ... }) block, run a tx-level
advisory lock keyed by threadId (e.g., pg_advisory_xact_lock using tx.execute or
tx.raw SQL) before computing nextTurnIndex, then perform the insert on tx
(keeping the .onConflictDoNothing on assistantAnswers.messageId) so concurrent
requests for the same thread cannot compute the same nextTurnIndex; update
references to use tx.query/tx.insert instead of db.query/db.insert and default
scopeType logic remains unchanged.
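Since `pg_advisory_xact_lock` takes a bigint key while `threadId` is a UUID string, the transaction sketch above also needs a stable string-to-int64 key derivation. Below is a minimal, hedged sketch using FNV-1a (an assumption; any stable hash works for lock keys). The commented `db`/`tx` usage is illustrative, not the project's actual API:

```typescript
// Derive a stable signed 64-bit advisory-lock key from a thread ID string.
// FNV-1a 64-bit: well-known offset basis and prime, folded mod 2^64.
function advisoryLockKeyFromThreadId(threadId: string): bigint {
  const FNV_OFFSET = 0xcbf29ce484222325n
  const FNV_PRIME = 0x100000001b3n
  let hash = FNV_OFFSET
  for (const ch of threadId) {
    hash ^= BigInt(ch.codePointAt(0) ?? 0)
    hash = BigInt.asUintN(64, hash * FNV_PRIME)
  }
  // pg_advisory_xact_lock expects a signed bigint.
  return BigInt.asIntN(64, hash)
}

// Illustrative usage inside the suggested transaction (hypothetical tx API):
//
// await db.transaction(async (tx) => {
//   const key = advisoryLockKeyFromThreadId(threadId)
//   await tx.execute(sql`select pg_advisory_xact_lock(${key})`)
//   const lastEntry = await tx.query.assistantAnswers.findFirst({ /* ... */ })
//   const nextTurnIndex = (lastEntry?.turnIndex ?? 0) + 1
//   await tx.insert(assistantAnswers)
//     .values({ /* ... */ turnIndex: nextTurnIndex })
//     .onConflictDoNothing({ target: assistantAnswers.messageId })
// })
```

The transaction-scoped lock (`_xact_` variant) releases automatically at commit or rollback, so no explicit unlock is needed.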
```diff
 if (scopeType === 'heading_thread' && !scopeId) return []

-// Scope-aware retrieval (optional for backwards compatibility)
-scopeType?: AssistantAnswerScopeType | null
-scopeId?: string | null
-/**
- * Canonical root doc node id.
- * Optional during rollout; used to enable (doc_node_id + scope_id) reads with legacy fallback.
- */
-docNodeId?: string | null
-/** Allow snake_case during migration */
-doc_node_id?: string | null
-// Commit 9: only used when scopeType === 'heading_thread'
-enableDocRootFallback?: boolean
-// Legacy doc_root scope_id (often == docNodeId); leave null to derive from docNodeId
-docRootScopeId?: string | null
-}
-
-type SearchRow = {
-  message_id: string
-  turn_index: number
-  content: string
-  cosine: number
-  created_at?: string
-}
-
-const DEFAULT_K = 3
-const DEFAULT_MIN_COSINE = 0.6
-const DEFAULT_TOKEN_BUDGET = 600
-
-function cleanNullableText(v: unknown): string | null {
-  if (typeof v !== 'string') return v == null ? null : String(v).trim() || null
-  const t = v.trim()
-  return t ? t : null
-}
+// 1) Embed query
+let embedding: number[]
+try {
+  embedding = await embedText1536(query)
+} catch (e) {
+  console.error('[retrieveMemories] Embedding failed:', e)
+  return []
+}
+if (embedding.length !== 1536) return []
+
+const runSearchArgs = {
+  scopeType,
+  scopeId,
+  docNodeId,
+  threadId,
+  embedding,
+  minCosine,
+  k,
+  excludeMessageId: lastAssistantMessageId,
+}
+
+// 3) Primary Search (Scoped)
+let memories = await runSearch(runSearchArgs)
+
+// Deduplicate if dual-match returned same messages (unlikely if unique constraint, but logic might overlap)
+const uniqueMemories = new Map<string, AssistantAnswerMemory>()
+for (const m of memories) uniqueMemories.set(m.messageId, m)
+memories = Array.from(uniqueMemories.values())
+
+memories = enforceTokenBudget(memories, tokenBudget)
+
+if (
+  enableDocRootFallback &&
+  scopeType === 'heading_thread' &&
+  memories.length === 0
+) {
+  const fallbackMemories = await runSearch({
+    ...runSearchArgs,
+    scopeType: 'doc_root',
+    scopeId: docNodeId,
+  })
+  memories = enforceTokenBudget(fallbackMemories, tokenBudget)
+}
```
Guard against missing docNodeId to prevent cross‑scope memory matches.
If scopeType is heading_thread/doc_root but docNodeId is empty, runSearch falls back to isNull filters and the doc‑root fallback can pull unrelated memories. Add a defensive guard and only run the fallback when docNodeId is present.
🛡️ Suggested guard
- if (scopeType === 'heading_thread' && !scopeId) return []
+ if (scopeType === 'heading_thread' && !scopeId) return []
+ if ((scopeType === 'heading_thread' || scopeType === 'doc_root') && !docNodeId?.trim()) {
+ return []
+ }
...
- if (
- enableDocRootFallback &&
- scopeType === 'heading_thread' &&
- memories.length === 0
- ) {
+ if (
+ enableDocRootFallback &&
+ scopeType === 'heading_thread' &&
+ memories.length === 0 &&
+ docNodeId?.trim()
+ ) {

Also applies to: 128-165
🤖 Prompt for AI Agents
In `@apps/pro-web/app/actions/chat-memory/retrieve-assistant-answer-memories.ts`
around lines 39 - 83, The code currently allows searching with scopeType
'heading_thread'/'doc_root' even when docNodeId is missing, which can cause
cross‑scope matches; update the logic around runSearch and the doc_root fallback
to first guard that docNodeId is present (non-empty) before performing the
primary scoped search or the doc_root fallback: check scopeType (e.g.,
'heading_thread' / 'doc_root') and require docNodeId before calling
runSearch/creating runSearchArgs and before executing the enableDocRootFallback
fallback path that builds fallbackMemories; ensure you only spread runSearchArgs
and set scopeType: 'doc_root' when docNodeId exists, and keep subsequent
deduplication (uniqueMemories) and enforceTokenBudget unchanged.
```typescript
if (!scope?.docNodeId) {
  return {
    isValid: scope?.scopeType === 'chat_thread',
    error: {
      type: 'error',
      text: 'No document node ID provided for scope',
    },
    enrichedChatOptions: chatRequestOptions || {},
    optimisticThread: {} as Thread,
    nextOpMeta: DEFAULT_OP_META,
    effectiveWorkspacePrompts: workspacePrompts || [],
    docContextIncluded: false,
    headingContentVersion: 0,
    scopeId: null,
  }
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's find the full function signature and understand the context
cat apps/pro-web/lib/chat/chat-context-utils.ts | head -100
```

Repository: bitcashorg/masterbots
Length of output: 2872

🏁 Script executed:

```shell
# Let's find the PrepareChatContextParams type definition
rg -n "PrepareChatContextParams" apps/pro-web/lib/chat/chat-context-utils.ts -A 20
```

Repository: bitcashorg/masterbots
Length of output: 1300

🏁 Script executed:

```shell
# Find MbScopeInput type definition
rg -n "type MbScopeInput|interface MbScopeInput" apps/pro-web -g '*.ts*'
```

Repository: bitcashorg/masterbots
Length of output: 133

🏁 Script executed:

```shell
# Get the full MbScopeInput type definition
cat apps/pro-web/types/operation-meta.ts | sed -n '31,50p'
```

Repository: bitcashorg/masterbots
Length of output: 341

🏁 Script executed:

```shell
# Find all call sites of prepareMbChatContext
rg -n "prepareMbChatContext\(" apps/pro-web -g '*.ts*' -B 2 -A 3
```

Repository: bitcashorg/masterbots
Length of output: 880

🏁 Script executed:

```shell
# Get more context around the call site in use-mb-chat.tsx
cat apps/pro-web/lib/hooks/use-mb-chat.tsx | sed -n '900,950p'
```

Repository: bitcashorg/masterbots
Length of output: 1198

🏁 Script executed:

```shell
# Check if there are other places where appendWithMbContextPrompts is called
rg -n "appendWithMbContextPrompts" apps/pro-web -B 3 -A 3
```

Repository: bitcashorg/masterbots
Length of output: 17825

🏁 Script executed:

```shell
# Let's check the baseScope definition in chat-panel-pro.tsx
cat apps/pro-web/components/routes/pro/chat-panel-pro.tsx | sed -n '200,310p'
```

Repository: bitcashorg/masterbots
Length of output: 3138

🏁 Script executed:

```shell
# Get the full context of the other appendWithMbContextPrompts call sites in use-mb-chat.tsx
cat apps/pro-web/lib/hooks/use-mb-chat.tsx | sed -n '1025,1070p'
```

Repository: bitcashorg/masterbots
Length of output: 1049

🏁 Script executed:

```shell
# Let's check if scope is ever called without being passed (undefined/null case)
rg -n "appendWithMbContextPrompts\(" apps/pro-web -A 10 | grep -A 10 "scope"
```

Repository: bitcashorg/masterbots
Length of output: 1591

🏁 Script executed:

```shell
# Check if there are any calls without scope argument (last argument)
rg -n "appendWithMbContextPrompts\(" apps/pro-web -A 8 | grep -B 5 -A 3 "^\)"
```

Repository: bitcashorg/masterbots
Length of output: 88

🏁 Script executed:

```shell
# Let's look at all calls and check if scope is always provided
rg -n "appendWithMbContextPrompts" apps/pro-web -A 12 -B 1 | grep -E "appendWithMbContextPrompts|scope|undefined"
```

Repository: bitcashorg/masterbots
Length of output: 6731
Make scope required in the function signature or document the optional behavior explicitly.
While the code defensively handles missing docNodeId by allowing chat_thread scopes to proceed, the function signature incorrectly declares scope as optional. In practice, all 4 call sites always provide a scope object with required fields. Either make scope required at the type level to match actual usage, or if it's intentionally optional, add JSDoc clarifying that chat_thread flows can omit docNodeId.
🤖 Prompt for AI Agents
In `@apps/pro-web/lib/chat/chat-context-utils.ts` around lines 64 - 79, The
function currently accepts an optional parameter named scope but then branches
on scope?.docNodeId; make the parameter non-optional to match callers by
updating the function signature to require scope (and its required fields like
docNodeId/scopeType) and adjust any type/interface (e.g., Scope type) used by
this function, or if you intentionally allow missing scope, add a JSDoc above
this function documenting that scope may be omitted for chat_thread flows and
explaining the behavior when docNodeId is absent; reference the parameter name
scope, the docNodeId property, and the chat_thread branch in your changes so
callers and type definitions are updated accordingly.
```typescript
export const DEFAULT_OP_META: OperationMeta = {
  uiMode: 'chat',
  scopeType: 'chat_thread',
  scopeId: undefined,
  docNodeId: `${nanoid(4)}:${nanoid(5)}`, // (projectNameSlug:documentNameSlug)
  actionType: 'ask',
  excludeFromQChain: false,
  ragEligible: true,
  mutationAllowed: false,
  mutationApplied: false,
  intentSource: 'freeform',
  docContextIncluded: false,
  headingContentVersion: 0,
  docContextPolicy: 'always_include_latest_heading',
}
```
Avoid a shared docNodeId in DEFAULT_OP_META.
nanoid() runs at module load, so docNodeId is static across imports. If callers reuse or shallow-copy DEFAULT_OP_META without overriding, doc scopes can collide. Prefer a factory that generates a fresh object per use.
🛠️ Suggested fix
-export const DEFAULT_OP_META: OperationMeta = {
+export const createDefaultOpMeta = (): OperationMeta => ({
uiMode: 'chat',
scopeType: 'chat_thread',
scopeId: undefined,
docNodeId: `${nanoid(4)}:${nanoid(5)}`, // (projectNameSlug:documentNameSlug)
actionType: 'ask',
excludeFromQChain: false,
ragEligible: true,
mutationAllowed: false,
mutationApplied: false,
intentSource: 'freeform',
docContextIncluded: false,
headingContentVersion: 0,
docContextPolicy: 'always_include_latest_heading',
-}
+})🤖 Prompt for AI Agents
In `@apps/pro-web/lib/constants/rag/operations.ts` around lines 4 - 18,
DEFAULT_OP_META currently contains a module-evaluated docNodeId (generated by
nanoid at import time), causing a shared static value; replace this constant
with a factory that returns a fresh OperationMeta object each time (e.g.,
createDefaultOpMeta or makeDefaultOperationMeta) that sets docNodeId to a new
`${nanoid(4)}:${nanoid(5)}` on each call, keep other fields identical, and
update call sites to use the factory instead of importing DEFAULT_OP_META
directly so callers get unique docNodeId values.
```diff
 const handleFinish = async () => {
   const newThread: Thread | undefined = activeThread
     ? {
         ...activeThread,
-        messages: [...messages, ...(newThreadMessages as any)],
+        messages: [
+          ...messages,
+          ...newThreadMessages,
+        ] as Thread['messages'],
         metadata:
```
Filter out system prompts when merging thread messages.
messages from useChat includes system prompts. Merging them into thread.messages can pollute the thread state and any downstream consumers that expect only user/assistant messages. Consider filtering them before concatenation.
🧹 Suggested fix

```diff
-const newThread: Thread | undefined = activeThread
+const nonSystemMessages = messages.filter((m) => m.role !== 'system')
+const newThread: Thread | undefined = activeThread
   ? {
       ...activeThread,
-      messages: [
-        ...messages,
-        ...newThreadMessages,
-      ] as Thread['messages'],
+      messages: [
+        ...nonSystemMessages,
+        ...newThreadMessages,
+      ] as Thread['messages'],
       metadata: ...
     }
   : undefined
```

🤖 Prompt for AI Agents
In `@apps/pro-web/lib/hooks/use-mb-chat.tsx` around lines 604 - 612, When building
the merged thread in handleFinish (the newThread object created from
activeThread), filter out system prompts from both messages and
newThreadMessages before concatenation so thread.messages contains only
user/assistant content; update the creation of newThread.messages (in
handleFinish) to use filtered arrays (e.g., filter by role !== 'system' or the
equivalent message type check) and then concatenate those filtered arrays into
Thread['messages'].
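The filter step the prompt describes can be sketched in isolation. The `Message` shape here is simplified; the real `Thread['messages']` type is richer.

```typescript
// Simplified message shape; the real Thread['messages'] type is richer.
type Role = 'system' | 'user' | 'assistant'
interface Message {
  role: Role
  content: string
}

// Drop system prompts from the live useChat messages before merging with the
// new thread messages, so thread state holds only user/assistant turns.
function mergeThreadMessages(
  liveMessages: Message[],
  newThreadMessages: Message[],
): Message[] {
  const nonSystem = liveMessages.filter((m) => m.role !== 'system')
  return [...nonSystem, ...newThreadMessages]
}
```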
```diff
 export function makeHeadingDocNodeId(
   docRootId: string | null | undefined,
   sectionId: string | null | undefined,
-): string | null {
+): string {
   const root = docRootId?.trim()
   const sec = sectionId?.trim()
-  if (!root || !sec) return null
+  if (!root || !sec) return ''
   return `${root}::${sec}`
```
Avoid empty-string doc node IDs.
Returning '' can bypass nullish coalescing (docNodeId ?? fallback) and persist an invalid ID. Prefer null (or update callers to use ||) so fallbacks still work.
✅ Suggested fix

```diff
 export function makeHeadingDocNodeId(
   docRootId: string | null | undefined,
   sectionId: string | null | undefined,
-): string {
+): string | null {
   const root = docRootId?.trim()
   const sec = sectionId?.trim()
-  if (!root || !sec) return ''
+  if (!root || !sec) return null
   return `${root}::${sec}`
 }
```

🤖 Prompt for AI Agents
In `@apps/pro-web/lib/queries/use-workspace-structure.ts` around lines 42 - 49,
The function makeHeadingDocNodeId currently returns an empty string when inputs
are missing which can bypass nullish coalescing; change its signature to return
string | null and return null instead of '' when root or sec are falsy, keeping
the `${root}::${sec}` return for valid inputs, and update any callers that rely
on an empty-string fallback to handle null (or use `||` instead of `??`) so
fallbacks continue to work.
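The failure mode is easy to demonstrate in isolation: `??` only falls back on null/undefined, so an empty string sails straight through it, while `||` and a `null` return both restore the intended fallback behavior.

```typescript
// Why an empty-string ID defeats nullish-coalescing fallbacks.
const emptyId: string = ''

// `??` falls back only on null/undefined, so '' survives as a "valid" ID.
const viaNullish = emptyId ?? 'thread-scope' // stays ''

// `||` treats '' as falsy, so the fallback applies.
const viaOr = emptyId || 'thread-scope' // 'thread-scope'

// Returning null instead restores the intended `??` behavior.
const nullId: string | null = null
const viaNullishOnNull = nullId ?? 'thread-scope' // 'thread-scope'
```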
```diff
-import { selectScopeConversationWindow } from './select-scope-conversation-window'
+import { selectScopeConversationWindow } from '../app/actions/chat-memory/get-scope-conversation-window'
```
Update test mocks to match the Drizzle‑based implementation.
The new import targets the Drizzle‑backed action, but the test still mocks mb-genql and inspects GraphQL where clauses. This will either hit the real DB or no‑op. Please mock mb-drizzle’s db chain (or convert to an integration test) so assertions still exercise the new logic.
🤖 Prompt for AI Agents
In `@apps/pro-web/lib/select-scope-conversation-window.test.ts` at line 22, The
test still mocks mb-genql and inspects GraphQL where clauses but the
implementation now uses the Drizzle-backed action selectScopeConversationWindow;
update the test to mock the mb-drizzle export (the db chain) used by that action
instead of mb-genql: create a mock db object with the chainable methods your
action calls (e.g., db.select / db.from / db.where / db.orderBy / db.limit or
the exact method names used in selectScopeConversationWindow) and assert those
methods are called with the expected arguments, or convert the test to an
integration test; ensure the mock is injected via jest.mock('mb-drizzle', ...)
and that your assertions reference the selectScopeConversationWindow behavior
rather than GraphQL where clauses.
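One way to mock the db chain without a real database is a hand-rolled chainable stub that records every call. The method names (`select`/`from`/`where`/`orderBy`/`limit`) are assumptions here and should be aligned with whatever `selectScopeConversationWindow` actually calls.

```typescript
// Hand-rolled chainable mock in the shape of a Drizzle query builder.
// The method names are assumptions; align them with the real action's calls.
interface RecordedCall {
  method: string
  args: unknown[]
}

function createDbMock(rows: unknown[]) {
  const calls: RecordedCall[] = []
  // `any` keeps the chain freely chainable in this sketch.
  const chain: any = {}
  for (const method of ['select', 'from', 'where', 'orderBy']) {
    chain[method] = (...args: unknown[]) => {
      calls.push({ method, args })
      return chain // every intermediate call returns the chain itself
    }
  }
  // Terminal call resolves with the stubbed rows, like an awaited query.
  chain.limit = (...args: unknown[]) => {
    calls.push({ method: 'limit', args })
    return Promise.resolve(rows)
  }
  return { chain, calls }
}
```

In a Jest test the chain would be handed to the action via `jest.mock('mb-drizzle', ...)`, and the assertions would then inspect `calls` instead of GraphQL where clauses.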
```diff
 export async function saveNewMessage(params: Partial<SaveNewMessageParams>) {
   const { jwt, docNodeId, doc_node_id, ...object } = params
   const client = getHasuraClient({ jwt })

   // During migration, callers may pass either camelCase or snake_case.
   // Only send the camelCase field to Hasura (GraphQL naming is camelCase here).
-  const canonicalDocNodeId: string | null = (docNodeId ??
-    doc_node_id ??
-    null) as any
+  const canonicalDocNodeId = docNodeId || null
   const insertObject =
-    canonicalDocNodeId == null
+    canonicalDocNodeId === null
       ? object
       : { ...object, docNodeId: canonicalDocNodeId }
```
Preserve snake_case doc_node_id during migration
Line 732 derives canonicalDocNodeId only from docNodeId, so callers still sending doc_node_id (still allowed in SaveNewMessageParams) will silently drop the association. That can orphan attachments/doc links.
🐛 Suggested fix

```diff
-const canonicalDocNodeId = docNodeId || null
+const canonicalDocNodeId = docNodeId ?? doc_node_id ?? null
```

🤖 Prompt for AI Agents
In `@apps/pro-web/services/hasura/hasura.service.ts` around lines 726 - 736, In
saveNewMessage, canonicalDocNodeId is derived only from docNodeId so callers
passing snake_case doc_node_id get dropped; update the logic that computes
canonicalDocNodeId to consider both docNodeId and doc_node_id (e.g., use
docNodeId ?? doc_node_id ?? null) and then build insertObject the same way
(include docNodeId only when canonicalDocNodeId is non-null) so the snake_case
value is preserved during the migration; update references to
canonicalDocNodeId, the params destructuring, and the insertObject creation
accordingly.
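The coalescing rule the prompt describes can be sketched with an illustrative params subset (the real `SaveNewMessageParams` has more fields): camelCase wins when present, snake_case is preserved as the fallback, and absence maps to `null`.

```typescript
// Illustrative subset of SaveNewMessageParams; the real type has more fields.
interface DocNodeParams {
  docNodeId?: string | null
  doc_node_id?: string | null
}

// Accept both casings during the migration: camelCase wins when present,
// snake_case is preserved as the fallback, absence maps to null.
function canonicalizeDocNodeId(params: DocNodeParams): string | null {
  return params.docNodeId ?? params.doc_node_id ?? null
}
```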
@coderabbitai this is the first part of the fixes for PR #600 mentioned here. I need to double-check that I am not missing any of the observations made. It would be great to link all the related comment feedback and issues (though I only see this one, and it doesn't cover all observations). With that information, I can take them up along with the observations from this PR.
Key links
What
Still open or to verify (from
Proposal: open a single tracking issue that links everything
Would you like me to create that umbrella issue now and assign it to you? If yes, I’ll post it with the mapping above and add any additional comment links you want included.
✏️ Learnings added
🧠 Learnings used
@coderabbitai let's proceed with option B, one issue per phase. Yes, I figured that P0 and P1 might be completed, but I still see the sloppy code. The Drizzle schema is auto-generated, so we should fix the migration instead, so that Drizzle picks up the fix on regeneration. An additional observation (after working on P0 and P1):
Created Issues
All issues are assigned to you and link back to:
@coderabbitai You didn't create the issues. I used the (fixed) filter
Let me create the remaining 3 issues without labels:
Created Issues Summary
Key Details
✅ All issues:
Recommended Next Steps
You can view all issues with this search:
Let me know if you need any adjustments to the issues!
🧠 Learnings used
✅ Created Issues (Latest Set)
All issues are assigned to you and link back to:
📝 Note: Duplicate Issues
I also see older duplicates (
🏷️ Missing Labels
The following labels don't exist in your repository (issues were created anyway, but labels couldn't be applied):
The
🎯 Next Steps
You can view all your phase issues with:
Let me know if you need any adjustments to the issues!
🧠 Learnings used
Summary by Sourcery
Refactor chat and workspace RAG infrastructure to use shared utilities and Drizzle-based persistence, tighten operation metadata and scoping semantics, and update message handling to be type-safe and robust across chat, workspace, and document workflows.
New Features:
Bug Fixes:
Enhancements:
Build:
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Improvements