⚡ Bolt: Concurrent LLM Tool Execution using asyncio.gather#107
ishaanxgupta wants to merge 1 commit into `main` from
Conversation
Optimization targeting `RetrievalPipeline` and `CodeRetrievalPipeline`. Previously, multiple tool calls returned by the LLM were processed sequentially in a loop, meaning the total duration was the sum of all individual tool latencies. This change leverages `asyncio.gather` to execute these external IO operations (e.g. Pinecone semantic search, Neo4j traversals) concurrently, bounding the latency to the longest single request while preserving order for accurate context assembly.
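The core of the change can be sketched as follows. This is an illustrative, self-contained example, not the repository's actual code: `_execute_tool` and the `dict`-based tool-call shape are hypothetical stand-ins for the pipeline's real dispatch logic.

```python
import asyncio

# Hypothetical stand-in for executing one LLM tool call
# (e.g. a Pinecone semantic search or a Neo4j traversal).
async def _execute_tool(call: dict) -> str:
    await asyncio.sleep(0.01)  # placeholder for external IO latency
    return f"result of {call['name']}"

async def run_tools_sequential(tool_calls: list[dict]) -> list[str]:
    # Before: each call awaited in turn, so total latency is the
    # sum of the individual tool latencies.
    results = []
    for call in tool_calls:
        results.append(await _execute_tool(call))
    return results

async def run_tools_concurrent(tool_calls: list[dict]) -> list[str]:
    # After: asyncio.gather awaits the coroutines concurrently and
    # returns results in the order the awaitables were passed, so
    # context assembly stays aligned with the original tool-call order.
    return await asyncio.gather(*(_execute_tool(c) for c in tool_calls))

calls = [{"name": "search_symbols"}, {"name": "search_files"}]
print(asyncio.run(run_tools_concurrent(calls)))
```

Note that `asyncio.gather` preserves input order in its result list regardless of which coroutine finishes first, which is what makes the swap safe for ordered context assembly.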
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
💡 What: Modified `CodeRetrievalPipeline` and `RetrievalPipeline` to execute their respective LLM tool calls (`search_symbols`, `search_files`, `search_annotations`, `impact_analysis`, etc.) concurrently using `asyncio.gather` instead of processing them sequentially in a `for` loop.

🎯 Why: During the retrieval phases, the LLM frequently responds with 2-4 tool calls (e.g., searching temporal + profile simultaneously, or querying both files and symbols for a module). Executing these sequentially created a performance bottleneck where the total latency became the sum of all individual requests.
📊 Impact: Expected to reduce the total latency of multi-tool retrieval turns by 40-60%. Instead of `Latency(T1) + Latency(T2)`, the turn time is now approximately `max(Latency(T1), Latency(T2))`.

🔬 Measurement: Verify the performance improvement by observing the log durations emitted by the pipeline when multiple tool calls occur in a single turn. The total pipeline execution time should be notably reduced when the LLM makes 2+ tool calls.
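The sum-vs-max latency claim can be demonstrated in isolation with simulated tool latencies. This is a minimal sketch using `asyncio.sleep` as a stand-in for the real external IO; the function names and latency values are illustrative only.

```python
import asyncio
import time

async def fake_tool(latency: float) -> float:
    # Stand-in for an external IO-bound tool call.
    await asyncio.sleep(latency)
    return latency

async def sequential(latencies: list[float]) -> list[float]:
    # Total time ~ sum(latencies)
    return [await fake_tool(l) for l in latencies]

async def concurrent(latencies: list[float]) -> list[float]:
    # Total time ~ max(latencies)
    return await asyncio.gather(*(fake_tool(l) for l in latencies))

async def timed(fn, latencies: list[float]) -> float:
    start = time.perf_counter()
    await fn(latencies)
    return time.perf_counter() - start

lats = [0.2, 0.3]
seq_time = asyncio.run(timed(sequential, lats))  # roughly sum: ~0.5 s
con_time = asyncio.run(timed(concurrent, lats))  # roughly max: ~0.3 s
print(f"sequential {seq_time:.2f}s, concurrent {con_time:.2f}s")
```

With two simulated calls of 0.2 s and 0.3 s, the sequential variant takes roughly their sum while the gathered variant takes roughly the longer of the two, mirroring the expected 40-60% reduction for multi-tool turns.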
PR created automatically by Jules for task 1358703756407159199 started by @ishaanxgupta