feat(welcome): parallel template messages with LLM inference for faster perceived welcome #616

M3gA-Mind wants to merge 1 commit into tinyhumansai:main
Show two template messages immediately while the LLM runs in the background, cutting the perceived wait from ~15s to ~0s (tinyhumansai#592):

- Template 1 (t≈0ms): time-of-day greeting that names any connected channels, built from the status snapshot without extra I/O
- Template 2 (t=4s): "Getting everything ready for you..." loading indicator, published via `tokio::join!` alongside the LLM future
- LLM response: published when inference completes, opening directly with personalised setup content (no duplicate greeting)

`run_proactive_welcome` now fires three `ProactiveMessageRequested` events rather than one. The two template helpers (`time_of_day_greeting`, `build_template_greeting`) are pure functions covered by 6 new unit tests. `prompt.md` gains a proactive-invocation section explaining that greeting templates are pre-delivered and the agent must skip them.
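Since the template helpers are described as pure functions built from the status snapshot, they are easy to sketch and unit-test. The following is a minimal std-only illustration of what `time_of_day_greeting` could look like; the signature, hour ranges, and greeting strings are assumptions for illustration, not the PR's actual code:

```rust
// Hypothetical sketch of the pure greeting helper; the name comes from the
// PR, but the signature and wording are assumed.
fn time_of_day_greeting(hour: u32, channels: &[&str]) -> String {
    // Pick a salutation from the local hour.
    let part = match hour {
        5..=11 => "Good morning",
        12..=17 => "Good afternoon",
        _ => "Good evening",
    };
    // Name connected channels when there are any; fall back gracefully.
    match channels {
        [] => format!("{part}! Let's get you set up."),
        [one] => format!("{part}! I can see {one} is already connected."),
        many => format!(
            "{part}! I can see {} channels are already connected.",
            many.len()
        ),
    }
}

fn main() {
    let g = time_of_day_greeting(9, &["Slack"]);
    assert_eq!(g, "Good morning! I can see Slack is already connected.");
    assert_eq!(
        time_of_day_greeting(20, &[]),
        "Good evening! Let's get you set up."
    );
    println!("{g}");
}
```

Keeping the helper pure (no clock or I/O inside) is what makes the 0/1/2/3-channel cases trivially testable, as the PR's test plan suggests.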
Summary
- Template 2 is published via `tokio::join!`, running in parallel with LLM inference rather than blocking it
- `run_single` explicitly tells the agent that both templates have already been shown and to open directly with the personalised setup summary

Changed files
- `src/openhuman/agent/welcome_proactive.rs` — changed `run_proactive_welcome` to publish 3 events; added `time_of_day_greeting()` and `build_template_greeting()` helpers
- `src/openhuman/agent/agents/welcome/prompt.md` — new proactive-invocation section

Test plan
- `cargo test -p openhuman welcome_proactive` — 7 unit tests cover greeting text for 0, 1, 2, 3 channels and the missing-key fallback
- `cargo check` passes clean; `cargo fmt --check` passes