Add issue auto-fix workflow and AI helper fixes #29
Conversation
Reviewer's Guide

Adds a GitHub Actions "Auto-fix Issues" workflow and hook script, introduces a shared AI provider entry point via `Bot(ai_provider=...)` and `ctx.ai(...)`, implements per-guild/per-user rate limiting and localized thinking messages for `AIPlugin`/`OpenClaudePlugin`, and documents/tests the new behavior.

Sequence diagram for the ctx.ai helper with optional shared provider and model override:

```mermaid
sequenceDiagram
    actor User
    participant Discord
    participant Bot
    participant InteractionContext as Context
    participant Provider as AIProvider
    User->>Discord: Invoke slash command
    Discord->>Bot: Interaction payload
    Bot->>InteractionContext: Create context
    User->>InteractionContext: await ctx.ai(prompt, provider=None, model)
    InteractionContext->>InteractionContext: Resolve provider = provider or Bot.ai_provider
    InteractionContext->>InteractionContext: Check provider is not None
    alt model is None or provider has no _model
        InteractionContext->>Provider: query(prompt)
        Provider-->>InteractionContext: response_text
    else model override
        InteractionContext->>Provider: get _easycord_model_lock or create
        InteractionContext->>Provider: acquire lock
        InteractionContext->>Provider: set _model = model
        InteractionContext->>Provider: query(prompt)
        Provider-->>InteractionContext: response_text
        InteractionContext->>Provider: restore _model
        InteractionContext->>Provider: release lock
    end
    InteractionContext-->>User: response_text
```
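For orientation, here is a minimal usage sketch of the flow above. It mirrors the documentation example quoted later in this PR; the `Bot` import path and the environment-variable handling are assumptions, not code from the diff:

```python
import os

from easycord import Bot  # assumed import path
from easycord.plugins import OpenAIProvider

# Shared provider stored on the bot; the key is read from the environment.
bot = Bot(ai_provider=OpenAIProvider(api_key=os.environ["OPENAI_API_KEY"]))

@bot.slash(description="Ask AI")
async def ask(ctx, prompt: str):
    # Falls back to bot.ai_provider; model=... takes the locked override path.
    response = await ctx.ai(prompt, model="gpt-4o")
    await ctx.respond(response[:2000])
```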
Sequence diagram for AIPlugin.ask with rate limiting and localized thinking:

```mermaid
sequenceDiagram
    actor User
    participant Discord
    participant AIPlugin
    participant Provider as AIProvider
    participant Ctx as Context
    User->>Discord: /ask prompt
    Discord->>AIPlugin: invoke ask(ctx, prompt)
    AIPlugin->>AIPlugin: retry_after = _rate_limit_retry_after(ctx)
    alt rate limited
        AIPlugin->>Ctx: respond(t("ai.rate_limited"), ephemeral=True)
        AIPlugin-->>Discord: return
    else allowed
        alt thinking_key configured
            AIPlugin->>Ctx: respond(t(thinking_key, default="Thinking..."))
        else no thinking_key
            AIPlugin->>Ctx: defer()
        end
        AIPlugin->>Provider: query(prompt)
        Provider-->>AIPlugin: response_text
        AIPlugin->>AIPlugin: response = _format_response(response_text)
        alt thinking_key configured
            AIPlugin->>Ctx: edit_response(response)
        else no thinking_key
            AIPlugin->>Ctx: respond(response)
        end
    end
```
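The rate-limit check in this diagram is a sliding window keyed by (guild, user). A standalone sketch of that check, using the `_requests` shape from the class diagram below (the PR's actual `_rate_limit_retry_after` may differ in detail):

```python
import time

RequestLog = dict[tuple[int | None, int], list[float]]

def rate_limit_retry_after(
    requests: RequestLog,
    key: tuple[int | None, int],
    rate_limit: int,
    rate_window: float,
) -> float | None:
    """Return seconds until the next allowed request, or None if allowed now."""
    now = time.monotonic()
    recent = [t for t in requests.get(key, []) if now - t < rate_window]
    if len(recent) >= rate_limit:
        requests[key] = recent
        # The oldest request ages out first; wait until it leaves the window.
        return rate_window - (now - recent[0])
    recent.append(now)
    requests[key] = recent
    return None
```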
Updated class diagram for Bot, context AI helper, and AI plugins:

```mermaid
classDiagram
    class AIProvider {
        <<interface>>
        +query(prompt: str) str
        +_model: str
        +_easycord_model_lock
    }
    class Bot {
        +ai_provider: AIProvider
        +__init__(intents, auto_sync, load_builtin_plugins, database, db_backend, db_path, db_auto_sync_guilds, ai_provider, kwargs)
    }
    class InteractionContextBase {
        +interaction
        +defer(ephemeral: bool)
        +respond(content)
        +edit_response(content)
        +t(key, default, seconds)
        +ai(prompt: str, provider, model: str) str
    }
    class AIPlugin {
        -_provider: AIProvider
        -_rate_limit: int
        -_rate_window: float
        -_thinking_key: str
        -_requests: dict~tuple~int,int~, list~float~~
        +__init__(provider: AIProvider, rate_limit: int, rate_window: float, thinking_key: str)
        +_rate_limit_retry_after(ctx) float
        +ask(ctx, prompt: str) void
        +_format_response(text: str) str
    }
    class OpenClaudePlugin {
        +__init__(api_key: str, model: str, rate_limit: int, rate_window: float)
    }
    Bot o--> AIProvider : ai_provider
    InteractionContextBase --> Bot : interaction.client
    AIPlugin --> AIProvider : uses
    OpenClaudePlugin --|> AIPlugin
```
Review Summary by Qodo

Add AI helper, rate limiting, and auto-fix workflow

Description:

• Add ctx.ai(...) helper for querying AI providers with optional model override
• Implement per-user, per-guild rate limiting for AIPlugin and OpenClaudePlugin
• Make OpenClaudePlugin show a localized thinking message before editing the response
• Add Auto-fix Issues GitHub workflow for mechanical repository fixes
• Extend Bot to accept and store an ai_provider configuration parameter

Diagram:

```mermaid
flowchart LR
    Bot["Bot(ai_provider=...)"]
    CtxAI["ctx.ai(prompt, model=...)"]
    AIPlugin["AIPlugin(rate_limit, rate_window)"]
    OpenClaude["OpenClaudePlugin(thinking_key)"]
    Workflow["Auto-fix Issues Workflow"]
    Bot -- "stores provider" --> CtxAI
    CtxAI -- "queries" --> AIPlugin
    AIPlugin -- "enforces limits" --> OpenClaude
    Workflow -- "runs fixes" --> Bot
```
File Changes:

1. easycord/_context_base.py
📝 Walkthrough

This pull request introduces AI querying capabilities to the framework.

Sequence Diagram(s):

```mermaid
sequenceDiagram
    participant User
    participant Context as Context.ai()
    participant Bot as Bot Instance
    participant AIProvider as AI Provider
    participant RateLimit as Rate Limiter
    User->>Context: Call ctx.ai(prompt, model=...)
    Context->>Bot: Resolve ai_provider
    Context->>RateLimit: Check rate limit (user/guild)
    alt Rate Limited
        RateLimit-->>Context: Return cooldown retry time
        Context->>User: Respond ephemeral with ai.rate_limited
    else Within Limit
        RateLimit->>RateLimit: Update request count
        alt Model Override Provided
            Context->>AIProvider: Acquire lock, set _model
        end
        Context->>AIProvider: Call provider.query(prompt)
        AIProvider-->>Context: Return response string
        alt Model Override Provided
            Context->>AIProvider: Release lock, restore original _model
        end
        Context->>User: Respond with query result
    end
```
```mermaid
sequenceDiagram
    participant User
    participant SlashCmd as /ask Command
    participant Context as Context Instance
    participant Localization as Localization (t)
    participant AIProvider as Claude Provider
    User->>SlashCmd: Invoke /ask with prompt
    SlashCmd->>Context: Check rate limit
    alt Rate Limited
        Context-->>User: Ephemeral ai.rate_limited message
    else Within Limit
        SlashCmd->>Context: Call ctx.t("openclaude.thinking")
        Context->>Localization: Fetch localized thinking message
        Localization-->>Context: Return thinking_text
        SlashCmd->>User: Respond with thinking_text (ephemeral/local)
        SlashCmd->>AIProvider: Query provider with prompt
        AIProvider-->>SlashCmd: Return response
        SlashCmd->>User: Edit response with formatted final result
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Code Review by Qodo
Hey - I've found 3 issues, and left some high level feedback:

- The in-memory rate limit store `_requests` in `AIPlugin` only ever shrinks per-key lists but never removes idle keys, so over a long-lived bot this can grow without bound for many guild/user combinations; consider pruning old keys or using a bounded/expiring structure (a sketch follows this list).
- `Context.ai` temporarily mutates `provider._model` on a shared provider instance, which can interfere with other concurrent code paths that use the same provider; if possible, prefer passing a model parameter through to `query` or using a provider clone/wrapper instead of mutating shared state.
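A minimal sketch of the pruning idea from the first point, assuming the `_requests` shape shown in the class diagram; where exactly this would be called (for example, inside `_rate_limit_retry_after`) is a guess:

```python
import time

def prune_idle_keys(
    requests: dict[tuple[int | None, int], list[float]],
    rate_window: float,
) -> None:
    """Drop guild/user keys whose timestamps have all aged out of the window."""
    now = time.monotonic()
    for key in list(requests):
        live = [t for t in requests[key] if now - t < rate_window]
        if live:
            requests[key] = live
        else:
            # Remove the idle key entirely instead of keeping an empty list.
            del requests[key]
```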
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The in-memory rate limit store `_requests` in `AIPlugin` only ever shrinks per-key lists but never removes idle keys, so over a long-lived bot this can grow without bound for many guild/user combinations; consider pruning old keys or using a bounded/expiring structure.
- `Context.ai` temporarily mutates `provider._model` on a shared provider instance, which can interfere with other concurrent code paths that use the same provider; if possible, prefer passing a model parameter through to `query` or using a provider clone/wrapper instead of mutating shared state.
## Individual Comments
### Comment 1
<location path="easycord/plugins/openclaude.py" line_range="104-113" />
<code_context>
+ )
+ return
+
+ if self._thinking_key:
+ await ctx.respond(
+ ctx.t(self._thinking_key, default="Thinking..."),
+ )
+ else:
+ await ctx.defer()
try:
response_text = await self._provider.query(prompt)
- await ctx.respond(self._format_response(response_text))
+ response = self._format_response(response_text)
+ if self._thinking_key:
+ await ctx.edit_response(response)
+ else:
+ await ctx.respond(response)
except ImportError as exc:
await ctx.respond(
</code_context>
<issue_to_address>
**issue (bug_risk):** Mixing `respond` and `edit_response` based on `thinking_key` can break error handling after an initial response.
In the `self._thinking_key` case, the initial reply uses `ctx.respond(...)`, and the success path correctly switches to `ctx.edit_response(...)`. But the exception handlers (`ImportError`, `ValueError`, generic `Exception`) still use `ctx.respond(...)`, which can fail once the interaction has already been responded to. These handlers should use `edit_response` (or `followup.send`) when `self._thinking_key` is set, matching the success path behavior.
</issue_to_address>
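A sketch of the alignment this comment asks for, written as a free function for readability; the attribute names come from this PR, but the exception handling is abbreviated here:

```python
async def ask_with_safe_errors(plugin, ctx, prompt: str) -> None:
    """Success and error paths both honor the initial thinking message."""
    try:
        response = plugin._format_response(await plugin._provider.query(prompt))
        if plugin._thinking_key:
            await ctx.edit_response(response)
        else:
            await ctx.respond(response)
    except Exception as exc:  # the real handlers split ImportError/ValueError
        error_text = f"AI request failed: {exc}"
        if plugin._thinking_key:
            # The thinking message was already sent; edit it rather than
            # sending a follow-up that would leave it permanently visible.
            await ctx.edit_response(error_text)
        else:
            await ctx.respond(error_text, ephemeral=True)
```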
### Comment 2
<location path=".github/workflows/auto-fix-issues.yml" line_range="32-40" />
<code_context>
+ runs-on: ubuntu-latest
+
+ steps:
+ - name: Resolve target branch
+ id: target
+ uses: actions/github-script@v7
+ with:
+ script: |
+ if (context.eventName === "workflow_dispatch") {
+ core.setOutput("branch", core.getInput("target_branch"));
+ core.setOutput("fix_command", core.getInput("fix_command"));
+ core.setOutput("commit_message", core.getInput("commit_message"));
+ return;
+ }
</code_context>
<issue_to_address>
**issue (bug_risk):** Reading workflow_dispatch inputs via `core.getInput` inside `github-script` is likely incorrect.
In the `workflow_dispatch` branch, this script calls `core.getInput` for `target_branch`, `fix_command`, and `commit_message`, but those are workflow-level inputs, not inputs to this `github-script` step. In `github-script` you should read them from `context.payload.inputs` (e.g. `context.payload.inputs.target_branch`) or pass them into the step via `with:`/`env:`. As written, these values will be empty, so the resolved outputs will be blank and later steps (checkout, fix command) are likely to fail.
</issue_to_address>
### Comment 3
<location path="tests/test_openclaude_plugin.py" line_range="476-485" />
<code_context>
+ ctx.edit_response.assert_called_once_with("Test response")
+
+
+@pytest.mark.asyncio
+async def test_aiplugin_rate_limits_per_user():
+ """AIPlugin.ask rate limits repeated requests per guild/user."""
+ provider = MagicMock(spec=AIProvider)
+ provider.query = AsyncMock(return_value="AI response")
+ plugin = AIPlugin(provider=provider, rate_limit=1, rate_window=60)
+ ctx = MagicMock()
+ ctx.defer = AsyncMock()
+ ctx.respond = AsyncMock()
+ ctx.t = MagicMock(return_value="Slow down")
+ ctx.user.id = 123
+ ctx.guild_id = 456
+
+ await plugin.ask(ctx, prompt="first")
+ await plugin.ask(ctx, prompt="second")
+
+ assert provider.query.await_count == 1
+ ctx.t.assert_called_with(
+ "ai.rate_limited",
+ default="You're asking too quickly. Try again in {seconds:.0f}s.",
+ seconds=pytest.approx(60, abs=1),
+ )
+ assert ctx.respond.call_args_list[-1].kwargs["ephemeral"] is True
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add a test to cover rate-limit window expiry behavior
This test only asserts that a second request within the window is blocked. It doesn’t verify that requests are allowed again after the window elapses. Because `_rate_limit_retry_after` depends on `time.monotonic()` and `rate_window`, a regression there could go unnoticed. Please add a test that controls `time.monotonic()` (or uses a fake clock) to:
1. Allow initial `rate_limit` calls.
2. Advance time beyond `rate_window`.
3. Assert that a subsequent call is allowed (i.e., `provider.query` is called again and no rate-limit message is sent).
Suggested implementation:
```python
ctx.user.id = 123
ctx.guild_id = 456
await plugin.ask(ctx, prompt="test")
await plugin.ask(ctx, prompt="test")
ctx = MagicMock()
ctx.defer = AsyncMock()
ctx.respond = AsyncMock()
ctx.user.id = 123
ctx.guild_id = 456
await plugin.ask(ctx, prompt="test")
@pytest.mark.asyncio
async def test_aiplugin_rate_limits_reset_after_window(monkeypatch):
"""AIPlugin.ask allows requests again after the rate-limit window expires."""
provider = MagicMock(spec=AIProvider)
provider.query = AsyncMock(return_value="AI response")
plugin = AIPlugin(provider=provider, rate_limit=1, rate_window=60)
# Control time.monotonic so we can advance time in the test
fake_time = 1000.0
def fake_monotonic() -> float:
return fake_time
# NOTE: adjust the target string below if AIPlugin's module path differs.
monkeypatch.setattr("openclaude.plugin.time.monotonic", fake_monotonic)
ctx = MagicMock()
ctx.defer = AsyncMock()
ctx.respond = AsyncMock()
ctx.t = MagicMock()
ctx.user.id = 123
ctx.guild_id = 456
# First call should be allowed
await plugin.ask(ctx, prompt="first")
assert provider.query.await_count == 1
ctx.t.assert_not_called()
# Second call within the window should be rate-limited
await plugin.ask(ctx, prompt="second")
assert provider.query.await_count == 1
ctx.t.assert_called_with(
"ai.rate_limited",
default="You're asking too quickly. Try again in {seconds:.0f}s.",
seconds=pytest.approx(60, abs=1),
)
# Advance time beyond the rate window
fake_time += 61
# After the window, a new call should be allowed again (no rate-limit message)
ctx.t.reset_mock()
await plugin.ask(ctx, prompt="third")
assert provider.query.await_count == 2
ctx.t.assert_not_called()
```
1. The `monkeypatch.setattr` target `"openclaude.plugin.time.monotonic"` assumes that:
- The AIPlugin implementation lives in `openclaude/plugin.py`, and
- It calls `time.monotonic` via an imported `time` module.
If instead the code does `from time import monotonic`, change the target to `"openclaude.plugin.monotonic"`, and if the module path differs, update `"openclaude.plugin"` accordingly.
2. This test relies on `pytest`, `MagicMock`, `AsyncMock`, `AIProvider`, and `AIPlugin` already being imported in `tests/test_openclaude_plugin.py` as in the surrounding tests. If any of these are missing, add the appropriate imports at the top of the file.
</issue_to_address>
🤖 Augment PR Summary

Summary: This PR adds an automated "issue auto-fix" workflow and improves the framework's AI integration story.

Technical Notes: The context helper temporarily mutates `provider._model` under a per-provider asyncio lock when a one-off model override is requested, restoring the original model afterwards.
```yaml
        with:
          script: |
            if (context.eventName === "workflow_dispatch") {
              core.setOutput("branch", core.getInput("target_branch"));
```
.github/workflows/auto-fix-issues.yml:38 — In `workflow_dispatch`, `core.getInput(...)` reads action inputs, not `workflow_dispatch` inputs, so branch/fix_command/commit_message will likely be empty and the workflow won't behave as intended. You probably want to read from `context.payload.inputs` / `github.event.inputs` instead of `core.getInput`.

Severity: high

Other locations: .github/workflows/auto-fix-issues.yml:39, .github/workflows/auto-fix-issues.yml:40
```yaml
jobs:
  auto-fix:
    if: github.event_name == 'workflow_dispatch' || contains(github.event.comment.body, '/fix-issues')
```
.github/workflows/auto-fix-issues.yml:28 — The `/fix-issues` trigger doesn't check who authored the comment; any user who can comment on a PR could trigger a workflow run that has `contents: write` and can push commits to in-repo branches. Consider restricting this to trusted actors (e.g., repo members) to avoid abuse/spam and unexpected pushes.

Severity: medium
```yaml
      - name: Commit and push fixes
        shell: bash
        run: |
          if git diff --quiet && git diff --cached --quiet; then
```
.github/workflows/auto-fix-issues.yml:91 — `git diff --quiet` ignores untracked files, so fixes that add new files could be missed and the workflow would incorrectly exit with "No fixes produced." This can cause auto-fix changes to silently not be committed/pushed.

Severity: medium
```diff
             response_text = await self._provider.query(prompt)
-            await ctx.respond(self._format_response(response_text))
+            response = self._format_response(response_text)
+            if self._thinking_key:
```
easycord/plugins/openclaude.py:114 — When `thinking_key` is enabled you edit the initial "Thinking…" message only on the success path; the except blocks still call `ctx.respond(...)`, which will send follow-ups and can leave the original thinking message permanently visible. Consider aligning error handling with the edit pattern so the initial message isn't left stale.

Severity: medium
```python
        self._rate_limit = rate_limit
        self._rate_window = rate_window
        self._thinking_key = thinking_key
        self._requests: dict[tuple[int | None, int], list[float]] = {}
```
easycord/plugins/openclaude.py:52 — `_requests` retains an entry per (guild, user) indefinitely; even though you filter old timestamps, the dict keys themselves are never removed for inactive users, which can lead to unbounded growth in long-running bots. Consider pruning empty/expired keys to avoid a slow memory leak.

Severity: low
```python
    async def ai(self, prompt: str, *, provider=None, model: str | None = None) -> str:
        """Query the configured AI provider and return response text.

        Pass ``provider=...`` for one-off calls, or configure ``Bot(ai_provider=...)``
        so commands can call ``await ctx.ai("...")`` directly.
        """
        provider = provider or getattr(self.interaction.client, "ai_provider", None)
        if provider is None:
            raise RuntimeError("No AI provider configured. Pass provider=... or set Bot(ai_provider=...).")

        old_model = getattr(provider, "_model", None)
        should_restore = model is not None and hasattr(provider, "_model")
        if not should_restore:
            return await provider.query(prompt)

        lock = getattr(provider, "_easycord_model_lock", None)
        if lock is None:
            lock = asyncio.Lock()
            provider._easycord_model_lock = lock
        async with lock:
            provider._model = model
            try:
                return await provider.query(prompt)
            finally:
                provider._model = old_model
```
1. ctx.ai() uncaught provider errors 📎 Requirement gap ☼ Reliability
ctx.ai() directly awaits provider.query() and can raise exceptions (including missing provider) that bubble up and may crash the command instead of surfacing a user-safe error. It also provides no integration point for consistent rate limiting, so custom commands using ctx.ai() can bypass the shared AI rate limiting behavior.
Agent Prompt
## Issue description
`ctx.ai()` can raise unhandled exceptions from missing provider configuration and from `provider.query()`, and it has no shared rate limiting integration point.
## Issue Context
Compliance requires `ctx.ai()` to be a safe, unified entry point with built-in error handling and a consistent place to enforce/govern rate limits.
## Fix Focus Areas
- easycord/_context_base.py[137-161]
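One way to satisfy this without changing the helper itself is a call-site wrapper; `safe_ai` below is hypothetical, not part of the PR:

```python
async def safe_ai(ctx, prompt: str, *, model: str | None = None) -> str | None:
    """Query ctx.ai but surface failures as user-safe messages instead of raising."""
    try:
        return await ctx.ai(prompt, model=model)
    except RuntimeError:
        # Raised by ctx.ai when no provider is configured.
        await ctx.respond("No AI provider is configured for this bot.", ephemeral=True)
    except Exception:
        await ctx.respond("The AI request failed. Please try again later.", ephemeral=True)
    return None
```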
```python
from easycord.plugins import OpenAIProvider

bot = Bot(ai_provider=OpenAIProvider(api_key="sk-..."))

@bot.slash(description="Ask AI")
async def ask(ctx, prompt: str):
    response = await ctx.ai(prompt, model="gpt-4o")
    await ctx.respond(response[:2000])
```
2. Docs show api_key= literals 📘 Rule violation ⛨ Security
Documentation examples add inline api_key="sk-..." values, which violates the requirement to avoid hardcoded secrets and to use environment variables/secure configuration for credentials. Even as placeholders, these examples encourage embedding API keys in code.
Agent Prompt
## Issue description
Docs include inline `api_key="sk-..."` examples, which conflicts with the no-hardcoded-secrets requirement.
## Issue Context
Examples should demonstrate reading API keys from environment variables (e.g., `os.getenv(...)`) or documented secure configuration patterns.
## Fix Focus Areas
- README.md[109-116]
- docs/api.md[662-673]
- docs/examples.md[84-95]
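The pattern the comment asks the docs to show instead might look like this; the `Bot` import path is assumed:

```python
import os

from easycord import Bot  # assumed import path
from easycord.plugins import OpenAIProvider

# Read the key from the environment so no secret is committed to the repo.
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")

bot = Bot(ai_provider=OpenAIProvider(api_key=api_key))
```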
3. Ctx.ai model race 🐞 Bug ≡ Correctness
BaseContext.ai() mutates provider._model under a lock only for override calls; concurrent ctx.ai() calls without model bypass that lock and can run while _model is temporarily changed, sending requests with the wrong model.
Agent Prompt
### Issue description
`BaseContext.ai()` temporarily overrides `provider._model` while awaiting `provider.query()`, but only the override path uses a lock. Any concurrent `ctx.ai()` call without `model=...` can observe the overridden `_model` and issue a request with the wrong model.
### Issue Context
- The override path uses `provider._easycord_model_lock`, but the non-override path does not.
- Built-in providers (e.g. `OpenAIProvider`) read `self._model` during `query()`.
### Fix Focus Areas
- easycord/_context_base.py[137-161]
- easycord/plugins/_ai_providers.py[67-103]
### Suggested fix
- Prefer: change provider interface to accept `model` as an argument (no shared state mutation).
- If keeping mutation: acquire the same lock for *all* `provider.query()` calls when the provider has `_model` (and move `old_model = ...` inside the locked region).
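A sketch of the preferred fix: thread `model` through `query()` per call so no shared state is mutated and no lock is needed. The signature change is hypothetical:

```python
class AIProvider:
    """Sketch: providers accept an optional per-call model override."""

    def __init__(self, model: str) -> None:
        self._model = model

    async def query(self, prompt: str, *, model: str | None = None) -> str:
        effective_model = model or self._model  # no mutation, so no race
        return await self._send(prompt, effective_model)

    async def _send(self, prompt: str, model: str) -> str:
        raise NotImplementedError  # concrete providers make the API call here
```

With this shape, `ctx.ai` reduces to `await provider.query(prompt, model=model)`.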
4. Model override persists 🐞 Bug ≡ Correctness
ctx.ai(model=...) can permanently change behavior for providers that bind the model when initializing their SDK client (e.g., Gemini/HuggingFace): a one-off override can create a client for the custom model and then restore only provider._model, leaving provider._client configured for the wrong model thereafter.
Agent Prompt
### Issue description
`ctx.ai(..., model=...)` implements a “temporary” model override by mutating `provider._model`, but it does not account for providers that bind model into `provider._client` at initialization time.
For such providers, the override can either:
- Have no effect (if `_client` already exists), or
- Persist beyond the call (if `_client` is created during the override), leaving the provider permanently using the wrong model.
### Issue Context
- `GeminiProvider._init_client()` creates `GenerativeModel(self._model)` once and later queries use `_client.generate_content(...)`.
- `HuggingFaceProvider._init_client()` creates `InferenceClient(model=self._model)` once and later queries use `_client.text_generation(...)`.
### Fix Focus Areas
- easycord/_context_base.py[147-161]
- easycord/plugins/_ai_providers.py[122-141]
- easycord/plugins/_ai_providers.py[274-295]
### Suggested fix
One of:
1) Add a supported `query(prompt, *, model=None)` API so model is per-call.
2) Implement a provider hook like `_set_model_temporarily(model)` that also resets/rebuilds `_client` safely.
3) At minimum, in `ctx.ai` override path: snapshot and restore both `_model` and `_client` (and any other model-bound state) under the same lock.
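If the mutation approach must stay, option 3 could look like the sketch below: snapshot and restore both `_model` and the model-bound `_client` under the shared lock. The `_client` reset assumes providers lazily rebuild their client when it is `None`, which is a guess about the provider internals:

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def temporary_model(provider, model: str):
    """Swap the provider's model (and model-bound client) for one call."""
    lock = getattr(provider, "_easycord_model_lock", None)
    if lock is None:
        lock = asyncio.Lock()
        provider._easycord_model_lock = lock
    async with lock:
        old_model = getattr(provider, "_model", None)
        old_client = getattr(provider, "_client", None)
        provider._model = model
        provider._client = None  # force lazy re-init against the override model
        try:
            yield provider
        finally:
            provider._model = old_model
            provider._client = old_client
```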
```python
        retry_after = self._rate_limit_retry_after(ctx)
        if retry_after is not None:
            await ctx.respond(
                ctx.t(
                    "ai.rate_limited",
                    default="You're asking too quickly. Try again in {seconds:.0f}s.",
                    seconds=retry_after,
                ),
                ephemeral=True,
            )
            return

        if self._thinking_key:
            await ctx.respond(
                ctx.t(self._thinking_key, default="Thinking..."),
            )
        else:
            await ctx.defer()

        try:
            response_text = await self._provider.query(prompt)
            response = self._format_response(response_text)
            if self._thinking_key:
                await ctx.edit_response(response)
            else:
                await ctx.respond(response)
        except ImportError as exc:
            await ctx.respond(
```
5. Thinking errors not edited 🐞 Bug ≡ Correctness
When AIPlugin.ask() uses a thinking message (OpenClaudePlugin), errors are sent via ctx.respond(ephemeral=True) as follow-ups instead of editing the original thinking response, leaving a stale “Thinking…” message behind.
Agent Prompt
### Issue description
When `_thinking_key` is set, `AIPlugin.ask()` sends an initial thinking response and then uses `ctx.edit_response()` only on success. On errors, it calls `ctx.respond(..., ephemeral=True)` which becomes a follow-up, leaving the original thinking message stale.
### Issue Context
- `BaseContext.respond()` sends follow-ups after the first response.
- With thinking enabled, the first response is the thinking text.
### Fix Focus Areas
- easycord/plugins/openclaude.py[104-135]
- easycord/_context_base.py[101-124]
### Suggested fix
In both `except ImportError` and `except Exception` blocks:
- If `_thinking_key` is set, call `await ctx.edit_response(<error text>)` (and avoid ephemeral follow-up), mirroring the success path.
- Otherwise keep existing `ctx.respond(..., ephemeral=True)` follow-up behavior after `defer()`.
```yaml
    if: github.event_name == 'workflow_dispatch' || contains(github.event.comment.body, '/fix-issues')
    runs-on: ubuntu-latest

    steps:
      - name: Resolve target branch
        id: target
        uses: actions/github-script@v7
        with:
          script: |
            if (context.eventName === "workflow_dispatch") {
              core.setOutput("branch", core.getInput("target_branch"));
              core.setOutput("fix_command", core.getInput("fix_command"));
              core.setOutput("commit_message", core.getInput("commit_message"));
              return;
            }

            const issue = context.payload.issue;
            if (!issue.pull_request) {
              core.setFailed("/fix-issues comments are only supported on pull requests.");
              return;
            }

            const { data: pull } = await github.rest.pulls.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: issue.number,
            });

            if (pull.head.repo.full_name !== `${context.repo.owner}/${context.repo.repo}`) {
              core.setFailed("Auto-fix can only push to branches in this repository.");
              return;
            }

            core.setOutput("branch", pull.head.ref);
            core.setOutput("fix_command", "python scripts/fix_issues.py");
            core.setOutput("commit_message", `chore: auto-fix PR #${issue.number} issue triage`);

      - name: Check out target branch
        uses: actions/checkout@v4
        with:
          ref: ${{ steps.target.outputs.branch }}
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install project and build tools
        run: python -m pip install --upgrade pip build && python -m pip install -e ".[dev]"

      - name: Apply fixes
        run: ${{ steps.target.outputs.fix_command }}

      - name: Run tests
        run: python -m pytest

      - name: Build package
        run: python -m build

      - name: Commit and push fixes
        shell: bash
        run: |
          if git diff --quiet && git diff --cached --quiet; then
            echo "No fixes produced."
            exit 0
          fi

          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add -A
          git commit -m "${{ steps.target.outputs.commit_message }}"
          git push origin "HEAD:${{ steps.target.outputs.branch }}"
```
6. Auto-fix workflow injection 🐞 Bug ⛨ Security
The auto-fix workflow can be triggered by any issue commenter via '/fix-issues' (no permission check) and interpolates workflow_dispatch inputs directly into bash (fix_command/commit_message), enabling unwanted runs/pushes and potential command injection in the runner context.
Agent Prompt
### Issue description
The workflow is triggerable by any commenter (`issue_comment`) and performs a checkout + test/build + push with `contents: write`. Separately, `workflow_dispatch` inputs are interpolated directly into bash (`fix_command` and `commit_message`), which can be interpreted as shell syntax.
### Issue Context
- `if:` gate only checks `contains(comment.body, '/fix-issues')`.
- `git commit -m "${{ steps.target.outputs.commit_message }}"` inserts an arbitrary string into the script body.
### Fix Focus Areas
- .github/workflows/auto-fix-issues.yml[18-29]
- .github/workflows/auto-fix-issues.yml[79-81]
- .github/workflows/auto-fix-issues.yml[88-100]
### Suggested fix
- Require trusted actors for comment trigger (e.g., `author_association` in OWNER/MEMBER/COLLABORATOR, or check PR author).
- Harden interpolation:
- Pass inputs through `env:` and reference as `"$COMMIT_MESSAGE"` so bash treats the content as data.
- Consider removing arbitrary `fix_command` input entirely (or restrict to a fixed allowlist).
- Optionally tighten the job `if:` to avoid evaluating comment fields on non-issue_comment events (wrap with an explicit event_name check).
Summary
- Add an Auto-fix Issues workflow triggered by `/fix-issues` on in-repo PRs that validates fixes and pushes generated commits back to the target branch
- Add `Bot(ai_provider=...)` and `await ctx.ai(...)` for a shared or one-off AI provider entry point
- Add per-guild/per-user rate limiting to `AIPlugin`/`OpenClaudePlugin`
- Make `OpenClaudePlugin` send localized `openclaude.thinking` text before editing the response with the final answer

Issues / PR triage covered
- `/ask` rate limiting with test coverage
- `ctx.ai()` helper with shared provider wiring and focused tests
- `main` already contains the larger stability/scope docs and `main`'s performance documentation, so this PR does not reapply the conflicting doc branch and targets `main` instead

Validation
- `pytest` (589 passed)
- `python -m compileall easycord examples docs tests scripts`
- `python -m build`

Summary by Sourcery
Add a configurable AI helper entry point, rate-limited AI plugins with localized thinking messages, and an automated workflow for mechanically fixing issues and validating changes.
New Features:
- Add `ai_provider` on `Bot` and a `ctx.ai(...)` helper for querying AI providers from commands.
- Add an Auto-fix Issues workflow and `scripts/fix_issues.py` hook to apply mechanical fixes, run tests, build the package, and push commits.
- Extend `AIPlugin` and `OpenClaudePlugin` with configurable rate limiting and localized thinking message support for `/ask`.

Enhancements:
- Make `OpenClaudePlugin` send a localized `openclaude.thinking` message before editing it with the final AI response.
- Apply per-guild/per-user rate limiting in `AIPlugin` and `OpenClaudePlugin`, returning localized cooldown messages when limits are hit.

CI:
- Add a workflow that reacts to `/fix-issues` comments on in-repo pull requests, enforcing tests and build before pushing fixes.

Documentation:
- Document the `ctx.ai(...)` helper, `Bot(ai_provider=...)` configuration, AI plugin rate limiting, and localized OpenClaude thinking behavior in API docs, examples, and README.

Tests:
- Add tests covering `ctx.ai(...)` helper behaviors and storage of `ai_provider` on `Bot`.
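A hedged sketch of what one such test could look like, following the mocking style quoted earlier in this thread; the class name and import path come from the class diagram and file list above, not verified against the diff:

```python
import pytest
from unittest.mock import AsyncMock, MagicMock

from easycord._context_base import InteractionContextBase  # path per this PR

@pytest.mark.asyncio
async def test_ctx_ai_uses_bot_provider():
    """ctx.ai falls back to the provider stored via Bot(ai_provider=...)."""
    provider = MagicMock()
    provider.query = AsyncMock(return_value="AI response")
    ctx = MagicMock()
    ctx.interaction.client.ai_provider = provider  # what Bot stores

    # Call the helper unbound, with the mock context standing in for self.
    result = await InteractionContextBase.ai(ctx, "hello")

    assert result == "AI response"
    provider.query.assert_awaited_once_with("hello")
```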