
Add issue auto-fix workflow and AI helper fixes #29

Merged
rolling-codes merged 4 commits into main from codex/triage-issues-prs
Apr 28, 2026

Conversation

Owner

@rolling-codes rolling-codes commented Apr 28, 2026

Summary

  • add an Auto-fix Issues workflow that can be manually dispatched or triggered with /fix-issues on in-repo PRs, validates fixes, and pushes generated commits back to the target branch
  • add Bot(ai_provider=...) and await ctx.ai(...) for a shared or one-off AI provider entry point (see the usage sketch after this list)
  • add per-user, per-guild rate limiting to AIPlugin / OpenClaudePlugin
  • make OpenClaudePlugin send localized openclaude.thinking text before editing the response with the final answer
  • document the new AI helper, rate limiting, and localized thinking behavior
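
A minimal usage sketch of the shared-provider entry point described above. It adapts the README example cited later in this thread, with the key read from the environment instead of hardcoded; the `from easycord import Bot` import, the `OPENAI_API_KEY` variable name, and the `gpt-4o` model are illustrative assumptions rather than confirmed defaults.

```python
import os

from easycord import Bot  # assumed import path for this sketch
from easycord.plugins import OpenAIProvider  # provider class as used in the README example

# Shared provider configured once on the bot; the key comes from the environment.
bot = Bot(ai_provider=OpenAIProvider(api_key=os.getenv("OPENAI_API_KEY")))


@bot.slash(description="Ask AI")
async def ask(ctx, prompt: str):
    # Uses the shared provider; model= temporarily overrides the provider's model for this call.
    response = await ctx.ai(prompt, model="gpt-4o")
    await ctx.respond(response[:2000])
```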

Issues / PR triage covered

Validation

  • pytest (589 passed)
  • python -m compileall easycord examples docs tests scripts
  • python -m build

Summary by Sourcery

Add a configurable AI helper entry point, rate-limited AI plugins with localized thinking messages, and an automated workflow for mechanically fixing issues and validating changes.

New Features:

  • Introduce a shared ai_provider on Bot and a ctx.ai(...) helper for querying AI providers from commands.
  • Add a GitHub Actions Auto-fix Issues workflow and scripts/fix_issues.py hook to apply mechanical fixes, run tests, build the package, and push commits.
  • Extend AIPlugin and OpenClaudePlugin with configurable rate limiting and localized thinking message support for /ask.

Enhancements:

  • Make OpenClaudePlugin send a localized openclaude.thinking message before editing it with the final AI response.
  • Add per-user, per-guild rate limiting to AIPlugin and OpenClaudePlugin, returning localized cooldown messages when limits are hit.

CI:

  • Add an auto-fix workflow that can be dispatch-triggered or invoked via /fix-issues comments on in-repo pull requests, enforcing tests and build before pushing fixes.

Documentation:

  • Document the new ctx.ai(...) helper, Bot(ai_provider=...) configuration, AI plugin rate limiting, and localized OpenClaude thinking behavior in API docs, examples, and README.

Tests:

  • Add tests for the new ctx.ai(...) helper behaviors and storage of ai_provider on Bot.
  • Extend AI plugin tests to cover per-user rate limiting, localized thinking messages, and required context identifiers for rate tracking.


sourcery-ai Bot commented Apr 28, 2026

Reviewer's Guide

Adds a GitHub Actions "Auto-fix Issues" workflow and hook script, introduces a shared AI provider entry point via Bot(ai_provider=...) and ctx.ai(...), implements per-guild/per-user rate limiting and localized thinking messages for AIPlugin/OpenClaudePlugin, and documents/tests the new behavior.

Sequence diagram for ctx.ai helper with optional shared provider and model override

sequenceDiagram
    actor User
    participant Discord
    participant Bot
    participant InteractionContext as Context
    participant Provider as AIProvider

    User->>Discord: Invoke slash command
    Discord->>Bot: Interaction payload
    Bot->>InteractionContext: Create context

    User->>InteractionContext: await ctx.ai(prompt, provider=None, model)
    InteractionContext->>InteractionContext: Resolve provider = provider or Bot.ai_provider
    InteractionContext->>InteractionContext: Check provider is not None

    alt model is None or provider has no _model
        InteractionContext->>Provider: query(prompt)
        Provider-->>InteractionContext: response_text
    else model override
        InteractionContext->>Provider: get _easycord_model_lock or create
        InteractionContext->>Provider: acquire lock
        InteractionContext->>Provider: set _model = model
        InteractionContext->>Provider: query(prompt)
        Provider-->>InteractionContext: response_text
        InteractionContext->>Provider: restore _model
        InteractionContext->>Provider: release lock
    end

    InteractionContext-->>User: response_text

Sequence diagram for AIPlugin.ask with rate limiting and localized thinking

sequenceDiagram
    actor User
    participant Discord
    participant AIPlugin
    participant Provider as AIProvider
    participant Ctx as Context

    User->>Discord: /ask prompt
    Discord->>AIPlugin: invoke ask(ctx, prompt)

    AIPlugin->>AIPlugin: retry_after = _rate_limit_retry_after(ctx)
    alt rate limited
        AIPlugin->>Ctx: respond(t("ai.rate_limited"), ephemeral=True)
        AIPlugin-->>Discord: return
    else allowed
        alt thinking_key configured
            AIPlugin->>Ctx: respond(t(thinking_key, default="Thinking..."))
        else no thinking_key
            AIPlugin->>Ctx: defer()
        end

        AIPlugin->>Provider: query(prompt)
        Provider-->>AIPlugin: response_text
        AIPlugin->>AIPlugin: response = _format_response(response_text)

        alt thinking_key configured
            AIPlugin->>Ctx: edit_response(response)
        else no thinking_key
            AIPlugin->>Ctx: respond(response)
        end
    end
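
For readers following the diagram, here is a self-contained sketch of what a sliding-window `_rate_limit_retry_after`-style check can look like, using the `(guild_id, user_id)`-keyed timestamp map and `time.monotonic()` mentioned elsewhere in this review. The class name and the default limits are illustrative; the merged implementation keeps this state on `AIPlugin` itself.

```python
import time


class SlidingWindowLimiter:
    """Illustrative stand-alone version of the per-(guild, user) rate check."""

    def __init__(self, rate_limit: int = 3, rate_window: float = 60.0) -> None:
        self._rate_limit = rate_limit
        self._rate_window = rate_window
        self._requests: dict[tuple[int | None, int], list[float]] = {}

    def retry_after(self, guild_id: int | None, user_id: int) -> float | None:
        """Return seconds until the caller may retry, or None if the request is allowed."""
        key = (guild_id, user_id)
        now = time.monotonic()
        window_start = now - self._rate_window
        # Keep only timestamps that still fall inside the rate window.
        recent = [stamp for stamp in self._requests.get(key, []) if stamp > window_start]
        if len(recent) >= self._rate_limit:
            self._requests[key] = recent
            return max(recent[0] + self._rate_window - now, 0.0)
        recent.append(now)
        self._requests[key] = recent
        return None
```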

Updated class diagram for Bot, context AI helper, and AI plugins

classDiagram
    class AIProvider {
        <<interface>>
        +query(prompt: str) str
        +_model: str
        +_easycord_model_lock
    }

    class Bot {
        +ai_provider: AIProvider
        +__init__(intents, auto_sync, load_builtin_plugins, database, db_backend, db_path, db_auto_sync_guilds, ai_provider, kwargs)
    }

    class InteractionContextBase {
        +interaction
        +defer(ephemeral: bool)
        +respond(content)
        +edit_response(content)
        +t(key, default, seconds)
        +ai(prompt: str, provider, model: str) str
    }

    class AIPlugin {
        -_provider: AIProvider
        -_rate_limit: int
        -_rate_window: float
        -_thinking_key: str
        -_requests: dict~tuple~int,int~, list~float~~
        +__init__(provider: AIProvider, rate_limit: int, rate_window: float, thinking_key: str)
        +_rate_limit_retry_after(ctx) float
        +ask(ctx, prompt: str) void
        +_format_response(text: str) str
    }

    class OpenClaudePlugin {
        +__init__(api_key: str, model: str, rate_limit: int, rate_window: float)
    }

    Bot o--> AIProvider : ai_provider
    InteractionContextBase --> Bot : interaction.client
    AIPlugin --> AIProvider : uses
    OpenClaudePlugin --|> AIPlugin

File-Level Changes

Change Details Files
Add reusable AI helper on context with Bot-configured provider support.
  • Add Bot(ai_provider=...) storage on the Bot class for a shared AI provider
  • Introduce ContextBase.ai(...) helper to call either the shared or a one-off provider and optionally override the provider model in a lock-safe way
  • Add tests verifying configured provider usage, one-off providers, temporary model overrides, and error behavior when no provider is configured
  • Document ctx.ai(...) usage in API, examples, and README
easycord/bot.py
easycord/_context_base.py
tests/test_bot.py
tests/test_context.py
docs/api.md
docs/examples.md
README.md
Implement per-user, per-guild rate limiting and localized thinking behavior for AIPlugin/OpenClaudePlugin.
  • Extend AIPlugin to accept rate_limit, rate_window, and thinking_key options and track requests in an in-memory timestamp map keyed by (guild_id, user_id)
  • Add _rate_limit_retry_after helper that returns a cooldown when the rate limit is exceeded
  • Update ask(...) to short-circuit with a localized ai.rate_limited message (ephemeral) when over the limit
  • Change ask(...) to either send a localized thinking message and later edit it, or fall back to defer/respond behavior when no thinking_key is set
  • Wire OpenClaudePlugin through AIPlugin with default rate limiting and thinking_key='openclaude.thinking'
  • Add/adjust tests to assert rate limiting semantics, per-user separation, and the new OpenClaude thinking/edit behavior
easycord/plugins/openclaude.py
tests/test_openclaude_plugin.py
docs/api.md
docs/examples.md
README.md
Add Auto-fix Issues CI workflow and repository hook script.
  • Create auto-fix-issues GitHub Actions workflow that can be manually dispatched or triggered via /fix-issues comments on in-repo PRs
  • Workflow resolves the target branch, checks it out, installs project in dev mode, runs a configurable fix command, then validates via pytest and python -m build, and pushes any resulting commits back to the branch
  • Introduce scripts/fix_issues.py as a deterministic placeholder hook for repository-local mechanical fixes (a hypothetical sketch of such a hook follows the file list below)
.github/workflows/auto-fix-issues.yml
scripts/fix_issues.py
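
The PR describes scripts/fix_issues.py only as a deterministic placeholder hook, so the following is a hypothetical sketch of the shape such a hook could take, not the actual script; the normalization it performs is an invented example.

```python
#!/usr/bin/env python
"""Hypothetical sketch of a repository-local fix hook (not the actual scripts/fix_issues.py)."""
from __future__ import annotations

import pathlib


def apply_fixes(root: pathlib.Path) -> list[pathlib.Path]:
    """Apply a deterministic, mechanical fix and return the files that changed."""
    changed: list[pathlib.Path] = []
    for path in root.rglob("*.py"):
        text = path.read_text(encoding="utf-8")
        fixed = text.rstrip() + "\n"  # example fix: normalize the trailing newline
        if fixed != text:
            path.write_text(fixed, encoding="utf-8")
            changed.append(path)
    return changed


if __name__ == "__main__":
    touched = apply_fixes(pathlib.Path("."))
    # The workflow's pytest and build steps perform the real validation afterwards.
    print(f"fixed {len(touched)} file(s)")
```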

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@qodo-code-review

Review Summary by Qodo

Add AI helper, rate limiting, and auto-fix workflow

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Add ctx.ai(...) helper for querying AI providers with optional model override
• Implement per-user, per-guild rate limiting for AIPlugin and OpenClaudePlugin
• Make OpenClaudePlugin show localized thinking message before editing response
• Add Auto-fix Issues GitHub workflow for mechanical repository fixes
• Extend Bot to accept and store ai_provider configuration parameter
Diagram
flowchart LR
  Bot["Bot(ai_provider=...)"]
  CtxAI["ctx.ai(prompt, model=...)"]
  AIPlugin["AIPlugin(rate_limit, rate_window)"]
  OpenClaude["OpenClaudePlugin(thinking_key)"]
  Workflow["Auto-fix Issues Workflow"]
  
  Bot -- "stores provider" --> CtxAI
  CtxAI -- "queries" --> AIPlugin
  AIPlugin -- "enforces limits" --> OpenClaude
  Workflow -- "runs fixes" --> Bot


File Changes

1. easycord/_context_base.py ✨ Enhancement +28/-0

Add ctx.ai() method for AI provider queries



2. easycord/bot.py ✨ Enhancement +2/-0

Store ai_provider parameter in Bot initialization



3. easycord/plugins/openclaude.py ✨ Enhancement +64/-4

Add rate limiting and localized thinking message



4. scripts/fix_issues.py ⚙️ Configuration changes +14/-0

Add repository hook for mechanical issue fixes



5. tests/test_bot.py 🧪 Tests +9/-0

Test Bot ai_provider storage



6. tests/test_context.py 🧪 Tests +32/-0

Test ctx.ai() provider resolution and model override



7. tests/test_openclaude_plugin.py 🧪 Tests +66/-3

Test rate limiting and localized thinking behavior



8. .github/workflows/auto-fix-issues.yml ⚙️ Configuration changes +100/-0

Add workflow for automated issue triage fixes



9. README.md 📝 Documentation +14/-1

Document ctx.ai() helper and rate limiting



10. docs/api.md 📝 Documentation +28/-1

Document ctx.ai() and AIPlugin rate limiting parameters



11. docs/examples.md 📝 Documentation +16/-1

Add ctx.ai() usage example and update descriptions





coderabbitai Bot commented Apr 28, 2026

Warning

Rate limit exceeded

@rolling-codes has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 53 minutes and 54 seconds before requesting another review.

To keep reviews running without waiting, you can enable usage-based add-on for your organization. This allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 007fef47-a57e-4ccf-a74f-fb7de818f26b

📥 Commits

Reviewing files that changed from the base of the PR and between f5eeac3 and 1538cbb.

📒 Files selected for processing (1)
  • easycord/plugins/levels.py
📝 Walkthrough


This pull request introduces AI querying capabilities to the framework. It adds a new Context.ai() method for direct LLM interaction, configures AI providers via Bot, implements per-user rate limiting and UX enhancements for the Claude plugin, and includes supporting infrastructure (GitHub Actions workflow, documentation, and comprehensive tests).

Changes

  • GitHub Actions & Scripts (.github/workflows/auto-fix-issues.yml, scripts/fix_issues.py): New GitHub Actions workflow and script for auto-applying repository fixes, validating with pytest, building packages, and pushing commits. Supports manual dispatch and PR comment triggers.
  • Core AI Integration (easycord/bot.py, easycord/_context_base.py): New ai_provider parameter on Bot constructor and new async Context.ai(prompt, *, provider=None, model=None) method that queries providers with optional per-call model overrides and locks for thread-safe model switching.
  • AI Plugins (easycord/plugins/openclaude.py): Extended AIPlugin and OpenClaudePlugin with per-user-per-guild rate limiting, configurable rate windows, and optional "thinking" UX that immediately responds with a localized thinking message and edits it with the final response.
  • Documentation (README.md, docs/api.md, docs/examples.md): Updated API documentation for Bot and Context AI methods, expanded Claude /ask command documentation with rate limiting and thinking message details, and added examples of direct AI querying via ctx.ai().
  • Tests (tests/test_bot.py, tests/test_context.py, tests/test_openclaude_plugin.py): New tests validating ai_provider persistence, Context.ai() provider resolution and model overrides, error handling when provider is absent, rate limiting behavior per user, and thinking UX flow with ctx.t() and ctx.edit_response().

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Context as Context.ai()
    participant Bot as Bot Instance
    participant AIProvider as AI Provider
    participant RateLimit as Rate Limiter

    User->>Context: Call ctx.ai(prompt, model=...)
    Context->>Bot: Resolve ai_provider
    Context->>RateLimit: Check rate limit (user/guild)
    alt Rate Limited
        RateLimit-->>Context: Return cooldown retry time
        Context->>User: Respond ephemeral with ai.rate_limited
    else Within Limit
        RateLimit->>RateLimit: Update request count
        alt Model Override Provided
            Context->>AIProvider: Acquire lock, set _model
        end
        Context->>AIProvider: Call provider.query(prompt)
        AIProvider-->>Context: Return response string
        alt Model Override Provided
            Context->>AIProvider: Release lock, restore original _model
        end
        Context->>User: Respond with query result
    end
sequenceDiagram
    participant User
    participant SlashCmd as /ask Command
    participant Context as Context Instance
    participant Localization as Localization (t)
    participant AIProvider as Claude Provider
    
    User->>SlashCmd: Invoke /ask with prompt
    SlashCmd->>Context: Check rate limit
    alt Rate Limited
        Context-->>User: Ephemeral ai.rate_limited message
    else Within Limit
        SlashCmd->>Context: Call ctx.t("openclaude.thinking")
        Context->>Localization: Fetch localized thinking message
        Localization-->>Context: Return thinking_text
        SlashCmd->>User: Respond with thinking_text (ephemeral/local)
        SlashCmd->>AIProvider: Query provider with prompt
        AIProvider-->>SlashCmd: Return response
        SlashCmd->>User: Edit response with formatted final result
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Poem

🐰 A hop and a bound through code so neat,
AI now graces the discord beat—
Rate limits hop, thoughts align,
Context queries, responses shine!
With plugins and bots, we reach the sky,
Our whisker'd framework flies up high! 🚀

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, which is insufficient. The required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Title check (✅ Passed): The title clearly and concisely summarizes the main changes: adding an issue auto-fix workflow and AI helper enhancements including rate limiting and thinking messages.
  • Description check (✅ Passed): The description is comprehensive and directly related to the changeset, covering all major features (auto-fix workflow, AI provider integration, rate limiting, localized thinking) with issue tracking and validation details.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


qodo-code-review Bot commented Apr 28, 2026

Code Review by Qodo

🐞 Bugs (4) 📘 Rule violations (1) 📎 Requirement gaps (1)

Context used



Action required

1. ctx.ai() uncaught provider errors 📎 Requirement gap ☼ Reliability
Description
ctx.ai() directly awaits provider.query() and can raise exceptions (including missing provider)
that bubble up and may crash the command instead of surfacing a user-safe error. It also provides no
integration point for consistent rate limiting, so custom commands using ctx.ai() can bypass the
shared AI rate limiting behavior.
Code

easycord/_context_base.py[R137-161]

+    async def ai(self, prompt: str, *, provider=None, model: str | None = None) -> str:
+        """Query the configured AI provider and return response text.
+
+        Pass ``provider=...`` for one-off calls, or configure ``Bot(ai_provider=...)``
+        so commands can call ``await ctx.ai("...")`` directly.
+        """
+        provider = provider or getattr(self.interaction.client, "ai_provider", None)
+        if provider is None:
+            raise RuntimeError("No AI provider configured. Pass provider=... or set Bot(ai_provider=...).")
+
+        old_model = getattr(provider, "_model", None)
+        should_restore = model is not None and hasattr(provider, "_model")
+        if not should_restore:
+            return await provider.query(prompt)
+
+        lock = getattr(provider, "_easycord_model_lock", None)
+        if lock is None:
+            lock = asyncio.Lock()
+            provider._easycord_model_lock = lock
+        async with lock:
+            provider._model = model
+            try:
+                return await provider.query(prompt)
+            finally:
+                provider._model = old_model
Evidence
PR Compliance ID 1 requires ctx.ai() to include automatic error handling and an integration point
for rate limiting; the added ctx.ai() implementation raises on missing provider and does not wrap
provider.query() in any user-safe error handling or rate limiting mechanism.

Provide ctx.ai() helper for unified AI integration across plugins
easycord/_context_base.py[137-161]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`ctx.ai()` can raise unhandled exceptions from missing provider configuration and from `provider.query()`, and it has no shared rate limiting integration point.

## Issue Context
Compliance requires `ctx.ai()` to be a safe, unified entry point with built-in error handling and a consistent place to enforce/govern rate limits.

## Fix Focus Areas
- easycord/_context_base.py[137-161]



2. Docs show api_key= literals 📘 Rule violation ⛨ Security
Description
Documentation examples add inline api_key="sk-..." values, which violates the requirement to avoid
hardcoded secrets and to use environment variables/secure configuration for credentials. Even as
placeholders, these examples encourage embedding API keys in code.
Code

README.md[R109-116]

+from easycord.plugins import OpenAIProvider
+
+bot = Bot(ai_provider=OpenAIProvider(api_key="sk-..."))
+
+@bot.slash(description="Ask AI")
+async def ask(ctx, prompt: str):
+    response = await ctx.ai(prompt, model="gpt-4o")
+    await ctx.respond(response[:2000])
Evidence
PR Compliance ID 6 forbids committing credentials in source/docs and requires environment variables
or secure runtime configuration; the newly added docs include inline api_key="sk-..." examples.

AGENTS.md
README.md[109-116]
docs/api.md[662-673]
docs/examples.md[84-95]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Docs include inline `api_key="sk-..."` examples, which conflicts with the no-hardcoded-secrets requirement.

## Issue Context
Examples should demonstrate reading API keys from environment variables (e.g., `os.getenv(...)`) or documented secure configuration patterns.

## Fix Focus Areas
- README.md[109-116]
- docs/api.md[662-673]
- docs/examples.md[84-95]



3. ctx.ai model race 🐞 Bug ≡ Correctness
Description
BaseContext.ai() mutates provider._model under a lock only for override calls; concurrent ctx.ai()
calls without model bypass that lock and can run while _model is temporarily changed, sending
requests with the wrong model.
Code

easycord/_context_base.py[R137-161]

+    async def ai(self, prompt: str, *, provider=None, model: str | None = None) -> str:
+        """Query the configured AI provider and return response text.
+
+        Pass ``provider=...`` for one-off calls, or configure ``Bot(ai_provider=...)``
+        so commands can call ``await ctx.ai("...")`` directly.
+        """
+        provider = provider or getattr(self.interaction.client, "ai_provider", None)
+        if provider is None:
+            raise RuntimeError("No AI provider configured. Pass provider=... or set Bot(ai_provider=...).")
+
+        old_model = getattr(provider, "_model", None)
+        should_restore = model is not None and hasattr(provider, "_model")
+        if not should_restore:
+            return await provider.query(prompt)
+
+        lock = getattr(provider, "_easycord_model_lock", None)
+        if lock is None:
+            lock = asyncio.Lock()
+            provider._easycord_model_lock = lock
+        async with lock:
+            provider._model = model
+            try:
+                return await provider.query(prompt)
+            finally:
+                provider._model = old_model
Evidence
ctx.ai() sets provider._model=model and awaits provider.query() inside a lock, but the non-override
path calls provider.query() without acquiring any lock, so it can interleave and observe the
temporarily overridden provider._model. Built-in providers (e.g., OpenAIProvider) read self._model
at query time, so the wrong model can be used during that window.

easycord/_context_base.py[137-161]
easycord/plugins/_ai_providers.py[67-103]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`BaseContext.ai()` temporarily overrides `provider._model` while awaiting `provider.query()`, but only the override path uses a lock. Any concurrent `ctx.ai()` call without `model=...` can observe the overridden `_model` and issue a request with the wrong model.

### Issue Context
- The override path uses `provider._easycord_model_lock`, but the non-override path does not.
- Built-in providers (e.g. `OpenAIProvider`) read `self._model` during `query()`.

### Fix Focus Areas
- easycord/_context_base.py[137-161]
- easycord/plugins/_ai_providers.py[67-103]

### Suggested fix
- Prefer: change provider interface to accept `model` as an argument (no shared state mutation).
- If keeping mutation: acquire the same lock for *all* `provider.query()` calls when the provider has `_model` (and move `old_model = ...` inside the locked region).

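A sketch of the second option above, reusing the names from the diff (`_model`, `_easycord_model_lock`): every `provider.query()` call on a provider with model state goes through the same lock, and `old_model` is snapshotted inside the locked region. Illustrative only, not the merged code; the `asyncio` import is assumed as in the original snippet.

```python
async def ai(self, prompt: str, *, provider=None, model: str | None = None) -> str:
    provider = provider or getattr(self.interaction.client, "ai_provider", None)
    if provider is None:
        raise RuntimeError("No AI provider configured. Pass provider=... or set Bot(ai_provider=...).")

    if not hasattr(provider, "_model"):
        # No model state to protect, so query directly.
        return await provider.query(prompt)

    lock = getattr(provider, "_easycord_model_lock", None)
    if lock is None:
        lock = asyncio.Lock()
        provider._easycord_model_lock = lock

    async with lock:  # serialize every query against this provider
        old_model = provider._model  # snapshot inside the locked region
        if model is not None:
            provider._model = model
        try:
            return await provider.query(prompt)
        finally:
            provider._model = old_model
```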


4. Model override persists 🐞 Bug ≡ Correctness
Description
ctx.ai(model=...) can permanently change behavior for providers that bind the model when
initializing their SDK client (e.g., Gemini/HuggingFace): a one-off override can create a client for
the custom model and then restore only provider._model, leaving provider._client configured for the
wrong model thereafter.
Code

easycord/_context_base.py[R147-161]

+        old_model = getattr(provider, "_model", None)
+        should_restore = model is not None and hasattr(provider, "_model")
+        if not should_restore:
+            return await provider.query(prompt)
+
+        lock = getattr(provider, "_easycord_model_lock", None)
+        if lock is None:
+            lock = asyncio.Lock()
+            provider._easycord_model_lock = lock
+        async with lock:
+            provider._model = model
+            try:
+                return await provider.query(prompt)
+            finally:
+                provider._model = old_model
Evidence
ctx.ai() restores only provider._model after the call. Providers like GeminiProvider and
HuggingFaceProvider construct _client using self._model only when _client is None, and
subsequent queries use the cached _client without consulting _model. If the first query happens
under a temporary override, _client is created for the overridden model and persists after
_model is restored.

easycord/_context_base.py[147-161]
easycord/plugins/_ai_providers.py[122-141]
easycord/plugins/_ai_providers.py[274-295]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ctx.ai(..., model=...)` implements a “temporary” model override by mutating `provider._model`, but it does not account for providers that bind model into `provider._client` at initialization time.

For such providers, the override can either:
- Have no effect (if `_client` already exists), or
- Persist beyond the call (if `_client` is created during the override), leaving the provider permanently using the wrong model.

### Issue Context
- `GeminiProvider._init_client()` creates `GenerativeModel(self._model)` once and later queries use `_client.generate_content(...)`.
- `HuggingFaceProvider._init_client()` creates `InferenceClient(model=self._model)` once and later queries use `_client.text_generation(...)`.

### Fix Focus Areas
- easycord/_context_base.py[147-161]
- easycord/plugins/_ai_providers.py[122-141]
- easycord/plugins/_ai_providers.py[274-295]

### Suggested fix
One of:
1) Add a supported `query(prompt, *, model=None)` API so model is per-call.
2) Implement a provider hook like `_set_model_temporarily(model)` that also resets/rebuilds `_client` safely.
3) At minimum, in `ctx.ai` override path: snapshot and restore both `_model` and `_client` (and any other model-bound state) under the same lock.

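A self-contained sketch of option 1 above: the model becomes a per-call `query()` argument, so nothing on the provider has to be mutated or restored. `EchoProvider` is an invented stand-in for illustration, not one of the easycord providers.

```python
import asyncio


class EchoProvider:
    """Stand-in provider used only to illustrate a per-call model override."""

    def __init__(self, model: str = "default-model") -> None:
        self._model = model

    async def query(self, prompt: str, *, model: str | None = None) -> str:
        # The model is resolved per call; no shared state changes, so nothing can leak
        # into a cached client or a concurrent request.
        return f"[{model or self._model}] {prompt}"


async def main() -> None:
    provider = EchoProvider()
    print(await provider.query("hello"))                     # uses default-model
    print(await provider.query("hello", model="override"))   # one-off override


asyncio.run(main())
```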


5. Thinking errors not edited 🐞 Bug ≡ Correctness
Description
When AIPlugin.ask() uses a thinking message (OpenClaudePlugin), errors are sent via
ctx.respond(ephemeral=True) as follow-ups instead of editing the original thinking response, leaving
a stale “Thinking…” message behind.
Code

easycord/plugins/openclaude.py[R92-120]

+        retry_after = self._rate_limit_retry_after(ctx)
+        if retry_after is not None:
+            await ctx.respond(
+                ctx.t(
+                    "ai.rate_limited",
+                    default="You're asking too quickly. Try again in {seconds:.0f}s.",
+                    seconds=retry_after,
+                ),
+                ephemeral=True,
+            )
+            return
+
+        if self._thinking_key:
+            await ctx.respond(
+                ctx.t(self._thinking_key, default="Thinking..."),
+            )
+        else:
+            await ctx.defer()

        try:
            response_text = await self._provider.query(prompt)
-            await ctx.respond(self._format_response(response_text))
+            response = self._format_response(response_text)
+            if self._thinking_key:
+                await ctx.edit_response(response)
+            else:
+                await ctx.respond(response)

        except ImportError as exc:
            await ctx.respond(
Evidence
With thinking enabled, AIPlugin.ask() calls ctx.respond() before entering the try/except.
BaseContext.respond() then treats subsequent ctx.respond() calls as follow-ups, so the exception
handlers send a separate ephemeral message and never update the original thinking message.

easycord/plugins/openclaude.py[92-135]
easycord/_context_base.py[101-124]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
When `_thinking_key` is set, `AIPlugin.ask()` sends an initial thinking response and then uses `ctx.edit_response()` only on success. On errors, it calls `ctx.respond(..., ephemeral=True)` which becomes a follow-up, leaving the original thinking message stale.

### Issue Context
- `BaseContext.respond()` sends follow-ups after the first response.
- With thinking enabled, the first response is the thinking text.

### Fix Focus Areas
- easycord/plugins/openclaude.py[104-135]
- easycord/_context_base.py[101-124]

### Suggested fix
In both `except ImportError` and `except Exception` blocks:
- If `_thinking_key` is set, call `await ctx.edit_response(<error text>)` (and avoid ephemeral follow-up), mirroring the success path.
- Otherwise keep existing `ctx.respond(..., ephemeral=True)` follow-up behavior after `defer()`.

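One possible shape for the suggested error path, reusing `_thinking_key`, `ctx.edit_response`, and `ctx.respond` from the diff above; a sketch, not the merged code. Each `except` block in `ask()` would then delegate to this helper instead of calling `ctx.respond()` directly.

```python
async def _send_error(self, ctx, message: str) -> None:
    """Deliver an error without leaving the initial thinking message in place."""
    if self._thinking_key:
        # The thinking text was already sent as the first response, so edit it in place.
        await ctx.edit_response(message)
    else:
        # After a plain defer(), an ephemeral follow-up is still appropriate.
        await ctx.respond(message, ephemeral=True)
```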


6. Auto-fix workflow injection 🐞 Bug ⛨ Security
Description
The auto-fix workflow can be triggered by any issue commenter via '/fix-issues' (no permission
check) and interpolates workflow_dispatch inputs directly into bash (fix_command/commit_message),
enabling unwanted runs/pushes and potential command injection in the runner context.
Code

.github/workflows/auto-fix-issues.yml[R28-100]

+    if: github.event_name == 'workflow_dispatch' || contains(github.event.comment.body, '/fix-issues')
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Resolve target branch
+        id: target
+        uses: actions/github-script@v7
+        with:
+          script: |
+            if (context.eventName === "workflow_dispatch") {
+              core.setOutput("branch", core.getInput("target_branch"));
+              core.setOutput("fix_command", core.getInput("fix_command"));
+              core.setOutput("commit_message", core.getInput("commit_message"));
+              return;
+            }
+
+            const issue = context.payload.issue;
+            if (!issue.pull_request) {
+              core.setFailed("/fix-issues comments are only supported on pull requests.");
+              return;
+            }
+
+            const { data: pull } = await github.rest.pulls.get({
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              pull_number: issue.number,
+            });
+
+            if (pull.head.repo.full_name !== `${context.repo.owner}/${context.repo.repo}`) {
+              core.setFailed("Auto-fix can only push to branches in this repository.");
+              return;
+            }
+
+            core.setOutput("branch", pull.head.ref);
+            core.setOutput("fix_command", "python scripts/fix_issues.py");
+            core.setOutput("commit_message", `chore: auto-fix PR #${issue.number} issue triage`);
+
+      - name: Check out target branch
+        uses: actions/checkout@v4
+        with:
+          ref: ${{ steps.target.outputs.branch }}
+          token: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: "3.11"
+
+      - name: Install project and build tools
+        run: python -m pip install --upgrade pip build && python -m pip install -e ".[dev]"
+
+      - name: Apply fixes
+        run: ${{ steps.target.outputs.fix_command }}
+
+      - name: Run tests
+        run: python -m pytest
+
+      - name: Build package
+        run: python -m build
+
+      - name: Commit and push fixes
+        shell: bash
+        run: |
+          if git diff --quiet && git diff --cached --quiet; then
+            echo "No fixes produced."
+            exit 0
+          fi
+
+          git config user.name "github-actions[bot]"
+          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
+          git add -A
+          git commit -m "${{ steps.target.outputs.commit_message }}"
+          git push origin "HEAD:${{ steps.target.outputs.branch }}"
Evidence
The job condition only checks for the substring in the comment body and then checks out a branch and
runs commands with contents: write. Additionally, the commit message is inserted directly into the
bash script via Actions expression interpolation, so workflow_dispatch users can supply strings that
bash will interpret (e.g., $(...)).

.github/workflows/auto-fix-issues.yml[18-29]
.github/workflows/auto-fix-issues.yml[79-81]
.github/workflows/auto-fix-issues.yml[88-100]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The workflow is triggerable by any commenter (`issue_comment`) and performs a checkout + test/build + push with `contents: write`. Separately, `workflow_dispatch` inputs are interpolated directly into bash (`fix_command` and `commit_message`), which can be interpreted as shell syntax.

### Issue Context
- `if:` gate only checks `contains(comment.body, '/fix-issues')`.
- `git commit -m "${{ steps.target.outputs.commit_message }}"` inserts an arbitrary string into the script body.

### Fix Focus Areas
- .github/workflows/auto-fix-issues.yml[18-29]
- .github/workflows/auto-fix-issues.yml[79-81]
- .github/workflows/auto-fix-issues.yml[88-100]

### Suggested fix
- Require trusted actors for comment trigger (e.g., `author_association` in OWNER/MEMBER/COLLABORATOR, or check PR author).
- Harden interpolation:
 - Pass inputs through `env:` and reference as `"$COMMIT_MESSAGE"` so bash treats the content as data.
 - Consider removing arbitrary `fix_command` input entirely (or restrict to a fixed allowlist).
- Optionally tighten the job `if:` to avoid evaluating comment fields on non-issue_comment events (wrap with an explicit event_name check).





@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 3 issues, and left some high level feedback:

  • The in-memory rate limit store _requests in AIPlugin only ever shrinks per-key lists but never removes idle keys, so over a long-lived bot this can grow without bound for many guild/user combinations; consider pruning old keys or using a bounded/expiring structure (see the pruning sketch below).
  • Context.ai temporarily mutates provider._model on a shared provider instance, which can interfere with other concurrent code paths that use the same provider; if possible, prefer passing a model parameter through to query or using a provider clone/wrapper instead of mutating shared state.
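
A small sketch of the pruning idea from the first bullet, written against the `(guild_id, user_id) -> [timestamps]` map described in this review; the free-function boundary is illustrative only.

```python
import time


def prune_requests(requests: dict[tuple[int | None, int], list[float]], rate_window: float) -> None:
    """Drop expired timestamps and delete idle (guild_id, user_id) keys entirely."""
    window_start = time.monotonic() - rate_window
    for key in list(requests):
        fresh = [stamp for stamp in requests[key] if stamp > window_start]
        if fresh:
            requests[key] = fresh
        else:
            del requests[key]  # idle guild/user pairs no longer hold an entry
```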
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The in-memory rate limit store `_requests` in `AIPlugin` only ever shrinks per-key lists but never removes idle keys, so over a long-lived bot this can grow without bound for many guild/user combinations; consider pruning old keys or using a bounded/expiring structure.
- `Context.ai` temporarily mutates `provider._model` on a shared provider instance, which can interfere with other concurrent code paths that use the same provider; if possible, prefer passing a model parameter through to `query` or using a provider clone/wrapper instead of mutating shared state.

## Individual Comments

### Comment 1
<location path="easycord/plugins/openclaude.py" line_range="104-113" />
<code_context>
+            )
+            return
+
+        if self._thinking_key:
+            await ctx.respond(
+                ctx.t(self._thinking_key, default="Thinking..."),
+            )
+        else:
+            await ctx.defer()

         try:
             response_text = await self._provider.query(prompt)
-            await ctx.respond(self._format_response(response_text))
+            response = self._format_response(response_text)
+            if self._thinking_key:
+                await ctx.edit_response(response)
+            else:
+                await ctx.respond(response)

         except ImportError as exc:
             await ctx.respond(
</code_context>
<issue_to_address>
**issue (bug_risk):** Mixing `respond` and `edit_response` based on `thinking_key` can break error handling after an initial response.

In the `self._thinking_key` case, the initial reply uses `ctx.respond(...)`, and the success path correctly switches to `ctx.edit_response(...)`. But the exception handlers (`ImportError`, `ValueError`, generic `Exception`) still use `ctx.respond(...)`, which can fail once the interaction has already been responded to. These handlers should use `edit_response` (or `followup.send`) when `self._thinking_key` is set, matching the success path behavior.
</issue_to_address>

### Comment 2
<location path=".github/workflows/auto-fix-issues.yml" line_range="32-40" />
<code_context>
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Resolve target branch
+        id: target
+        uses: actions/github-script@v7
+        with:
+          script: |
+            if (context.eventName === "workflow_dispatch") {
+              core.setOutput("branch", core.getInput("target_branch"));
+              core.setOutput("fix_command", core.getInput("fix_command"));
+              core.setOutput("commit_message", core.getInput("commit_message"));
+              return;
+            }
</code_context>
<issue_to_address>
**issue (bug_risk):** Reading workflow_dispatch inputs via `core.getInput` inside `github-script` is likely incorrect.

In the `workflow_dispatch` branch, this script calls `core.getInput` for `target_branch`, `fix_command`, and `commit_message`, but those are workflow-level inputs, not inputs to this `github-script` step. In `github-script` you should read them from `context.payload.inputs` (e.g. `context.payload.inputs.target_branch`) or pass them into the step via `with:`/`env:`. As written, these values will be empty, so the resolved outputs will be blank and later steps (checkout, fix command) are likely to fail.
</issue_to_address>

### Comment 3
<location path="tests/test_openclaude_plugin.py" line_range="476-485" />
<code_context>
+    ctx.edit_response.assert_called_once_with("Test response")
+
+
+@pytest.mark.asyncio
+async def test_aiplugin_rate_limits_per_user():
+    """AIPlugin.ask rate limits repeated requests per guild/user."""
+    provider = MagicMock(spec=AIProvider)
+    provider.query = AsyncMock(return_value="AI response")
+    plugin = AIPlugin(provider=provider, rate_limit=1, rate_window=60)
+    ctx = MagicMock()
+    ctx.defer = AsyncMock()
+    ctx.respond = AsyncMock()
+    ctx.t = MagicMock(return_value="Slow down")
+    ctx.user.id = 123
+    ctx.guild_id = 456
+
+    await plugin.ask(ctx, prompt="first")
+    await plugin.ask(ctx, prompt="second")
+
+    assert provider.query.await_count == 1
+    ctx.t.assert_called_with(
+        "ai.rate_limited",
+        default="You're asking too quickly. Try again in {seconds:.0f}s.",
+        seconds=pytest.approx(60, abs=1),
+    )
+    assert ctx.respond.call_args_list[-1].kwargs["ephemeral"] is True
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add a test to cover rate-limit window expiry behavior

This test only asserts that a second request within the window is blocked. It doesn’t verify that requests are allowed again after the window elapses. Because `_rate_limit_retry_after` depends on `time.monotonic()` and `rate_window`, a regression there could go unnoticed. Please add a test that controls `time.monotonic()` (or uses a fake clock) to:

1. Allow initial `rate_limit` calls.
2. Advance time beyond `rate_window`.
3. Assert that a subsequent call is allowed (i.e., `provider.query` is called again and no rate-limit message is sent).

Suggested implementation:

```python
    ctx.user.id = 123
    ctx.guild_id = 456

    await plugin.ask(ctx, prompt="test")



    await plugin.ask(ctx, prompt="test")

    ctx = MagicMock()
    ctx.defer = AsyncMock()
    ctx.respond = AsyncMock()
    ctx.user.id = 123
    ctx.guild_id = 456

    await plugin.ask(ctx, prompt="test")


@pytest.mark.asyncio
async def test_aiplugin_rate_limits_reset_after_window(monkeypatch):
    """AIPlugin.ask allows requests again after the rate-limit window expires."""
    provider = MagicMock(spec=AIProvider)
    provider.query = AsyncMock(return_value="AI response")
    plugin = AIPlugin(provider=provider, rate_limit=1, rate_window=60)

    # Control time.monotonic so we can advance time in the test
    fake_time = 1000.0

    def fake_monotonic() -> float:
        return fake_time

    # NOTE: adjust the target string below if AIPlugin's module path differs.
    monkeypatch.setattr("openclaude.plugin.time.monotonic", fake_monotonic)

    ctx = MagicMock()
    ctx.defer = AsyncMock()
    ctx.respond = AsyncMock()
    ctx.t = MagicMock()
    ctx.user.id = 123
    ctx.guild_id = 456

    # First call should be allowed
    await plugin.ask(ctx, prompt="first")
    assert provider.query.await_count == 1
    ctx.t.assert_not_called()

    # Second call within the window should be rate-limited
    await plugin.ask(ctx, prompt="second")
    assert provider.query.await_count == 1
    ctx.t.assert_called_with(
        "ai.rate_limited",
        default="You're asking too quickly. Try again in {seconds:.0f}s.",
        seconds=pytest.approx(60, abs=1),
    )

    # Advance time beyond the rate window
    fake_time += 61

    # After the window, a new call should be allowed again (no rate-limit message)
    ctx.t.reset_mock()
    await plugin.ask(ctx, prompt="third")
    assert provider.query.await_count == 2
    ctx.t.assert_not_called()

```

1. The `monkeypatch.setattr` target `"openclaude.plugin.time.monotonic"` assumes that:
   - The AIPlugin implementation lives in `openclaude/plugin.py`, and
   - It calls `time.monotonic` via an imported `time` module.
   If instead the code does `from time import monotonic`, change the target to `"openclaude.plugin.monotonic"`, and if the module path differs, update `"openclaude.plugin"` accordingly.
2. This test relies on `pytest`, `MagicMock`, `AsyncMock`, `AIProvider`, and `AIPlugin` already being imported in `tests/test_openclaude_plugin.py` as in the surrounding tests. If any of these are missing, add the appropriate imports at the top of the file.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines +104 to +113
        if self._thinking_key:
            await ctx.respond(
                ctx.t(self._thinking_key, default="Thinking..."),
            )
        else:
            await ctx.defer()

        try:
            response_text = await self._provider.query(prompt)
            await ctx.respond(self._format_response(response_text))
            response = self._format_response(response_text)


issue (bug_risk): Mixing respond and edit_response based on thinking_key can break error handling after an initial response.

In the self._thinking_key case, the initial reply uses ctx.respond(...), and the success path correctly switches to ctx.edit_response(...). But the exception handlers (ImportError, ValueError, generic Exception) still use ctx.respond(...), which can fail once the interaction has already been responded to. These handlers should use edit_response (or followup.send) when self._thinking_key is set, matching the success path behavior.

Comment on lines +32 to +40
      - name: Resolve target branch
        id: target
        uses: actions/github-script@v7
        with:
          script: |
            if (context.eventName === "workflow_dispatch") {
              core.setOutput("branch", core.getInput("target_branch"));
              core.setOutput("fix_command", core.getInput("fix_command"));
              core.setOutput("commit_message", core.getInput("commit_message"));


issue (bug_risk): Reading workflow_dispatch inputs via core.getInput inside github-script is likely incorrect.

In the workflow_dispatch branch, this script calls core.getInput for target_branch, fix_command, and commit_message, but those are workflow-level inputs, not inputs to this github-script step. In github-script you should read them from context.payload.inputs (e.g. context.payload.inputs.target_branch) or pass them into the step via with:/env:. As written, these values will be empty, so the resolved outputs will be blank and later steps (checkout, fix command) are likely to fail.

Comment on lines 476 to +485
@pytest.mark.asyncio
async def test_openclaude_ask_defers_and_responds():
    """OpenClaudePlugin.ask defers and responds."""
    """OpenClaudePlugin.ask shows a localized thinking message and edits it."""
    plugin = OpenClaudePlugin(api_key="test-key")
    ctx = MagicMock()
    ctx.defer = AsyncMock()
    ctx.t = MagicMock(return_value="Thinking locally...")
    ctx.respond = AsyncMock()
    ctx.edit_response = AsyncMock()
    ctx.user.id = 123


suggestion (testing): Add a test to cover rate-limit window expiry behavior

This test only asserts that a second request within the window is blocked. It doesn’t verify that requests are allowed again after the window elapses. Because _rate_limit_retry_after depends on time.monotonic() and rate_window, a regression there could go unnoticed. Please add a test that controls time.monotonic() (or uses a fake clock) to:

  1. Allow initial rate_limit calls.
  2. Advance time beyond rate_window.
  3. Assert that a subsequent call is allowed (i.e., provider.query is called again and no rate-limit message is sent).

Suggested implementation:

    ctx.user.id = 123
    ctx.guild_id = 456

    await plugin.ask(ctx, prompt="test")



    await plugin.ask(ctx, prompt="test")

    ctx = MagicMock()
    ctx.defer = AsyncMock()
    ctx.respond = AsyncMock()
    ctx.user.id = 123
    ctx.guild_id = 456

    await plugin.ask(ctx, prompt="test")


@pytest.mark.asyncio
async def test_aiplugin_rate_limits_reset_after_window(monkeypatch):
    """AIPlugin.ask allows requests again after the rate-limit window expires."""
    provider = MagicMock(spec=AIProvider)
    provider.query = AsyncMock(return_value="AI response")
    plugin = AIPlugin(provider=provider, rate_limit=1, rate_window=60)

    # Control time.monotonic so we can advance time in the test
    fake_time = 1000.0

    def fake_monotonic() -> float:
        return fake_time

    # NOTE: adjust the target string below if AIPlugin's module path differs.
    monkeypatch.setattr("openclaude.plugin.time.monotonic", fake_monotonic)

    ctx = MagicMock()
    ctx.defer = AsyncMock()
    ctx.respond = AsyncMock()
    ctx.t = MagicMock()
    ctx.user.id = 123
    ctx.guild_id = 456

    # First call should be allowed
    await plugin.ask(ctx, prompt="first")
    assert provider.query.await_count == 1
    ctx.t.assert_not_called()

    # Second call within the window should be rate-limited
    await plugin.ask(ctx, prompt="second")
    assert provider.query.await_count == 1
    ctx.t.assert_called_with(
        "ai.rate_limited",
        default="You're asking too quickly. Try again in {seconds:.0f}s.",
        seconds=pytest.approx(60, abs=1),
    )

    # Advance time beyond the rate window
    fake_time += 61

    # After the window, a new call should be allowed again (no rate-limit message)
    ctx.t.reset_mock()
    await plugin.ask(ctx, prompt="third")
    assert provider.query.await_count == 2
    ctx.t.assert_not_called()
  1. The monkeypatch.setattr target "openclaude.plugin.time.monotonic" assumes that:
    • The AIPlugin implementation lives in openclaude/plugin.py, and
    • It calls time.monotonic via an imported time module.
      If instead the code does from time import monotonic, change the target to "openclaude.plugin.monotonic", and if the module path differs, update "openclaude.plugin" accordingly.
  2. This test relies on pytest, MagicMock, AsyncMock, AIProvider, and AIPlugin already being imported in tests/test_openclaude_plugin.py as in the surrounding tests. If any of these are missing, add the appropriate imports at the top of the file.


augmentcode Bot commented Apr 28, 2026

🤖 Augment PR Summary

Summary: This PR adds an automated “issue auto-fix” workflow and improves the framework’s AI integration story.

Changes:

  • Introduces a GitHub Actions Auto-fix Issues workflow that can be manually dispatched or triggered via /fix-issues PR comments, then runs a deterministic fix hook, tests, and a build before pushing results.
  • Adds Bot(ai_provider=...) to store a shared AI provider on the bot instance.
  • Adds await ctx.ai(...) as a convenience helper to query the configured provider, with an optional temporary model override.
  • Extends AIPlugin / OpenClaudePlugin with per-user, per-guild rate limiting and localized cooldown messaging.
  • Makes OpenClaudePlugin show a localized “thinking” response first, then edit that message with the final model output.
  • Updates README/API docs/examples and adds tests for the new AI helper and rate limiting behaviors.

Technical Notes: The context helper temporarily mutates provider _model under an async lock; the workflow validates via pytest and python -m build before pushing commits.



@augmentcode augmentcode Bot left a comment


Review completed. 5 suggestions posted.


Comment augment review to trigger a new review at any time.

        with:
          script: |
            if (context.eventName === "workflow_dispatch") {
              core.setOutput("branch", core.getInput("target_branch"));

@augmentcode augmentcode Bot Apr 28, 2026


.github/workflows/auto-fix-issues.yml:38 — In workflow_dispatch, core.getInput(...) reads action inputs, not workflow_dispatch inputs, so branch/fix_command/commit_message will likely be empty and the workflow won’t behave as intended. You probably want to read from context.payload.inputs / github.event.inputs instead of core.getInput.

Severity: high

Other Locations
  • .github/workflows/auto-fix-issues.yml:39
  • .github/workflows/auto-fix-issues.yml:40



jobs:
  auto-fix:
    if: github.event_name == 'workflow_dispatch' || contains(github.event.comment.body, '/fix-issues')

@augmentcode augmentcode Bot Apr 28, 2026


.github/workflows/auto-fix-issues.yml:28 — The /fix-issues trigger doesn’t check who authored the comment; any user who can comment on a PR could trigger a workflow run that has contents: write and can push commits to in-repo branches. Consider restricting this to trusted actors (e.g., repo members) to avoid abuse/spam and unexpected pushes.

Severity: medium


      - name: Commit and push fixes
        shell: bash
        run: |
          if git diff --quiet && git diff --cached --quiet; then

@augmentcode augmentcode Bot Apr 28, 2026


.github/workflows/auto-fix-issues.yml:91 — git diff --quiet ignores untracked files, so fixes that add new files could be missed and the workflow would incorrectly exit with “No fixes produced.” This can cause auto-fix changes to silently not be committed/pushed.

Severity: medium


            response_text = await self._provider.query(prompt)
            await ctx.respond(self._format_response(response_text))
            response = self._format_response(response_text)
            if self._thinking_key:

@augmentcode augmentcode Bot Apr 28, 2026


easycord/plugins/openclaude.py:114 — When thinking_key is enabled you edit the initial “Thinking…” message only on the success path; the except blocks still call ctx.respond(...), which will send follow-ups and can leave the original thinking message permanently visible. Consider aligning error handling with the edit pattern so the initial message isn’t left stale.

Severity: medium


self._rate_limit = rate_limit
self._rate_window = rate_window
self._thinking_key = thinking_key
self._requests: dict[tuple[int | None, int], list[float]] = {}

@augmentcode augmentcode Bot Apr 28, 2026


easycord/plugins/openclaude.py:52 — _requests retains an entry per (guild, user) indefinitely; even though you filter old timestamps, the dict keys themselves are never removed for inactive users, which can lead to unbounded growth in long-running bots. Consider pruning empty/expired keys to avoid a slow memory leak.

Severity: low
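For illustration, a minimal pruning sketch under the layout quoted above (the standalone helper name _prune_requests and its signature are invented for the example; the real plugin would do this inside its rate-limit check):

```python
import time


def _prune_requests(
    requests: dict[tuple[int | None, int], list[float]],
    window: float,
) -> None:
    """Drop timestamps outside the rate window and delete now-empty keys."""
    cutoff = time.monotonic() - window
    for key in list(requests):
        fresh = [ts for ts in requests[key] if ts > cutoff]
        if fresh:
            requests[key] = fresh
        else:
            del requests[key]  # forget inactive (guild, user) pairs entirely
```

Running something like this at the start of each rate-limit check keeps the dict bounded by the number of users active within the window.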


Comment thread easycord/_context_base.py
Comment on lines +137 to +161
async def ai(self, prompt: str, *, provider=None, model: str | None = None) -> str:
    """Query the configured AI provider and return response text.

    Pass ``provider=...`` for one-off calls, or configure ``Bot(ai_provider=...)``
    so commands can call ``await ctx.ai("...")`` directly.
    """
    provider = provider or getattr(self.interaction.client, "ai_provider", None)
    if provider is None:
        raise RuntimeError("No AI provider configured. Pass provider=... or set Bot(ai_provider=...).")

    old_model = getattr(provider, "_model", None)
    should_restore = model is not None and hasattr(provider, "_model")
    if not should_restore:
        return await provider.query(prompt)

    lock = getattr(provider, "_easycord_model_lock", None)
    if lock is None:
        lock = asyncio.Lock()
        provider._easycord_model_lock = lock
    async with lock:
        provider._model = model
        try:
            return await provider.query(prompt)
        finally:
            provider._model = old_model

Action required

1. ctx.ai() uncaught provider errors 📎 Requirement gap ☼ Reliability

ctx.ai() directly awaits provider.query() and can raise exceptions (including missing provider)
that bubble up and may crash the command instead of surfacing a user-safe error. It also provides no
integration point for consistent rate limiting, so custom commands using ctx.ai() can bypass the
shared AI rate limiting behavior.
Agent Prompt
## Issue description
`ctx.ai()` can raise unhandled exceptions from missing provider configuration and from `provider.query()`, and it has no shared rate limiting integration point.

## Issue Context
Compliance requires `ctx.ai()` to be a safe, unified entry point with built-in error handling and a consistent place to enforce/govern rate limits.

## Fix Focus Areas
- easycord/_context_base.py[137-161]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
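For illustration, a command could wrap ctx.ai() itself until the library provides built-in handling; this is only a sketch (the error strings are invented, and it assumes a bot configured with Bot(ai_provider=...) as in the README):

```python
@bot.slash(description="Ask AI")
async def ask(ctx, prompt: str):
    try:
        answer = await ctx.ai(prompt)
    except RuntimeError:
        # Raised by ctx.ai when no provider is configured.
        await ctx.respond("AI is not configured for this bot.", ephemeral=True)
        return
    except Exception:
        # Provider/network failures should not crash the command.
        await ctx.respond("The AI provider failed to answer. Please try again later.", ephemeral=True)
        return
    await ctx.respond(answer[:2000])
```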

Comment thread README.md
Comment on lines +109 to +116
from easycord.plugins import OpenAIProvider

bot = Bot(ai_provider=OpenAIProvider(api_key="sk-..."))

@bot.slash(description="Ask AI")
async def ask(ctx, prompt: str):
    response = await ctx.ai(prompt, model="gpt-4o")
    await ctx.respond(response[:2000])

Action required

2. Docs show api_key= literals 📘 Rule violation ⛨ Security

Documentation examples add inline api_key="sk-..." values, which violates the requirement to avoid
hardcoded secrets and to use environment variables/secure configuration for credentials. Even as
placeholders, these examples encourage embedding API keys in code.
Agent Prompt
## Issue description
Docs include inline `api_key="sk-..."` examples, which conflicts with the no-hardcoded-secrets requirement.

## Issue Context
Examples should demonstrate reading API keys from environment variables (e.g., `os.getenv(...)`) or documented secure configuration patterns.

## Fix Focus Areas
- README.md[109-116]
- docs/api.md[662-673]
- docs/examples.md[84-95]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
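A possible replacement for those examples, mirroring the README excerpt above but reading the key from the environment (OPENAI_API_KEY is only an illustrative variable name; any documented secure configuration pattern would do):

```python
import os

from easycord.plugins import OpenAIProvider

# Read the key from the environment (or a secrets manager) instead of
# hardcoding it; fail loudly if it is missing.
bot = Bot(ai_provider=OpenAIProvider(api_key=os.environ["OPENAI_API_KEY"]))
```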

Comment thread easycord/_context_base.py
Comment on lines +137 to +161
async def ai(self, prompt: str, *, provider=None, model: str | None = None) -> str:
    """Query the configured AI provider and return response text.

    Pass ``provider=...`` for one-off calls, or configure ``Bot(ai_provider=...)``
    so commands can call ``await ctx.ai("...")`` directly.
    """
    provider = provider or getattr(self.interaction.client, "ai_provider", None)
    if provider is None:
        raise RuntimeError("No AI provider configured. Pass provider=... or set Bot(ai_provider=...).")

    old_model = getattr(provider, "_model", None)
    should_restore = model is not None and hasattr(provider, "_model")
    if not should_restore:
        return await provider.query(prompt)

    lock = getattr(provider, "_easycord_model_lock", None)
    if lock is None:
        lock = asyncio.Lock()
        provider._easycord_model_lock = lock
    async with lock:
        provider._model = model
        try:
            return await provider.query(prompt)
        finally:
            provider._model = old_model

Action required

3. Ctx.ai model race 🐞 Bug ≡ Correctness

BaseContext.ai() mutates provider._model under a lock only for override calls; concurrent ctx.ai()
calls without model bypass that lock and can run while _model is temporarily changed, sending
requests with the wrong model.
Agent Prompt
### Issue description
`BaseContext.ai()` temporarily overrides `provider._model` while awaiting `provider.query()`, but only the override path uses a lock. Any concurrent `ctx.ai()` call without `model=...` can observe the overridden `_model` and issue a request with the wrong model.

### Issue Context
- The override path uses `provider._easycord_model_lock`, but the non-override path does not.
- Built-in providers (e.g. `OpenAIProvider`) read `self._model` during `query()`.

### Fix Focus Areas
- easycord/_context_base.py[137-161]
- easycord/plugins/_ai_providers.py[67-103]

### Suggested fix
- Prefer: change provider interface to accept `model` as an argument (no shared state mutation).
- If keeping mutation: acquire the same lock for *all* `provider.query()` calls when the provider has `_model` (and move `old_model = ...` inside the locked region).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
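As a sketch of the second option (keep the mutation but take the lock for every query against a provider that has _model), using the same attribute names as the excerpt and not presented as the library's actual fix:

```python
async def ai(self, prompt: str, *, provider=None, model: str | None = None) -> str:
    provider = provider or getattr(self.interaction.client, "ai_provider", None)
    if provider is None:
        raise RuntimeError("No AI provider configured. Pass provider=... or set Bot(ai_provider=...).")

    if not hasattr(provider, "_model"):
        # No shared model state to race on: query directly.
        return await provider.query(prompt)

    # Every call against a provider with _model goes through the same lock,
    # so a temporary override can never be observed by a concurrent call.
    lock = getattr(provider, "_easycord_model_lock", None)
    if lock is None:
        lock = asyncio.Lock()
        provider._easycord_model_lock = lock

    async with lock:
        old_model = provider._model  # snapshot inside the locked region
        if model is not None:
            provider._model = model
        try:
            return await provider.query(prompt)
        finally:
            provider._model = old_model
```

The trade-off is that this serializes all AI queries per provider; the per-call model argument named as the preferred fix avoids that cost.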

Comment thread easycord/_context_base.py
Comment on lines +147 to +161
old_model = getattr(provider, "_model", None)
should_restore = model is not None and hasattr(provider, "_model")
if not should_restore:
return await provider.query(prompt)

lock = getattr(provider, "_easycord_model_lock", None)
if lock is None:
lock = asyncio.Lock()
provider._easycord_model_lock = lock
async with lock:
provider._model = model
try:
return await provider.query(prompt)
finally:
provider._model = old_model

Action required

4. Model override persists 🐞 Bug ≡ Correctness

ctx.ai(model=...) can permanently change behavior for providers that bind the model when
initializing their SDK client (e.g., Gemini/HuggingFace): a one-off override can create a client for
the custom model and then restore only provider._model, leaving provider._client configured for the
wrong model thereafter.
Agent Prompt
### Issue description
`ctx.ai(..., model=...)` implements a “temporary” model override by mutating `provider._model`, but it does not account for providers that bind model into `provider._client` at initialization time.

For such providers, the override can either:
- Have no effect (if `_client` already exists), or
- Persist beyond the call (if `_client` is created during the override), leaving the provider permanently using the wrong model.

### Issue Context
- `GeminiProvider._init_client()` creates `GenerativeModel(self._model)` once and later queries use `_client.generate_content(...)`.
- `HuggingFaceProvider._init_client()` creates `InferenceClient(model=self._model)` once and later queries use `_client.text_generation(...)`.

### Fix Focus Areas
- easycord/_context_base.py[147-161]
- easycord/plugins/_ai_providers.py[122-141]
- easycord/plugins/_ai_providers.py[274-295]

### Suggested fix
One of:
1) Add a supported `query(prompt, *, model=None)` API so model is per-call.
2) Implement a provider hook like `_set_model_temporarily(model)` that also resets/rebuilds `_client` safely.
3) At minimum, in `ctx.ai` override path: snapshot and restore both `_model` and `_client` (and any other model-bound state) under the same lock.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
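To illustrate the first option (a per-call model argument, so no shared state is mutated), here is a self-contained toy; EchoProvider is invented for the example and is not an easycord class:

```python
import asyncio


class EchoProvider:
    """Toy provider showing a per-call model override."""

    def __init__(self, model: str = "default-model") -> None:
        self._model = model

    async def query(self, prompt: str, *, model: str | None = None) -> str:
        effective = model or self._model   # override applies to this call only
        await asyncio.sleep(0)             # stand-in for the real API round trip
        return f"[{effective}] {prompt}"


async def main() -> None:
    provider = EchoProvider()
    # Concurrent calls with and without an override cannot interfere, because
    # the override never touches provider._model or any model-bound client.
    print(await asyncio.gather(
        provider.query("hello"),
        provider.query("hello", model="special-model"),
    ))


asyncio.run(main())
```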

Comment on lines +92 to 120
retry_after = self._rate_limit_retry_after(ctx)
if retry_after is not None:
    await ctx.respond(
        ctx.t(
            "ai.rate_limited",
            default="You're asking too quickly. Try again in {seconds:.0f}s.",
            seconds=retry_after,
        ),
        ephemeral=True,
    )
    return

if self._thinking_key:
    await ctx.respond(
        ctx.t(self._thinking_key, default="Thinking..."),
    )
else:
    await ctx.defer()

try:
    response_text = await self._provider.query(prompt)
    await ctx.respond(self._format_response(response_text))
    response = self._format_response(response_text)
    if self._thinking_key:
        await ctx.edit_response(response)
    else:
        await ctx.respond(response)

except ImportError as exc:
    await ctx.respond(

Action required

5. Thinking errors not edited 🐞 Bug ≡ Correctness

When AIPlugin.ask() uses a thinking message (OpenClaudePlugin), errors are sent via
ctx.respond(ephemeral=True) as follow-ups instead of editing the original thinking response, leaving
a stale “Thinking…” message behind.
Agent Prompt
### Issue description
When `_thinking_key` is set, `AIPlugin.ask()` sends an initial thinking response and then uses `ctx.edit_response()` only on success. On errors, it calls `ctx.respond(..., ephemeral=True)` which becomes a follow-up, leaving the original thinking message stale.

### Issue Context
- `BaseContext.respond()` sends follow-ups after the first response.
- With thinking enabled, the first response is the thinking text.

### Fix Focus Areas
- easycord/plugins/openclaude.py[104-135]
- easycord/_context_base.py[101-124]

### Suggested fix
In both `except ImportError` and `except Exception` blocks:
- If `_thinking_key` is set, call `await ctx.edit_response(<error text>)` (and avoid ephemeral follow-up), mirroring the success path.
- Otherwise keep existing `ctx.respond(..., ephemeral=True)` follow-up behavior after `defer()`.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
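A sketch of how the except blocks could mirror the success path; the ai.error localization key and its default text are assumptions, while ctx.t, ctx.edit_response, and the ephemeral follow-up come from the excerpts above:

```python
try:
    response_text = await self._provider.query(prompt)
    response = self._format_response(response_text)
    if self._thinking_key:
        await ctx.edit_response(response)
    else:
        await ctx.respond(response)
except Exception as exc:  # ImportError would receive the same treatment
    error_text = ctx.t("ai.error", default="The AI request failed: {error}", error=exc)
    if self._thinking_key:
        # Replace the "Thinking..." message instead of leaving it stale.
        await ctx.edit_response(error_text)
    else:
        await ctx.respond(error_text, ephemeral=True)
```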

Comment on lines +28 to +100
if: github.event_name == 'workflow_dispatch' || contains(github.event.comment.body, '/fix-issues')
runs-on: ubuntu-latest

steps:
  - name: Resolve target branch
    id: target
    uses: actions/github-script@v7
    with:
      script: |
        if (context.eventName === "workflow_dispatch") {
          core.setOutput("branch", core.getInput("target_branch"));
          core.setOutput("fix_command", core.getInput("fix_command"));
          core.setOutput("commit_message", core.getInput("commit_message"));
          return;
        }

        const issue = context.payload.issue;
        if (!issue.pull_request) {
          core.setFailed("/fix-issues comments are only supported on pull requests.");
          return;
        }

        const { data: pull } = await github.rest.pulls.get({
          owner: context.repo.owner,
          repo: context.repo.repo,
          pull_number: issue.number,
        });

        if (pull.head.repo.full_name !== `${context.repo.owner}/${context.repo.repo}`) {
          core.setFailed("Auto-fix can only push to branches in this repository.");
          return;
        }

        core.setOutput("branch", pull.head.ref);
        core.setOutput("fix_command", "python scripts/fix_issues.py");
        core.setOutput("commit_message", `chore: auto-fix PR #${issue.number} issue triage`);

  - name: Check out target branch
    uses: actions/checkout@v4
    with:
      ref: ${{ steps.target.outputs.branch }}
      token: ${{ secrets.GITHUB_TOKEN }}

  - name: Set up Python
    uses: actions/setup-python@v5
    with:
      python-version: "3.11"

  - name: Install project and build tools
    run: python -m pip install --upgrade pip build && python -m pip install -e ".[dev]"

  - name: Apply fixes
    run: ${{ steps.target.outputs.fix_command }}

  - name: Run tests
    run: python -m pytest

  - name: Build package
    run: python -m build

  - name: Commit and push fixes
    shell: bash
    run: |
      if git diff --quiet && git diff --cached --quiet; then
        echo "No fixes produced."
        exit 0
      fi

      git config user.name "github-actions[bot]"
      git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
      git add -A
      git commit -m "${{ steps.target.outputs.commit_message }}"
      git push origin "HEAD:${{ steps.target.outputs.branch }}"

Action required

6. Auto-fix workflow injection 🐞 Bug ⛨ Security

The auto-fix workflow can be triggered by any issue commenter via '/fix-issues' (no permission
check) and interpolates workflow_dispatch inputs directly into bash (fix_command/commit_message),
enabling unwanted runs/pushes and potential command injection in the runner context.
Agent Prompt
### Issue description
The workflow is triggerable by any commenter (`issue_comment`) and performs a checkout + test/build + push with `contents: write`. Separately, `workflow_dispatch` inputs are interpolated directly into bash (`fix_command` and `commit_message`), which can be interpreted as shell syntax.

### Issue Context
- `if:` gate only checks `contains(comment.body, '/fix-issues')`.
- `git commit -m "${{ steps.target.outputs.commit_message }}"` inserts an arbitrary string into the script body.

### Fix Focus Areas
- .github/workflows/auto-fix-issues.yml[18-29]
- .github/workflows/auto-fix-issues.yml[79-81]
- .github/workflows/auto-fix-issues.yml[88-100]

### Suggested fix
- Require trusted actors for comment trigger (e.g., `author_association` in OWNER/MEMBER/COLLABORATOR, or check PR author).
- Harden interpolation:
  - Pass inputs through `env:` and reference as `"$COMMIT_MESSAGE"` so bash treats the content as data.
  - Consider removing arbitrary `fix_command` input entirely (or restrict to a fixed allowlist).
- Optionally tighten the job `if:` to avoid evaluating comment fields on non-issue_comment events (wrap with an explicit event_name check).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

