Multi-model consensus coding workflow — Orchestrated AI deliberation for generating high-quality code through structured debate.
Consensus Coder orchestrates multiple AI coding tools to debate and reach consensus on complex coding challenges. By combining different tools' strengths — a context engine (like Auggie) understands your codebase and generates solutions, while reviewers (like Gemini and Codex) independently evaluate those solutions — you get robust, well-reasoned decisions grounded in your actual code.
The system is tool-agnostic: you choose which tools participate (Auggie, Claude Code, Pi, OpenCode, Codex, Gemini, Llama, etc.), and each tool brings its own context understanding and reasoning style to the debate.
- 🤖 Multi-Tool Orchestration — Coordinates multiple AI coding tools in structured debate
- 🎯 Consensus-Driven — Iterates until tools agree (or escalates for human review)
- 💾 State Persistence — Full debate state saved to disk for recovery and audit
- 🔄 Automatic Retry — Handles transient failures and rate limits gracefully
- 📝 Consensus Spec Output — Generates detailed markdown specification ready for any coding agent
- 📊 Comprehensive Logging — Track every decision point and voting outcome
- ⚙️ Highly Configurable — Tune debate rounds, voting thresholds, tool selection, voting weights
- 🎁 Agent-Agnostic — Works with Claude Code, Auggie, Pi, Codex, or any coding agent
- AGENT_GUIDE.md — For AI agents: how to use this tool effectively, when to use it, common patterns
- README.md (this file) — Installation, API reference, configuration, examples
- SKILL.md — Skill description and quick reference
Consensus Coder orchestrates AI coding tools, so you'll need to have your chosen tools installed and working first.
Required:
- Node.js 18+ (to run consensus-coder itself)
- npm (to install dependencies)
Tool-Specific Prerequisites:
| Tool | How to Set Up | Used For |
|---|---|---|
| Auggie | `npm install -g @augmentcode/cli` | Context engine (codebase analysis) |
| Claude Code | Install via the `claude` code CLI | Context or reviewer |
| Gemini | `GOOGLE_AI_STUDIO_API_KEY` env var | Reviewer (analysis) |
| Codex/OpenAI | `OPENAI_API_KEY` env var | Reviewer (code-specific analysis) |
| Pi | `pi` CLI installed | Context or reviewer |
| OpenCode | Your custom setup | Any role |
Minimal Setup (Recommended for Beginners):
# Install Node.js
# Then install Auggie (context engine)
npm install -g @augmentcode/cli
# Set API keys for reviewers
export GOOGLE_AI_STUDIO_API_KEY=your_key_here
export OPENAI_API_KEY=your_key_here
# Now consensus-coder can use Auggie + Gemini + Codex

Check Prerequisites:
node --version # Should be 18+
npm --version # Should be 8+ (bundled with Node 18)
auggie --version # If using Auggie
which pi # If using Pi
echo $OPENAI_API_KEY # If using OpenAI/Codex

If any tools are missing, install them before running consensus-coder.
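The checks above can be automated with a small shell loop. This is a hypothetical helper script, not part of consensus-coder; the tool list is an example and should match your chosen setup.

```shell
#!/bin/sh
# check_tools.sh — report which prerequisite CLIs are on PATH.
# The tool list below is an example; adjust it for your setup.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
    fi
  done
}

check_tools node npm auggie pi
```

Run it before starting a debate; any `missing:` line means that tool must be installed first.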
Choose your setup:
Use consensus-coder as a standalone CLI tool (no Clawdbot required):
Clone & Use Locally
# Clone the repo
git clone https://github.com/bsharpe/consensus-coder.git
cd consensus-coder
# Install dependencies
npm install
# Build the project
npm run build
# Start a consensus debate
npm start -- --problem "Design a rate limiter"
# Generate spec when done
npm start -- --spec <debateId> --output my-spec.md
# Hand to any implementation agent
claude exec --file my-spec.md
auggie --instruction-file my-spec.md
pi --prompt "$(cat my-spec.md)"

Add consensus-coder as a Clawdbot skill:
# In your Clawdbot workspace
git clone https://github.com/bsharpe/consensus-coder.git skills/consensus-coder
cd skills/consensus-coder
npm install
npm run build

Then use from Clawdbot:
# Register the skill
clawdbot skill install ./skills/consensus-coder
# Use it
clawdbot skill run consensus-coder --problem "Design X"
# Or from another Clawdbot agent
import { ConsensusCoder } from './skills/consensus-coder/dist/index.js';
const coder = new ConsensusCoder();

Verify installation:
clawdbot skill list | grep consensus-coder

git clone https://github.com/bsharpe/consensus-coder.git
cd consensus-coder
npm install
npm run build

Start a consensus debate and generate a spec, all from the command line:
# Start a debate (interactive)
npm start -- --problem "Design an efficient algorithm to merge K sorted linked lists"
# Wait for consensus... (shown in terminal)
# Once converged, get the spec
npm start -- --spec <debateId> --output merge-spec.md
# Now use with any agent
claude exec --file merge-spec.md

Use as a library in your code:
import { ConsensusCoder } from '@clawdbot/consensus-coder-skill';
// Initialize
const coder = new ConsensusCoder({
workspace: './debate-workspace',
debug: true, // See all votes and reasoning
});
// Start a debate
const result = await coder.startConsensus({
problem: 'Design an efficient algorithm to merge K sorted linked lists',
context: {
constraints: 'Time: O(n log k), Space: O(1)',
language: 'TypeScript',
},
});
console.log('Debate started:', result.debateId);
// Poll for completion
let status = await coder.getDebateStatus(result.debateId);
while (status.status !== 'converged' && status.status !== 'escalated') {
console.log(`Round ${status.iteration}: uncertainty=${status.uncertaintyLevel?.toFixed(2)}`);
await new Promise(r => setTimeout(r, 5000));
status = await coder.getDebateStatus(result.debateId);
}
// Get consensus result
const finalResult = await coder.getConsensusResult(result.debateId);
console.log('✅ Winner:', finalResult.winningApproach);
console.log('📊 Confidence:', (finalResult.confidence * 100).toFixed(0) + '%');
// Generate markdown spec for implementation
const spec = await coder.getConsensusSpec(result.debateId);
console.log('\n📝 Consensus Spec:\n', spec);
// Save for any agent to use
import fs from 'fs';
fs.writeFileSync('consensus-spec.md', spec);
console.log('\n✨ Spec saved to consensus-spec.md');
console.log('Ready to implement with: claude exec --file consensus-spec.md');

# 1. Clone repo
git clone https://github.com/bsharpe/consensus-coder.git
cd consensus-coder
npm install && npm run build
# 2. Start a debate
npm start -- --problem "Design a cache with O(1) get/put" --debug
# Terminal shows:
# Round 1: Opus proposes 3 approaches
# Gemini votes...
# Codex votes...
# [continues until consensus]
# 3. Generate spec when done
npm start -- --spec <debateId> --output cache-spec.md
# 4. Implement with any agent
claude exec --file cache-spec.md
# or
auggie --instruction-file cache-spec.md

# 1. Install as Clawdbot skill
cd ~/my-clawdbot-workspace
git clone https://github.com/bsharpe/consensus-coder.git skills/consensus-coder
npm install -C skills/consensus-coder
npm run build -C skills/consensus-coder
# 2. Use from Clawdbot
clawdbot skill run consensus-coder \
--problem "Design a rate limiter" \
--wait \
--spec my-rate-limiter-spec.md
# 3. Clawdbot generates the spec, saves to file
# 4. Your other tools consume it
cat my-rate-limiter-spec.md | \
claude exec --

// In your own Node.js project
import { ConsensusCoder } from '@clawdbot/consensus-coder-skill';
const coder = new ConsensusCoder();
const result = await coder.startConsensus({
problem: 'Your problem here',
});
// ... wait for consensus ...
const spec = await coder.getConsensusSpec(result.debateId);
// Use spec however you want

| Component | Purpose |
|---|---|
| ConsensusOrchestrator | Manages the debate workflow (rounds, voting, convergence) |
| SynthesisEngine | Aggregates model votes and scores approaches |
| StateStore | Persists debate state to disk for recovery |
| RetryOrchestrator | Handles transient failures and rate limits |
| SpecGenerator | Converts consensus results to markdown spec documents |
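As an illustration of what the SynthesisEngine does, here is a minimal sketch of weighted vote aggregation. The types and function are hypothetical; the real component's API may differ.

```typescript
// Hypothetical sketch of weighted vote scoring (not the actual
// SynthesisEngine API): each tool's confidence is multiplied by its
// configured voting weight, and scores are summed per approach.
type Vote = { tool: string; approach: string; confidence: number };

function scoreApproaches(
  votes: Vote[],
  weights: Record<string, number>,
): Record<string, number> {
  const scores: Record<string, number> = {};
  for (const v of votes) {
    const weight = weights[v.tool] ?? 1.0; // unknown tools default to weight 1.0
    scores[v.approach] = (scores[v.approach] ?? 0) + weight * v.confidence;
  }
  return scores;
}

const scores = scoreApproaches(
  [
    { tool: 'auggie', approach: 'A', confidence: 0.9 },
    { tool: 'gemini', approach: 'B', confidence: 0.8 },
    { tool: 'codex', approach: 'A', confidence: 0.7 },
  ],
  { auggie: 1.5, gemini: 1.0, codex: 1.0 },
);
console.log(scores); // A ≈ 2.05 (1.5×0.9 + 1.0×0.7), B = 0.8 — approach A wins
```

The key idea is that the context engine's vote (here `auggie` at 1.5) counts for more because it has seen your codebase.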
graph TD
A["📥 Problem Intake"] --> B["🔧 Context Engine<br/>(Configurable: Auggie, Claude Code, Pi, etc.)"]
B --> C["📋 3 Proposed Approaches<br/>(Grounded in Your Codebase)"]
C --> D{{"🔄 Round Loop<br/>Max 5 Iterations"}}
D --> E["🔍 Reviewer #1<br/>(Configurable Tool)<br/>Analyzes & Votes"]
E --> F["🔍 Reviewer #2<br/>(Configurable Tool)<br/>Analyzes & Votes"]
F --> G["📊 Synthesis Engine<br/>Apply Voting Weights<br/>Score Approaches"]
G --> H{{"Uncertainty<br/>Still Above Threshold?"}}
H -->|Yes| D
H -->|No| I{{"Consensus<br/>Reached?"}}
I -->|Yes| J["✅ Converged"]
I -->|No| K["⚠️ Escalate to Human"]
K --> M["👤 Human Review<br/>& Decision"]
M --> J
J --> L["📝 Generate Consensus Spec<br/>Markdown Document"]
L --> N["📄 Spec Output<br/>Problem + All Approaches<br/>+ Winner + Acceptance Criteria"]
N --> O{{"Choose Implementation Agent"}}
O -->|Claude Code| P["claude exec --file spec.md"]
O -->|Auggie| Q["auggie --instruction-file spec.md"]
O -->|Pi Agent| R["pi --prompt spec.md"]
O -->|Any Other| S["Feed to your agent"]
P --> T["🔨 Implementation"]
Q --> T
R --> T
S --> T
T --> U["✨ Done"]
Consensus Coder is tool-agnostic: you choose which AI coding tools participate in the debate. The system separates roles:
- **Context Engine** — one tool that understands your codebase and generates 3 solution proposals
  - Default: `auggie` (has built-in codebase analysis)
  - Also available: `claude-code`, `pi`, `opencode`, `codex`, `gemini`, `llama`
- **Reviewers** — one or more tools that independently analyze and vote on the proposals
  - Default: `gemini`, `codex`
  - Can be any tool: `auggie`, `opencode`, `pi`, `claude-code`, etc.
- **Voting Weights** — configure how much each tool's vote counts
  - Context engine gets extra weight (it knows your code)
  - Customize each reviewer's influence
In config file (consensus-coder.config.json):
{
"tools": {
"preferredContextEngine": "auggie",
"reviewers": ["gemini", "codex"],
"votingWeights": {
"auggie": 1.5,
"gemini": 1.0,
"codex": 1.0
}
}
}

Via CLI flags:
npm start -- --problem "Design X" \
--use-adapters \
--context-engine auggie \
--reviewers gemini,codex,opencode \
--weights "auggie:2.0,gemini:1.0,codex:0.8"

Programmatically:
const result = await coder.startConsensus({
problem: 'Your problem here',
config: {
useToolAdapters: true,
tools: {
preferredContextEngine: 'auggie',
reviewers: ['gemini', 'codex'],
votingWeights: { auggie: 1.5, gemini: 1.0, codex: 1.0 }
}
}
});

| Tool | Best For | Context Engine | Reviewer |
|---|---|---|---|
| Auggie | Deep codebase analysis | ✅ (excellent) | ✅ (good) |
| Claude Code | General reasoning | ✅ (good) | ✅ (excellent) |
| Gemini | Fast evaluation | ❌ | ✅ (very good) |
| Codex | Code-specific reasoning | ✅ (good) | ✅ (very good) |
| Pi | Architectural thinking | ✅ (good) | ✅ (good) |
| OpenCode | Custom workflows | ✅ (configurable) | ✅ (configurable) |
| Llama | Self-hosted option | ✅ (configurable) | ✅ (configurable) |
Use Claude Code for context analysis, Auggie + Gemini for review:
npm start -- --problem "Add caching layer" \
--use-adapters \
--context-engine claude-code \
--reviewers auggie,gemini \
--weights "claude-code:1.5,auggie:1.2,gemini:1.0"

This workflow:
- Claude Code analyzes your codebase, proposes 3 caching strategies
- Auggie reviews proposals (knows how to implement in your stack)
- Gemini reviews proposals (checks for best practices)
- Weighted votes determine winner
- Consensus spec generated, ready for any implementation agent
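The `--weights` flag format shown above (comma-separated `tool:weight` pairs) maps directly onto the `votingWeights` config. A hypothetical parser sketch (not part of the CLI) shows the correspondence:

```typescript
// Hypothetical helper showing how a --weights string such as
// "claude-code:1.5,auggie:1.2,gemini:1.0" maps to a votingWeights record.
function parseWeights(flag: string): Record<string, number> {
  const weights: Record<string, number> = {};
  for (const pair of flag.split(',')) {
    const [tool, value] = pair.split(':');
    weights[tool.trim()] = Number(value);
  }
  return weights;
}

const w = parseWeights('claude-code:1.5,auggie:1.2,gemini:1.0');
console.log(w); // { 'claude-code': 1.5, auggie: 1.2, gemini: 1.0 }
```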
Main skill class. Manages the complete consensus workflow.
constructor(options?: ConsensusCoderOptions)

Options:
- `workspace` (string, optional) — working directory for skill state. Default: `~/.clawdbot/consensus-coder`
- `debug` (boolean, optional) — enable debug logging. Default: `false`
- `config` (ConsensusCoderConfig, optional) — full configuration override. Default: loaded from `consensus-coder.config.json`
startConsensus(request: ConsensusRequest): Promise<StartConsensusResponse>

Start a new consensus debate.
Parameters:
- `request.problem` — the coding problem to solve
- `request.context` — additional context (constraints, language, etc.)
- `request.maxRounds` — maximum debate rounds (default: 5)
- `request.convergenceThreshold` — certainty required to converge (default: 0.85)
Returns:
{
debateId: string; // Unique debate identifier
status: 'started' | 'error';
message: string;
timestamp: Date;
}

getDebateStatus(debateId: string): Promise<DebateStatusResponse>

Get the current status of an ongoing debate.
Returns:
{
debateId: string;
status: 'pending' | 'in_progress' | 'converged' | 'escalated' | 'not_found';
iteration: number;
lastUpdate: Date;
votingScore?: number;
uncertaintyLevel?: number;
winningApproach?: string;
estimatedTimeRemainingMs?: number;
}

getConsensusResult(debateId: string): Promise<ConsensusResult>

Retrieve the final consensus result.
Returns:
{
debateId: string;
winningApproach: string;
confidence: number;
iterations: number;
totalTimeMs: number;
approaches: ApproachDetail[];
votes: VoteHistory[];
escalated: boolean;
}

getConsensusSpec(debateId: string): Promise<string>

Generate a detailed markdown specification document from the consensus result. This spec can be handed to any coding agent for implementation.
Returns: Markdown string with:
- Problem statement & context
- All 3 proposed approaches with pros/cons
- Voting history & reasoning
- Winning approach with full explanation
- Acceptance criteria
- Implementation guidelines
- Example skeleton code (if applicable)
Example output:
# Consensus Spec: [Problem Title]
## Problem Statement
[Detailed problem description]
## Context & Constraints
- [Constraint 1]
- [Constraint 2]
## Proposed Approaches
### Approach A: [Name]
**Pros:**
- [Pro 1]
**Cons:**
- [Con 1]
### Approach B: [Name]
...
## Consensus Decision
**Winner:** Approach [X]
**Confidence:** 92%
**Why:** [Detailed explanation of voting and reasoning]
## Acceptance Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]
## Implementation Guidelines
[Specific technical guidance]
## Example Skeleton
\`\`\`typescript
// Your code here
\`\`\`

Usage: Pass this markdown to any coding agent:
# Using Claude Code
claude exec --file consensus-spec.md
# Using Auggie
auggie --print --instruction-file consensus-spec.md
# Using Pi Coding Agent
pi --prompt "Implement this spec: $(cat consensus-spec.md)"
# Or any other agent...

interface ConsensusCoderConfig {
// Debate behavior
maxRounds: number; // Max consensus rounds (default: 5)
convergenceThreshold: number; // Certainty to converge (default: 0.85)
votingWeights: Record<string, number>; // Model voting weights
// Timeouts
debateTimeoutMs: number; // Max total debate time
roundTimeoutMs: number; // Max time per round
// API configuration
models: {
diagnosis: string; // Model for initial diagnosis (Opus)
reviewer1: string; // First reviewer (Gemini)
reviewer2: string; // Second reviewer (Codex)
};
// Persistence
persistenceDir: string; // Where to save state
retentionDays: number; // Keep debates for N days
}

Consensus Coder requires API keys to access the three default models (Claude Opus, Gemini, Codex). You can use direct API keys or OpenRouter (which proxies multiple providers with a single API key).
Required API keys:
| Model | Provider | Required? | How to Get |
|---|---|---|---|
| Claude Opus | Anthropic | ✅ Yes | https://console.anthropic.com → API Keys |
| Gemini | Google AI | ✅ Yes | https://aistudio.google.com → Get API Key |
| Codex | OpenAI | ✅ Yes | https://platform.openai.com → API Keys |
Environment variables:
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_AI_API_KEY=AIza...
OPENAI_API_KEY=sk-...

OpenRouter is a proxy that handles multiple AI providers with a single API key. Useful for testing, cost management, or if you don't have all individual keys.
Setup:
- Create an account at https://openrouter.ai
- Get your API key from settings
- Set environment variable:
OPENROUTER_API_KEY=sk-or-...
Cost: OpenRouter charges per token + 10-20% markup on direct prices. Useful for trying out without setting up individual accounts.
Limitation: OpenRouter may have rate limits or model availability — verify it supports the models you need.
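A quick format check on the key can catch copy-paste mistakes before they surface as auth errors. This helper function is illustrative (the `sk-or-` prefix matches the examples above):

```shell
# Illustrative helper: verify an OpenRouter key has the expected prefix.
check_openrouter_key() {
  case "$1" in
    sk-or-*) echo "ok" ;;
    "")      echo "unset" ;;
    *)       echo "bad-format" ;;
  esac
}

check_openrouter_key "$OPENROUTER_API_KEY"
```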
Create a .env file (copy from .env.example):
# Clawdbot
CLAWDBOT_WORKSPACE=~/.clawdbot
# OPTION 1: Direct API Keys
# ANTHROPIC_API_KEY=sk-ant-...
# GOOGLE_AI_API_KEY=AIza...
# OPENAI_API_KEY=sk-...
# OPTION 2: OpenRouter (alternative)
OPENROUTER_API_KEY=sk-or-...
# Debate Configuration
CONSENSUS_MAX_ROUNDS=5
CONSENSUS_CONVERGENCE_THRESHOLD=0.85
CONSENSUS_DEBATE_TIMEOUT_MS=300000
CONSENSUS_ROUND_TIMEOUT_MS=60000
# Persistence
CONSENSUS_PERSISTENCE_DIR=~/.clawdbot/consensus-coder/debates
CONSENSUS_RETENTION_DAYS=30

Edit consensus-coder.config.json:
Using Direct API Keys:
{
"maxRounds": 5,
"convergenceThreshold": 0.85,
"votingWeights": {
"gemini": 1.0,
"codex": 1.0,
"opus": 1.5
},
"debateTimeoutMs": 300000,
"roundTimeoutMs": 60000,
"models": {
"diagnosis": "claude-opus",
"reviewer1": "gemini-2.5-pro",
"reviewer2": "gpt-4-turbo"
},
"persistenceDir": "~/.clawdbot/consensus-coder/debates",
"retentionDays": 30
}

Using OpenRouter (alternative):
{
"maxRounds": 5,
"convergenceThreshold": 0.85,
"votingWeights": {
"gemini": 1.0,
"codex": 1.0,
"opus": 1.5
},
"debateTimeoutMs": 300000,
"roundTimeoutMs": 60000,
"authProvider": "openrouter",
"models": {
"diagnosis": "anthropic/claude-opus-4-1",
"reviewer1": "google/gemini-2.5-pro",
"reviewer2": "openai/gpt-4-turbo"
},
"persistenceDir": "~/.clawdbot/consensus-coder/debates",
"retentionDays": 30
}

import { ConsensusCoder } from '@clawdbot/consensus-coder-skill';
const coder = new ConsensusCoder();
const result = await coder.startConsensus({
problem: 'Implement a binary search tree with in-order traversal',
context: {
language: 'TypeScript',
constraints: 'Must handle duplicate values',
performanceTarget: 'O(log n) average case',
},
});
// Wait for consensus
let status = await coder.getDebateStatus(result.debateId);
while (status.status !== 'converged' && status.status !== 'escalated') {
console.log(`Round ${status.iteration}, uncertainty: ${status.uncertaintyLevel}`);
await new Promise(r => setTimeout(r, 5000));
status = await coder.getDebateStatus(result.debateId);
}
// Get the consensus spec (markdown document)
const spec = await coder.getConsensusSpec(result.debateId);
// Save it
import fs from 'fs';
fs.writeFileSync('bst-spec.md', spec);
// Now hand it to any coding agent
console.log('Spec ready for implementation! Use any agent:');
console.log(' claude exec --file bst-spec.md');
console.log('  auggie --instruction-file bst-spec.md');

// (assumes ConsensusCoder and fs are imported as in the earlier examples)
const coder = new ConsensusCoder();
const existingCode = `
function sort(arr) {
for (let i = 0; i < arr.length; i++) {
for (let j = i; j < arr.length; j++) {
if (arr[j] < arr[i]) {
[arr[i], arr[j]] = [arr[j], arr[i]];
}
}
}
return arr;
}
`;
const result = await coder.startConsensus({
problem: 'Refactor this sorting function for better performance',
context: {
language: 'JavaScript',
existingCode,
constraints: 'In-place sort, stable sort preferred',
},
});
// Wait for consensus
let status = await coder.getDebateStatus(result.debateId);
while (status.status !== 'converged' && status.status !== 'escalated') {
await new Promise(r => setTimeout(r, 5000));
status = await coder.getDebateStatus(result.debateId);
}
// Get consensus spec and use with your preferred agent
const spec = await coder.getConsensusSpec(result.debateId);
fs.writeFileSync('sort-refactor-spec.md', spec);
// Share with team or pass to implementation
console.log(spec);

const coder = new ConsensusCoder({
debug: true, // See detailed voting process
});
const result = await coder.startConsensus({
problem: 'Design data structure for LRU cache with O(1) operations',
context: {
language: 'Python',
constraints: [
'Capacity: configurable',
'Get/Put: O(1) average case',
'Thread-safe implementation',
],
},
});
// Poll and monitor the debate
for (let i = 0; i < 50; i++) {
const status = await coder.getDebateStatus(result.debateId);
console.log(`Round ${status.iteration}: uncertainty=${status.uncertaintyLevel?.toFixed(2)}`);
if (status.status === 'converged') {
console.log('✅ Converged!');
break;
}
if (status.status === 'escalated') {
console.log('⚠️ Need human review');
break;
}
await new Promise(r => setTimeout(r, 3000));
}
// Generate final spec regardless of escalation
const finalResult = await coder.getConsensusResult(result.debateId);
console.log('\nWinning Approach:', finalResult.winningApproach);
console.log('Confidence:', (finalResult.confidence * 100).toFixed(1) + '%');
const spec = await coder.getConsensusSpec(result.debateId);
fs.writeFileSync('lru-cache-spec.md', spec);
// Spec is ready for any agent to implement

From within the project directory:
# Start consensus debate on a problem
npm start -- --problem "Design a rate limiter"
# With tool configuration
npm start -- --problem "Design X" \
--use-adapters \
--context-engine auggie \
--reviewers gemini,codex
# Check status of an ongoing debate
npm start -- --status <debateId>
# Get consensus result summary
npm start -- --result <debateId>
# Generate markdown spec from consensus
npm start -- --spec <debateId> --output rate-limiter-spec.md
# Run in debug mode (see all votes)
npm start -- --problem "..." --debug
# List all recent debates
npm start -- --list
# Show version
npm start -- --version
# Show help
npm start -- --help

Or invoke the CLI directly with ts-node:

npx ts-node src/cli/consensus-coder-cli.ts --problem "Design X"

Run the test suite:
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Generate coverage report
npm run test:coverage

Increase the weight of certain models in voting:
const config: ConsensusCoderConfig = {
// ...
votingWeights: {
'claude-opus': 1.5, // Trust Opus 50% more
'gemini': 1.0,
'gpt-4': 0.9, // Trust GPT-4 a bit less
},
};

Override the convergence criteria:
const orchestrator = new ConsensusOrchestrator({
// ...
convergenceThreshold: 0.95, // Require higher confidence
});

The skill includes automatic retry with exponential backoff:
const orchestrator = new ConsensusOrchestrator({
// ...
retryPolicy: {
maxAttempts: 3,
baseDelayMs: 1000,
backoffMultiplier: 2,
timeoutMs: 30000,
},
});

Extend the plan generation:
class CustomPlanGenerator extends ImplementationPlanGenerator {
protected generateTasks(approach: string): ImplementationTask[] {
// Custom task generation logic
return super.generateTasks(approach);
}
}

- Average debate time: 30-90 seconds (5 rounds, 2 reviewers)
- Memory usage: ~50MB for typical workloads
- Disk usage: ~1MB per debate (state + logs)
- Model API calls: ~15-20 per debate (1 diagnosis + 5 rounds × 2 reviewers + synthesis)
Symptom: 401 Unauthorized or Invalid API key
Solution:
- Verify API keys are set in `.env`:
  - `echo $ANTHROPIC_API_KEY` — should not be empty
  - `echo $OPENROUTER_API_KEY`
- Check key format:
  - Anthropic: `sk-ant-...` (not `sk-...`)
  - Google: `AIza...`
  - OpenAI: `sk-...`
  - OpenRouter: `sk-or-...`
- Ensure the `.env` file is in the working directory (or project root)
- Regenerate keys if they've been compromised
Symptom: Model not found or Model unavailable
Solution:
- Verify model names match provider conventions:
  - Anthropic: `claude-opus-4-1`, `claude-3-sonnet`
  - Google: `gemini-2.5-pro`, `gemini-1.5-pro`
  - OpenAI: `gpt-4-turbo`, `gpt-4o`
- If using OpenRouter, check model availability: https://openrouter.ai/docs/models
- For newer models, update model names in `consensus-coder.config.json`
Symptom: Debates hit max rounds without consensus
Solution:
- Check `convergenceThreshold` — it may be too strict
- Verify model weights are balanced
- Look at `debug` logs to see disagreement patterns
- Try increasing `maxRounds`
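One way to read the threshold: the debate converges once certainty (1 − uncertainty) reaches `convergenceThreshold`. The sketch below assumes that semantics; the orchestrator's actual check may differ.

```typescript
// Assumed convergence semantics (illustrative, not the actual
// ConsensusOrchestrator code): converge once certainty >= threshold.
function hasConverged(
  uncertaintyLevel: number,
  convergenceThreshold = 0.85,
): boolean {
  return 1 - uncertaintyLevel >= convergenceThreshold;
}

console.log(hasConverged(0.1));  // true  (certainty 0.9 >= 0.85)
console.log(hasConverged(0.25)); // false (certainty 0.75 < 0.85)
```

Under this reading, lowering `convergenceThreshold` from 0.95 to 0.85 lets debates with moderate residual uncertainty still converge instead of hitting `maxRounds`.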
Symptom: ETIMEDOUT or ROUND_TIMEOUT
Solution:
- Increase `debateTimeoutMs` and `roundTimeoutMs`
- Check network connectivity
- Monitor API rate limits (OpenRouter has strict limits)
- Enable debug mode to see which round times out
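When tuning retries around rate limits, it helps to know what the retry settings imply. Assuming the usual exponential-backoff formula, delay = `baseDelayMs` × `backoffMultiplier`^(attempt − 1); the RetryOrchestrator's internals may differ.

```typescript
// Assumed exponential-backoff formula (illustrative; mirrors the
// retryPolicy fields baseDelayMs / backoffMultiplier / maxAttempts).
function backoffDelayMs(
  attempt: number, // 1-based attempt number
  baseDelayMs = 1000,
  backoffMultiplier = 2,
): number {
  return baseDelayMs * Math.pow(backoffMultiplier, attempt - 1);
}

// Defaults give 1000ms, 2000ms, 4000ms across maxAttempts = 3.
console.log([1, 2, 3].map(n => backoffDelayMs(n))); // [ 1000, 2000, 4000 ]
```

So with the defaults, a rate-limited call waits at most ~7 seconds in total before giving up; raise `maxAttempts` or `baseDelayMs` if your provider needs longer cool-downs.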
Symptom: Error loading debate state
Solution:
- Clear `~/.clawdbot/consensus-coder/debates/`
- Restart the skill
- Check disk permissions
- Review logs in `debug` mode
Contributions are welcome! See CONTRIBUTING.md.
MIT — See LICENSE for details.
- GitHub Issues: Report bugs
- Documentation: Full docs
- Clawdbot Community: Discord
- ✨ Major Feature: Configurable tool system for context engines and reviewers
- ✅ Tool adapters: Support Auggie, Claude Code, Gemini, Codex, Pi, OpenCode, Llama
- ✅ Flexible architecture: Choose which tools participate in debate
- ✅ Voting weights: Configure influence of each tool
- ✅ CLI flags: `--use-adapters`, `--context-engine`, `--reviewers`, `--weights`
- ✅ Backward compatible: legacy opus/gemini/codex flow still works as default
- ✅ 50+ unit tests passing with full coverage
- ✅ New "Tool Configuration System" documentation
- ✨ Breaking Change: Output is now markdown spec documents instead of direct Auggie integration
- ✅ Agent-agnostic spec generation (works with Claude Code, Auggie, Pi, Codex, etc.)
- ✅ More portable for standalone use
- ✅ Updated README with new workflow
- ✅ Expanded examples showing spec-based usage
- ✅ Initial release
- ✅ Multi-model orchestration
- ✅ Consensus-driven debate
- ✅ State persistence
- ✅ Automatic retry handling
- ✅ CLI interface
- ✅ Comprehensive testing
Built with ❤️ by Claude Opus, Gemini, and Codex (consensus) and the community