A local orchestrator app that routes user requests through a team of specialized AI agents for collaborative software engineering tasks.
This system takes a user prompt and processes it through five specialized AI agents, each contributing their expertise:
- Analyst Agent - Breaks down requirements, identifies ambiguities, defines acceptance criteria
- Architect Agent - Designs system architecture, file structure, and data models
- Coder Agent - Creates implementation plans and code sketches (MVP-focused)
- QA Agent - Reviews all outputs for consistency and conflicts
- Lead Architect Agent - Makes final APPROVE/REJECT decision with reasoning
This is a pure reasoning + collaboration MVP:
- Agents analyze, plan, and review
- No file writing or code execution
- No tool calling
- Focus on multi-agent orchestration and decision-making
Features:

- Multi-agent pipeline with specialized roles
- Cross-agent review and validation
- Support for both OpenAI and Claude models
- Configurable model assignment per agent
- Structured approval/rejection workflow
- Detailed logging of pipeline execution
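As a rough sketch of how five such role-specialized agents could share one contract, consider the following. This is an illustrative assumption, not the project's actual `baseAgent.ts`; all type and member names here are hypothetical.

```typescript
// Illustrative sketch of a shared agent contract; names and shapes are
// assumptions, not the project's actual baseAgent.ts.
interface AgentContext {
  userPrompt: string;
  previousOutputs: Record<string, string>; // keyed by agent role
}

interface AgentResult {
  role: string;
  output: string;
}

abstract class BaseAgent {
  constructor(
    public readonly role: string,
    public readonly model: string,
  ) {}

  // Each specialized agent supplies its own role instructions.
  abstract buildInstructions(): string;

  async run(ctx: AgentContext): Promise<AgentResult> {
    // A real implementation would send this prompt to the configured model
    // via the OpenAI or Anthropic client; here we just echo it.
    const prompt = `${this.buildInstructions()}\n\nUser request: ${ctx.userPrompt}`;
    return { role: this.role, output: `[${this.role}] ${prompt}` };
  }
}
```

Each concrete agent (Analyst, Architect, Coder, QA, Lead Architect) would then only override `buildInstructions()`.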
Setup:

Install dependencies:

```bash
npm install
```

Copy `.env.example` to `.env`:

```bash
cp .env.example .env
```

Add your API keys to `.env`:

```
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
```
Edit `config/agents.config.json` to customize which model each agent uses:

```json
{
  "analyst": {
    "model": "claude-3-5-sonnet-20241022",
    "provider": "anthropic"
  },
  "architect": {
    "model": "gpt-4o",
    "provider": "openai"
  },
  "coder": {
    "model": "gpt-4o",
    "provider": "openai"
  },
  "qa": {
    "model": "claude-3-5-sonnet-20241022",
    "provider": "anthropic"
  },
  "leadArchitect": {
    "model": "gpt-4o",
    "provider": "openai"
  }
}
```

Usage:

```bash
npm start "your prompt here"
```

Examples:

```bash
npm start "build me a CRUD microservice for users"
npm start "review this authentication system design and suggest improvements"
npm start "plan the architecture for a real-time chat application"
```

On approval:

```
✅ APPROVED: MVP architecture + code outline validated.
[Reasoning from Lead Architect]

FINAL DELIVERABLE:
[Combined outputs from all agents]
```

On rejection:

```
❌ REJECTED: Issues detected that need resolution.
[Reasoning from Lead Architect]

REQUIRED FIXES:
1. [Specific fix needed]
2. [Specific fix needed]
...
```
Project structure:

```
/project-root
  /src
    /agents                # AI agent implementations
      baseAgent.ts         # Base class for all agents
      analystAgent.ts
      architectAgent.ts
      coderAgent.ts
      qaAgent.ts
      leadArchitectAgent.ts
    /orchestrator          # Pipeline and routing logic
      pipeline.ts          # Main execution pipeline
      router.ts            # Agent routing logic
    /models                # TypeScript type definitions
      agentTypes.ts
      messageTypes.ts
    /utils                 # Utility functions
      logger.ts            # Logging utility
      promptBuilder.ts     # Builds agent-specific prompts
      reviewCombiner.ts    # Combines and parses outputs
      env.ts               # Environment variable handling
    /clients               # API client implementations
      openaiClient.ts
      claudeClient.ts
  /config                  # Configuration files
    agents.config.json     # Agent model assignments
    model.config.json      # Model provider settings
  /tests                   # Test files
  index.ts                 # CLI entry point
  package.json
  tsconfig.json
  .env.example
  README.md
```
How it works:

- User Input: You provide a prompt via CLI
- Router: Determines agent execution order (fixed for MVP: Analyst → Architect → Coder → QA → Lead Architect)
- Agent Execution: Each agent runs sequentially, building on previous outputs
- Context Building: Each agent's output is added to the pipeline context
- Final Decision: Lead Architect reviews all outputs and makes APPROVE/REJECT decision
- Output: Formatted result is displayed to user
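The steps above can be sketched as a simple sequential loop. This is a minimal illustration of the pattern, not the actual `pipeline.ts`; the function signature and names are assumptions:

```typescript
// Hypothetical pipeline sketch: agents run sequentially, and each agent
// receives the accumulated outputs of every agent before it.
type AgentFn = (
  userPrompt: string,
  context: Record<string, string>,
) => Promise<string>;

async function runPipeline(
  userPrompt: string,
  agents: Array<{ role: string; run: AgentFn }>,
): Promise<Record<string, string>> {
  const context: Record<string, string> = {};
  for (const agent of agents) {
    // Each agent builds on previous outputs via the shared context.
    context[agent.role] = await agent.run(userPrompt, context);
  }
  return context;
}
```

The sequential `await` in the loop is what gives later agents (QA, Lead Architect) visibility into earlier outputs.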
Each agent receives:
- Original user prompt
- Relevant outputs from previous agents
- Role-specific instructions
The PromptBuilder class constructs appropriate prompts for each agent role.
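The core of that assembly might look like the following. This is an illustrative sketch of the idea, not the project's actual `promptBuilder.ts`; the function name and section markers are assumptions:

```typescript
// Hypothetical prompt assembly: role instructions + original user prompt +
// outputs from previous agents, joined into one prompt string.
function buildAgentPrompt(
  roleInstructions: string,
  userPrompt: string,
  previousOutputs: Record<string, string>,
): string {
  const history = Object.entries(previousOutputs)
    .map(([role, output]) => `## ${role} output\n${output}`)
    .join("\n\n");
  return [roleInstructions, `## User request\n${userPrompt}`, history]
    .filter(Boolean) // drop the history section when no agent has run yet
    .join("\n\n");
}
```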
The ReviewCombiner parses the Lead Architect's output to extract:
- Decision status (APPROVED/REJECTED)
- Reasoning
- Final deliverable (if approved)
- Revision instructions (if rejected)
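A parser along these lines could extract that structure. This is a hypothetical sketch assuming the Lead Architect's output begins with `APPROVED` or `REJECTED` and lists fixes as numbered lines; the real `reviewCombiner.ts` format may differ:

```typescript
// Hypothetical decision parser; assumes the conventions shown in the
// example outputs (APPROVED/REJECTED prefix, numbered REQUIRED FIXES).
interface Decision {
  status: "APPROVED" | "REJECTED";
  reasoning: string;
  fixes: string[]; // populated only when rejected
}

function parseDecision(output: string): Decision {
  const status = output.trimStart().startsWith("APPROVED")
    ? "APPROVED"
    : "REJECTED";
  const fixes =
    status === "REJECTED"
      ? output
          .split("\n")
          .filter((line) => /^\d+\./.test(line.trim())) // "1. ..." lines
          .map((line) => line.trim().replace(/^\d+\.\s*/, ""))
      : [];
  // For simplicity the full output stands in for the reasoning text here.
  return { status, reasoning: output, fixes };
}
```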
Development:

```bash
npm run dev
npm run build
npm test
```

Planned for future versions:

- Persistent project memory
- File-writing capabilities
- Multi-run refinement cycles
- Agent "threads" per project
- Vector memory for context
- Web UI
- Tool calling support
- Code execution sandbox
- Integration with version control
If you see an error about missing environment variables:
- Ensure the `.env` file exists (copy from `.env.example`)
- Add valid API keys for both OpenAI and Claude
- Restart the application
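The `env.ts` utility presumably performs a startup check along these lines. This is a hypothetical sketch, not the project's actual implementation; only the variable names come from `.env.example`:

```typescript
// Hypothetical startup guard: fail fast with a clear message when any
// required environment variable is missing.
function assertEnv(required: string[]): void {
  const missing = required.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// Example call at startup:
// assertEnv(["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]);
```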
If you get model errors:
- Check `config/agents.config.json` for valid model names
- Ensure you have access to the specified models
- Update model names to match your API access
If you encounter TypeScript import errors:
```bash
npm install
npm run build
```

License: MIT
This is an MVP. Contributions welcome for:
- Additional agent roles
- Enhanced prompt engineering
- Better error handling
- Test coverage
- Documentation improvements