A TypeScript-based AI agent framework that integrates with MCP (Model Context Protocol) servers and LangChain for intelligent task execution.
- LangGraph MCP Agent (`src/agents/langgraph-mcp-agent.ts`) - Main agent that handles complex task orchestration
- HTTP MCP Client (`src/mcp/http-mcp-client.ts`) - Communicates with Python MCP servers via HTTP
- Rate-Limited LLM (`src/llm/rate-limited-llm.ts`) - Inference.net LLM with rate limiting
- Session Manager (`src/core/session-manager.ts`) - Manages conversation context and history
- ✅ Simple Query Input - Just provide the initial task, no complex instructions
- ✅ LangGraph Workflow - Automatic tool selection and orchestration
- ✅ MCP Integration - 39 tools available from Python servers
- ✅ Session Management - Conversation continuity and context
- ✅ Rate Limiting - Handles API limits gracefully
- ✅ TypeScript - Full type safety and modern development experience
- Node.js 18+
- Python 3.11+
- Inference.net API key
1. Clone and install dependencies:

   ```bash
   git clone <repository>
   cd ai-showmaker
   npm install
   ```

2. Set up Python environment:

   ```bash
   python -m venv venv
   venv\Scripts\activate  # Windows
   pip install -r requirements.txt
   ```

3. Configure environment:

   ```bash
   cp env.example .env
   # Edit .env with your INFERENCE_NET_KEY
   ```

4. Start the MCP bridge:

   ```bash
   python full_mcp_bridge.py
   ```

5. Run the agent:

   ```bash
   npx ts-node tests/integration/test_langgraph_mcp_agent.ts
   ```
The core agent follows the LangGraph workflow pattern:
- Input: Simple task query (e.g., "Solve LeetCode problem 1: Two Sum")
- Tool Discovery: Automatically discovers 39 available MCP tools
- LLM Decision: LLM decides which tools to use and when
- Tool Execution: Executes tools via HTTP MCP bridge
- Response: Provides natural language response with results
```typescript
const agent = new LangGraphMCPAgent(mcpClient, llm, sessionManager);

// Simple task - LangGraph handles the rest
const result = await agent.executeComplexTask(
  "Help me solve a math problem: What is 15 * 23?",
  sessionId
);
```

The agent has access to 39 tools across 5 MCP servers:
- Calculation Server: Math operations, variable management
- Development Server: File operations, code execution
- Web Search Server: Web search capabilities
- Remote Server: Remote execution and monitoring
- Monitoring Server: System monitoring and logging
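These tools are reached through the HTTP bridge. The sketch below shows roughly how a tool call might be shaped on the wire; the `/tools/call` path, request fields, and response shape are assumptions for illustration, not the bridge's actual API (see `src/mcp/http-mcp-client.ts` for the real client).

```typescript
// Hypothetical sketch of a tool call over the HTTP MCP bridge.
// Endpoint path and payload fields are illustrative assumptions.

interface ToolCallRequest {
  server: string;                     // e.g. "calculation"
  tool: string;                       // e.g. "calculate"
  arguments: Record<string, unknown>; // tool-specific parameters
}

interface ToolCallResponse {
  success: boolean;
  result?: unknown;
  error?: string;
}

// Serialize a request body; kept separate so it is easy to test.
function buildToolCallBody(req: ToolCallRequest): string {
  return JSON.stringify(req);
}

// POST the call to the bridge (Node 18+ ships a global fetch).
async function callBridgeTool(
  baseUrl: string,
  req: ToolCallRequest
): Promise<ToolCallResponse> {
  const res = await fetch(`${baseUrl}/tools/call`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildToolCallBody(req),
  });
  if (!res.ok) {
    return { success: false, error: `HTTP ${res.status}` };
  }
  return (await res.json()) as ToolCallResponse;
}
```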
TypeScript tests:

- `test_langgraph_mcp_agent.ts` - Main agent functionality
- `test_agent_demo.ts` - Basic agent capabilities
- `test_mock_llm.ts` - Mock LLM testing
- `test_inference_net_direct.ts` - Direct LLM testing
Python tests:

- `test_all_servers.py` - All MCP servers
- `test_bridge_simple.py` - Basic bridge functionality
- `test_calculation_direct.py` - Calculation server
```
src/
├── agents/
│   └── langgraph-mcp-agent.ts    # Main LangGraph agent
├── core/
│   ├── config.ts                 # Configuration management
│   └── session-manager.ts        # Session and context management
├── llm/
│   ├── inference-net-llm.ts      # Inference.net LLM integration
│   ├── mock-llm.ts               # Mock LLM for testing
│   └── rate-limited-llm.ts       # Rate-limited LLM wrapper
├── mcp/
│   └── http-mcp-client.ts        # HTTP MCP client
└── types/
    └── index.ts                  # TypeScript type definitions
tests/integration/                # Integration tests
mcp_servers/                      # Python MCP servers
full_mcp_bridge.py                # HTTP bridge to Python servers
```
1. Create a new MCP server in `mcp_servers/`
2. Register it in `full_mcp_bridge.py`
3. The agent automatically discovers new tools
1. Extend the base LLM class in `src/llm/`
2. Implement the required interface methods
3. Add rate limiting if needed
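A new integration could look roughly like the following sketch. `BaseLLM`, its `complete` method, and the wrapper name are illustrative assumptions, not the actual interface in `src/llm/`:

```typescript
// Sketch of an LLM wrapper with minimal rate limiting.
// BaseLLM and its method names are hypothetical stand-ins for the
// project's real base class.

abstract class BaseLLM {
  abstract complete(prompt: string): Promise<string>;
}

class SimpleRateLimitedLLM extends BaseLLM {
  private lastCall = 0;

  constructor(
    private inner: BaseLLM,
    private minIntervalMs: number
  ) {
    super();
  }

  async complete(prompt: string): Promise<string> {
    // Wait until at least minIntervalMs has passed since the last call.
    const wait = this.lastCall + this.minIntervalMs - Date.now();
    if (wait > 0) {
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
    this.lastCall = Date.now();
    return this.inner.complete(prompt);
  }
}
```

The same wrapping idea extends to token buckets or retry-with-backoff if the provider returns 429s.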
```bash
# Run all TypeScript tests
npm test

# Run specific test
npx ts-node tests/integration/test_langgraph_mcp_agent.ts

# Run Python MCP tests
python tests/integration/test_all_servers.py
```

- Simple Input - Just provide the task, let LangGraph handle the workflow
- MCP Integration - Leverage Model Context Protocol for tool connectivity
- TypeScript First - Full type safety and modern development
- Session Management - Maintain context across conversations
- Rate Limiting - Handle API limits gracefully
- ✅ Working: LangGraph MCP Agent with 39 tools
- ✅ Working: Session management and conversation continuity
- ✅ Working: Rate-limited LLM integration
- ✅ Working: HTTP MCP bridge to Python servers
- ✅ Working: TypeScript type safety and modern tooling
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
MIT License - see LICENSE file for details.
AI-Showmaker now features a revolutionary Enhanced Best-First Search (BFS) agent with failure-aware planning and rich context memory. This system learns from failures, adapts approaches automatically, and eliminates infinite loops while handling complex multi-component tasks.
- Failure-Aware Planning: Detects known failure patterns and adapts plans automatically
- Rich Context Memory: Multi-layered memory system with evidence tracking and semantic indexing
- Validator Integration: Evidence-based validation with confidence scoring
- Adaptive Learning: Learns from failures and builds knowledge for future tasks
- Constraint Awareness: Automatically works within server limitations and permissions
Before: Agents would get stuck in infinite loops, repeating the same failed approaches.
Now: The system detects failures, adapts plans, and completes complex tasks efficiently.
Example:

- ❌ Old: 20+ iterations trying `systemctl` commands (admin blocked)
- ✅ New: 3 iterations, adapts to `python -m http.server` (user-level)
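Conceptually, this kind of adaptation can be modeled as a lookup over known failure patterns. The pattern table and function below are an illustrative sketch mirroring the example above, not the agent's real implementation:

```typescript
// Hypothetical failure-pattern table: each entry detects a risky action
// string and rewrites it into a constraint-safe alternative.

interface FailurePattern {
  match: RegExp;
  reason: string;
  adapt: (action: string) => string;
}

const KNOWN_PATTERNS: FailurePattern[] = [
  {
    match: /\bsystemctl\b/,
    reason: "Administrative command - systemctl operations not allowed",
    // Fall back to a user-level server, as in the example above.
    adapt: () => "python -m http.server 8000",
  },
  {
    match: /^\/(etc|usr|var)\//,
    reason: "Path traversal - writing to a system directory",
    // Redirect system-directory targets into the workspace.
    adapt: (action) => action.replace(/^\/(etc|usr|var)\//, "./workspace/"),
  },
];

function adaptCommand(action: string): { command: string; adapted: boolean } {
  for (const p of KNOWN_PATTERNS) {
    if (p.match.test(action)) {
      return { command: p.adapt(action), adapted: true };
    }
  }
  return { command: action, adapted: false };
}
```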
- Enhanced BFS Agent (`src/agents/enhanced-best-first-search-agent-with-memory-bank.ts`)
- Rich Memory Manager (`src/core/memory/rich-memory-manager.ts`)
- Validator Agent (`src/agents/validator-agent.ts`)
- File Registry (`src/core/memory/file-registry.ts`)
- Code Documentation (`src/core/memory/code-documentation.ts`)
- Loop Prevention (`src/core/memory/rich-loop-prevention.ts`)
Simple Tasks:

- ✅ "What is 2+2?" → Creates answer file, validates completion
- ✅ "Solve LeetCode 1234" → Generates working code with tests

Complex Tasks:

- ✅ "Develop a webapp on remote server" → Creates Flask app, adapts to constraints
- ✅ "Create real-time data analytics dashboard" → Builds 4-component system (ingestion, processing, visualization, API)
```
// System detects failure risk and adapts automatically
⚠️ High failure risk: Path traversal detected - trying to write to system directory
✅ Adapted plan: Using workspace directory instead of system directory

⚠️ High failure risk: Administrative command detected - systemctl operations not allowed
✅ Adapted plan: Using Python HTTP server instead of systemctl
```

- Short-Term Buffer: Real-time task context and execution history
- Long-Term Records: Persistent learning from successful patterns
- Knowledge Graph: Entity relationships and semantic connections
- Semantic Index: Embedding-based similarity search
- Evidence Tracking: File creation, code implementation, synthesis detection
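As an illustration of the short-term layer, a bounded evidence buffer might look like the following sketch; the type and class names are assumptions for exposition, not the real memory manager's API:

```typescript
// Illustrative rolling buffer for recent task evidence.
// Kinds loosely mirror the evidence-tracking categories above.

interface EvidenceRecord {
  kind: "file_created" | "code_implemented" | "synthesis";
  detail: string;
  iteration: number;
}

class ShortTermBuffer {
  private records: EvidenceRecord[] = [];

  constructor(private capacity: number) {}

  add(record: EvidenceRecord): void {
    this.records.push(record);
    // Evict the oldest entry once capacity is exceeded, keeping
    // only the most recent task context.
    if (this.records.length > this.capacity) {
      this.records.shift();
    }
  }

  recent(n: number): EvidenceRecord[] {
    return this.records.slice(-n);
  }
}
```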
Environment variables:

- `BFS_VALIDATOR_MODE` (default: `action`): `action` | `periodic` | `both`
- `BFS_VALUE_TRIGGER` (default: `0.8`): Value threshold to schedule synthesize/validate
- `BFS_VALIDATION_COOLDOWN` (default: `2`): Iterations between validations
- `BFS_VALIDATOR_CONF` (default: `0.7`): Minimum validator confidence to accept completion
- `BFS_HINT_BOOST` (default: `0.35`): Score boost for plans matching validator suggestions
- `BFS_SPECIAL_HINT_BOOST` (default: `0.1`): Extra boost for `implement_code`/`test_example`
- `BFS_EXPLAIN_MAX` (default: `400`): Max chars for inline explanations
- `BFS_EXPLAIN_LOG_MAX` (default: `0`): Max chars printed for `[BFS] explain:` (0 = no truncation)
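Reading these knobs with their documented defaults might look like the following sketch (assumes a Node.js runtime; the config object name is illustrative):

```typescript
// Parse a numeric env var, falling back to the documented default
// when the variable is unset or not a finite number.
function envNumber(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined ? NaN : Number(raw);
  return Number.isFinite(parsed) ? parsed : fallback;
}

// Hypothetical config object collecting the BFS_* knobs above.
const bfsConfig = {
  validatorMode: process.env.BFS_VALIDATOR_MODE ?? "action",
  valueTrigger: envNumber("BFS_VALUE_TRIGGER", 0.8),
  validationCooldown: envNumber("BFS_VALIDATION_COOLDOWN", 2),
  validatorConf: envNumber("BFS_VALIDATOR_CONF", 0.7),
  hintBoost: envNumber("BFS_HINT_BOOST", 0.35),
  specialHintBoost: envNumber("BFS_SPECIAL_HINT_BOOST", 0.1),
  explainMax: envNumber("BFS_EXPLAIN_MAX", 400),
  explainLogMax: envNumber("BFS_EXPLAIN_LOG_MAX", 0),
};
```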
Logs to expect:

- `[BFS] act:` chosen action and tool
- `[BFS] explain:` inline, evidence-grounded step summary
- `[BFS] schedule:` injection of synthesize/validate (or `test_example` when tests are required)
- `[BFS] draft:` draft meta summary (code/lang, tests presence/cases, ops checks snippet)
- `[BFS][validator]` completion/confidence, rationale, and hints
Acceptance policies (validator)
- Code tasks: Require code + self-tests (JSON cases + short walkthrough). Real execution is optional.
- Dev/Ops tasks: Require operational commands and verification steps; the validator rejects high-level summaries without checks.
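The acceptance rule can be sketched as a small predicate over collected evidence; the field names and task-kind split below are illustrative assumptions, not the validator's real schema:

```typescript
// Hypothetical evidence summary a validator might inspect.
interface DraftEvidence {
  hasCode: boolean;      // an implementation exists
  hasTestCases: boolean; // JSON cases + short walkthrough
  hasOpsChecks: boolean; // operational commands with verification steps
}

// Accept completion only when confidence clears the threshold and the
// task-kind-specific evidence requirements are met.
function acceptsCompletion(
  taskKind: "code" | "devops",
  evidence: DraftEvidence,
  confidence: number,
  minConfidence = 0.7 // mirrors the BFS_VALIDATOR_CONF default
): boolean {
  if (confidence < minConfidence) return false;
  if (taskKind === "code") return evidence.hasCode && evidence.hasTestCases;
  // Dev/Ops: reject high-level summaries that lack concrete checks.
  return evidence.hasOpsChecks;
}
```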
Where it lives
- Main agent: `src/agents/best-first-search-agent.ts`
- Validator: `src/agents/validator-agent.ts`
- Prepare environment
  - Start the MCP HTTP bridge (Python):

    ```bash
    python full_mcp_bridge.py
    ```

  - Create a `.env` with at least:

    ```
    OPENAI_KEY=sk-...
    MCP_HTTP_BASE=http://127.0.0.1:3310/api/bridge
    BFS_VALIDATOR_MODE=action
    BFS_VALUE_TRIGGER=0.8
    BFS_VALIDATION_COOLDOWN=2
    BFS_VALIDATOR_CONF=0.7
    BFS_HINT_BOOST=0.35
    BFS_SPECIAL_HINT_BOOST=0.1
    BFS_EXPLAIN_LOG_MAX=0
    ```
- Install & run

  ```bash
  npm install
  npm run monitor:ui  # Start monitoring UI
  # Or run directly: npm run dev
  ```

- Try sample queries
  - Simple: `what is 2+2`
    - Expect: Creates answer file, validates completion in 1 iteration
  - Coding: `solve leetcode 1234`
    - Expect: Generates working code with tests, validates implementation
  - Complex: `develop a webapp on remote server amazon linux`
    - Expect: Adapts to constraints, creates Flask app, validates setup
  - Advanced: `Create a real-time data analytics dashboard that processes streaming data, performs statistical analysis, visualizes trends, and provides interactive insights`
    - Expect: Builds 4-component system (ingestion, processing, visualization, API)
- Observe the magic
  - `⚠️ High failure risk detected` → System identifies potential problems
  - `✅ Adapted plan` → System automatically fixes the approach
  - `[EnhancedBFS-Memory] ✅ GOAL STATE REACHED` → Task completed successfully
  - No more infinite loops! 🎉