This repository provides the setup to run an AI-Parrot agent as an Agent-to-Agent (A2A) server. The A2A protocol allows agents to seamlessly discover and communicate with each other over HTTP.
Follow these steps to set up the environment using uv:
- Install uv (if not already installed):

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Create a virtual environment:

  ```bash
  uv venv --python 3.11 .venv
  source .venv/bin/activate
  ```

- Install dependencies: run `uv sync` to install the required packages from `pyproject.toml`:

  ```bash
  uv sync
  ```
- Prepare the project structure: create the required configuration and environment folders:

  ```bash
  # Create the environment directory
  mkdir env
  # Initialize NavConfig project structure
  kardex create
  # Create empty environment file
  touch env/.env
  # Create a folder for templates
  mkdir templates/
  ```
Once the environment is ready, you can configure and run your agent. The setup primarily involves defining your agent, wrapping it in an A2A Server, and mounting it to an aiohttp web application via a main.py entrypoint.
Create an Agent instance with the required role, goal, language model, and tools.
Take the configured agent and wrap it inside the A2AServer class.
To bring it all together, define the aiohttp app, set up the A2A routes, and launch it.
Here is a summary example of what your `main.py` (or other startup script) should look like:

```python
from aiohttp import web
from parrot.bots import Agent
from parrot.a2a import A2AServer

# Create your agent as usual (import tools from another package)
agent = Agent(
    name="CustomerSupport",
    llm="anthropic:claude-sonnet-4-20250514",
    tools=[QueryCustomersTool(), CreateTicketTool()]
)

async def on_startup(app: web.Application) -> None:
    # agent.configure() is a coroutine; await is not valid at module
    # level, so run it inside the application's startup hook
    await agent.configure()

# Wrap the agent as an A2A service
a2a = A2AServer(agent)

# Mount it on your aiohttp app
app = web.Application()
app.on_startup.append(on_startup)
a2a.setup(app)
```

With this setup, the agent is now accessible at the following routes:
- `GET /.well-known/agent.json` (Discovery)
- `POST /a2a/message/send` (Send message)
- `POST /a2a/message/stream` (Streaming)
- `GET /a2a/tasks/{id}` (Get task)
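As a quick check, you can fetch the discovery document from another process. This is a minimal sketch using only the standard library; it assumes the server listens on `http://localhost:8080`, and the helper names are illustrative, not part of the AI-Parrot API:

```python
import json
import urllib.request

def agent_card_url(base_url: str) -> str:
    """Build the well-known discovery URL for a given server base URL."""
    return f"{base_url.rstrip('/')}/.well-known/agent.json"

def fetch_agent_card(base_url: str) -> dict:
    """Download and parse the agent's discovery document (Agent Card)."""
    with urllib.request.urlopen(agent_card_url(base_url)) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Assumes a server started as described above is already running
    card = fetch_agent_card("http://localhost:8080")
    print(card.get("name"))
```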
To run your A2A server in a robust, production-ready environment, it is highly recommended to use gunicorn combined with the provided gunicorn.conf.py configuration.
- Ensure your `main.py` defines the `aiohttp` application as a module-level variable (e.g., `app`) or as an application factory.
- Formulate the startup command using the `uv` environment. Make sure to activate the virtual environment first:

  ```bash
  source .venv/bin/activate
  gunicorn main:app -c gunicorn.conf.py
  ```
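If you ever need to reconstruct the configuration, a minimal `gunicorn.conf.py` along these lines matches the behavior described here. This is an illustrative sketch; the repository's pre-configured file is authoritative and its exact values may differ:

```python
# gunicorn.conf.py (illustrative sketch; the repository's pre-configured
# file is authoritative and may use different values)
import multiprocessing

bind = "0.0.0.0:8080"

# aiohttp's Gunicorn worker runs the application on an asyncio event loop
worker_class = "aiohttp.GunicornWebWorker"

# Scale workers with the available CPU cores
workers = multiprocessing.cpu_count() * 2 + 1

# Allow long-running LLM calls to finish before the worker is killed
timeout = 120
```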
The `gunicorn.conf.py` is pre-configured to use the `aiohttp.GunicornWebWorker` worker class, which ensures proper async request handling, and it automatically scales the number of workers to your available CPU cores.
The /.well-known/agent.json endpoint exposes the agent's identity, capabilities, descriptions, and supported message formats. Other agents can query this endpoint to automatically discover how to interact with your agent safely.
When a complex operation starts, the server may yield a task ID. You can poll the status of this operation by performing a GET request to /a2a/tasks/{id}. This provides the current state, progress, and results of the background execution without blocking the main workflow.
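A simple polling loop can be sketched as follows. The terminal state names and the `status.state` field in the response are assumptions based on common A2A task schemas, not confirmed details of this server; verify them against your actual responses:

```python
import json
import time
import urllib.request

# Assumed terminal state names; verify against your server's task schema
TERMINAL_STATES = {"completed", "failed", "canceled"}

def is_terminal(state: str) -> bool:
    """True once a task has finished and polling can stop."""
    return state in TERMINAL_STATES

def poll_task(base_url: str, task_id: str,
              interval: float = 1.0, max_polls: int = 30) -> dict:
    """Poll GET /a2a/tasks/{id} until the task reaches a terminal state."""
    url = f"{base_url.rstrip('/')}/a2a/tasks/{task_id}"
    for _ in range(max_polls):
        with urllib.request.urlopen(url) as resp:
            task = json.loads(resp.read())
        # Field layout is an assumption; adapt to the real response shape
        if is_terminal(task.get("status", {}).get("state", "")):
            return task
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish after {max_polls} polls")
```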
You can communicate with the agent in two modes:

- Synchronous: send a payload to `POST /a2a/message/send` to receive a complete response block.
- Streaming: for real-time updates and piece-by-piece output (useful for UIs or instant agent feedback), use `POST /a2a/message/stream`.
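A synchronous call can be sketched like this. The payload schema (a message object with a role and text parts) is an assumption modeled on common A2A message shapes, so verify it against your agent's discovery document before relying on it:

```python
import json
import urllib.request

def build_message(text: str, role: str = "user") -> dict:
    """Build a minimal A2A message payload (schema assumed; verify it)."""
    return {"message": {"role": role, "parts": [{"kind": "text", "text": text}]}}

def send_message(base_url: str, text: str) -> dict:
    """POST a message to /a2a/message/send and return the parsed response."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/a2a/message/send",
        data=json.dumps(build_message(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```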
Speaking the A2A protocol does not mean every agent should have unrestricted access to yours. Make sure you integrate proper security controls:
- Authentication: Bind the endpoints with middleware (e.g., JWT, OAuth, or API Keys) so only trusted callers can post messages or retrieve sensitive tasks.
- Network Boundaries: If deploying internally, consider isolating A2A servers within a VPC to limit exposure.
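As one concrete option, an API-key check can be attached as aiohttp middleware when building the application. This is a sketch, not part of AI-Parrot: the `X-API-Key` header name and the choice to leave the discovery route public are assumptions to adapt to your deployment:

```python
import hmac

def key_is_valid(provided, expected: str) -> bool:
    """Constant-time comparison of the presented API key with the expected one."""
    return provided is not None and hmac.compare_digest(provided, expected)

def api_key_middleware(expected_key: str):
    """Build an aiohttp middleware rejecting requests without a valid X-API-Key."""
    from aiohttp import web  # deferred so key_is_valid stays dependency-free

    @web.middleware
    async def middleware(request, handler):
        # Keep discovery public so other agents can still read the Agent Card
        if request.path == "/.well-known/agent.json":
            return await handler(request)
        if not key_is_valid(request.headers.get("X-API-Key"), expected_key):
            raise web.HTTPUnauthorized(reason="Invalid or missing API key")
        return await handler(request)

    return middleware

# Usage sketch:
#   app = web.Application(middlewares=[api_key_middleware("my-secret")])
#   a2a.setup(app)
```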
- Issues: GitHub Tracker
- Discussion: GitHub Discussions
- Contribution: Pull requests are welcome! Please read `CONTRIBUTING.md`.
Built with ❤️ by the AI-Parrot Team