phenobarbital/parrot-a2a-server
AI-Parrot A2A Server

This repository provides the setup to run an AI-Parrot agent as an Agent-to-Agent (A2A) server. The A2A protocol allows agents to seamlessly discover and communicate with each other over HTTP.

Installation & Environment Setup

Follow these steps to set up the environment using uv:

  1. Install uv (if not already installed):

    curl -LsSf https://astral.sh/uv/install.sh | sh
  2. Create a virtual environment:

    uv venv --python 3.11 .venv
    source .venv/bin/activate
  3. Install dependencies: Run uv sync to install the required packages from pyproject.toml.

    uv sync
  4. Prepare the project structure: Create the required configuration and environment folders:

    # Create the environment directory
    mkdir env
    
    # Initialize NavConfig project structure
    kardex create
    
    # Create empty environment file
    touch env/.env
    
    # Create a folder for templates
    mkdir templates/

Creating Your A2A Server

Once the environment is ready, you can configure and run your agent. The setup primarily involves defining your agent, wrapping it in an A2A Server, and mounting it to an aiohttp web application via a main.py entrypoint.

1. Define an Agent

Create an Agent instance with the required role, goal, language model, and tools.

2. Define the A2A Server

Take the configured agent and wrap it inside the A2AServer class.

3. Create main.py

To bring it all together, define the aiohttp app, set up the A2A routes, and launch it.

Example

Here is a summary example of what your main.py (or other startup script) should look like:

from aiohttp import web
from parrot.bots import Agent
from parrot.a2a import A2AServer

# Create your agent as usual (import the tools from another package)
agent = Agent(
    name="CustomerSupport",
    llm="anthropic:claude-sonnet-4-20250514",
    tools=[QueryCustomersTool(), CreateTicketTool()]
)

async def on_startup(app: web.Application):
    # configure() is a coroutine, so run it once the event loop is up
    await agent.configure()

# Wrap it as an A2A service
a2a = A2AServer(agent)

# Mount on your aiohttp app
app = web.Application()
app.on_startup.append(on_startup)
a2a.setup(app)

With this setup, the Agent is now accessible at the following routes:

  • GET /.well-known/agent.json (Discovery)
  • POST /a2a/message/send (Send message)
  • POST /a2a/message/stream (Streaming)
  • GET /a2a/tasks/{id} (Get task)

Running the Server with Gunicorn

To run your A2A server in a robust, production-ready environment, it is highly recommended to use gunicorn combined with the provided gunicorn.conf.py configuration.

  1. Ensure your main.py defines the aiohttp application as a module-level variable (e.g., app) or as an application factory.
  2. Formulate the startup command using the uv environment. Make sure to activate the virtual environment first:
    source .venv/bin/activate
    gunicorn main:app -c gunicorn.conf.py

The gunicorn.conf.py is pre-configured to use the aiohttp.GunicornWebWorker worker class, which ensures proper async request handling, and it automatically scales the number of workers to your available CPU cores.
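For orientation, a configuration along these lines reproduces the behavior described above. This is a sketch, not the repository's actual gunicorn.conf.py; the bind address and worker formula are assumptions.

```python
# gunicorn.conf.py (sketch -- the provided file may use different values)
import multiprocessing

bind = "0.0.0.0:8080"

# Async worker class required for aiohttp applications
worker_class = "aiohttp.GunicornWebWorker"

# Scale workers with the available CPU cores
workers = multiprocessing.cpu_count() * 2 + 1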

Interacting With the A2A Server

Discovery (Well-Known Card)

The /.well-known/agent.json endpoint exposes the agent's identity, capabilities, descriptions, and supported message formats. Other agents can query this endpoint to automatically discover how to interact with your agent safely.
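As a sketch, the card can be fetched with a small aiohttp client; the base URL here is an assumption about where your server is listening.

```python
import asyncio
import aiohttp

def card_url(base_url: str) -> str:
    """Build the well-known discovery URL for a given server base URL."""
    return f"{base_url.rstrip('/')}/.well-known/agent.json"

async def fetch_agent_card(base_url: str) -> dict:
    """Fetch the agent card from a running A2A server."""
    async with aiohttp.ClientSession() as session:
        async with session.get(card_url(base_url)) as resp:
            resp.raise_for_status()
            return await resp.json()

if __name__ == "__main__":
    # Assumes the server from main.py is listening on localhost:8080
    card = asyncio.run(fetch_agent_card("http://localhost:8080"))
    print(card)
```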

Interacting with Tasks

When a complex operation starts, the server may yield a task ID. You can poll the status of this operation by performing a GET request to /a2a/tasks/{id}. This provides the current state, progress, and results of the background execution without blocking the main workflow.
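A polling loop could look like the sketch below; the `status` field name and its terminal values are assumptions about the task payload, so check the actual responses from your server.

```python
import asyncio
import aiohttp

def task_url(base_url: str, task_id: str) -> str:
    """Build the polling URL for a given task ID."""
    return f"{base_url.rstrip('/')}/a2a/tasks/{task_id}"

async def wait_for_task(base_url: str, task_id: str, interval: float = 2.0) -> dict:
    """Poll the task endpoint until the task reaches a terminal state."""
    async with aiohttp.ClientSession() as session:
        while True:
            async with session.get(task_url(base_url, task_id)) as resp:
                resp.raise_for_status()
                task = await resp.json()
            # "status" and the terminal values below are assumptions
            if task.get("status") in ("completed", "failed", "canceled"):
                return task
            await asyncio.sleep(interval)
```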

Sending Messages

You can communicate with the agent dynamically:

  • Synchronous: Send a payload to POST /a2a/message/send to receive a complete response block.
  • Streaming: For real-time updates and piece-by-piece output (useful for UIs or immediate agent feedback), use POST /a2a/message/stream.
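A synchronous send might look like the following sketch. The payload schema here is an assumption modeled on the A2A message format; consult the agent card for the message formats your server actually accepts.

```python
import asyncio
import aiohttp

def build_message_payload(text: str) -> dict:
    """Build a minimal user message (schema is an assumption, see agent card)."""
    return {"message": {"role": "user", "parts": [{"type": "text", "text": text}]}}

async def send_message(base_url: str, text: str) -> dict:
    """POST a message to /a2a/message/send and return the complete response."""
    url = f"{base_url.rstrip('/')}/a2a/message/send"
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=build_message_payload(text)) as resp:
            resp.raise_for_status()
            return await resp.json()

if __name__ == "__main__":
    reply = asyncio.run(send_message("http://localhost:8080", "Open a ticket, please"))
    print(reply)
```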

Security Models

The A2A protocol itself does not restrict who can reach your endpoints, and not every caller should have unrestrained access to your agent. Ensure you integrate proper security measures:

  • Authentication: Bind the endpoints with middleware (e.g., JWT, OAuth, or API Keys) so only trusted callers can post messages or retrieve sensitive tasks.
  • Network Boundaries: If deploying internally, consider isolating A2A servers within a VPC to limit exposure.
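As one way to do this, an aiohttp middleware can gate the /a2a/ routes behind a shared API key while leaving discovery public. This is a minimal sketch; the header name and key handling are assumptions, and real deployments should load the key from configuration rather than hard-coding it.

```python
from aiohttp import web

API_KEY = "change-me"  # placeholder -- load from env/.env in practice

def is_authorized(path: str, headers, api_key: str) -> bool:
    """Discovery stays public; every /a2a/ route requires the shared key."""
    if not path.startswith("/a2a/"):
        return True
    return headers.get("X-API-Key") == api_key

@web.middleware
async def api_key_middleware(request: web.Request, handler):
    if not is_authorized(request.path, request.headers, API_KEY):
        raise web.HTTPUnauthorized(reason="Invalid or missing API key")
    return await handler(request)

# Register it when building the app in main.py:
# app = web.Application(middlewares=[api_key_middleware])
```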

🤝 Community & Support


Built with ❤️ by the AI-Parrot Team
