This is an example LangGraph Python agent that uses an OpenAI LLM to answer questions about cryptocurrencies. The agent is A2A-compatible.
You can run this code locally and extend it to build your own Agent, as shown in this guide:
This agent is a crypto-focused single-node chatbot that receives a message, calls an OpenAI model, and returns an assistant response.
In LangGraph, a graph is a map of how your agent thinks and acts. It defines what steps the agent can take and how those steps connect.
Each node is one of those steps—for example:
- calling an AI model
- fetching data from an API
- deciding what to do next
When you build a LangGraph agent, you're basically creating a small workflow made of these nodes. The graph handles how messages move between them—so your agent can reason, make calls, and respond in a structured way.
In this example, there is only one node, which calls OpenAI.
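To make the node-and-edge idea concrete before looking at the real code, here is a tiny library-free sketch (plain Python, no LangGraph and no API calls): the state is a dict of messages, a "node" is a function from state to updated state, and the "graph" simply runs its nodes in order. The echo reply stands in for the OpenAI call.

```python
# Library-free sketch of a one-node workflow. A node takes the current
# state and returns an updated state; the runner threads state through
# each node in order, the way a graph moves messages between steps.

def call_model(state):
    """Stand-in for the OpenAI call: echo the latest user message."""
    last_user = state["messages"][-1]["content"]
    reply = {"role": "assistant", "content": f"You asked about: {last_user}"}
    return {"messages": state["messages"] + [reply]}

def run_graph(state, nodes):
    """Minimal runner: pass the state through each node in turn."""
    for node in nodes:
        state = node(state)
    return state

state = {"messages": [{"role": "user", "content": "What is Bitcoin?"}]}
result = run_graph(state, [call_model])
print(result["messages"][-1]["content"])  # → You asked about: What is Bitcoin?
```

In the real agent, LangGraph plays the role of `run_graph`, and `call_model` talks to an actual model instead of echoing.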
The agent logic is defined in `src/agent/graph.py`:
- The code imports components from `langgraph.graph` and `langgraph.runtime` for building and running the agent.
- `Context` is a class defining configurable parameters accessible to the runtime.
- `State` is a dataclass defining the agent's working memory: a list of message objects forming the conversation.
- `call_model` is a node responsible for interacting with the LLM (`gpt-4o-mini` from OpenAI). It receives the current conversation state and runtime context, sends the conversation to the model, and returns an updated message list that includes the assistant's response.
- `graph` defines the graph: `StateGraph` describes the workflow, with one node (`call_model`) that runs as soon as the graph starts, and `compile()` finalizes the workflow into an executable runtime graph.
- The agent follows the A2A protocol, meaning it can receive and send structured conversational messages to other agents. The `State` and `Context` schemas make it interoperable with other A2A-compatible components.