This guide explains how to deploy a LangGraph agent. There are two options: LangGraph Cloud and self-hosting.

The simplest and fastest way to deploy a LangGraph agent is through LangGraph Cloud. You can learn more in the official LangGraph documentation.

To deploy your agent, take these steps:
- Keep your API keys in a safe place and delete the .env file. Then push your local changes to GitHub. Important: never commit your API keys to the repository.
- In LangSmith, click Deployments in the left menu, then click New Deployment.
- Connect your GitHub account and select the repository with the project.
- Enter the deployment name.
- Add your OPENAI_API_KEY as an environment variable.
- Click Submit at the top and wait for your deployment.
- If everything is fine, you'll see the Currently deployed status. In the right panel, under API URL, you'll also find your agent's URL.
- Now you can interact with your agent:
  - Use your agent's API URL as the base URL for the calls.
  - The assistant will have the same ID as locally: fe096781-5601-53d2-b2f6-0d3403f7e9ca.
  - All production API calls require the x-api-key header for authorization. In this header, pass your LangSmith API key.
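The interaction rules above (base URL, assistant ID, x-api-key header) can be sketched as a small Python helper that prepares a Create Run, Wait for Output request. The URL and key values below are placeholders, and the helper name is illustrative; only the endpoint path, headers, and body shape come from this guide:

```python
import json
import os

# Placeholders -- substitute your deployment's API URL and your LangSmith API key.
AGENT_API_URL = os.environ.get("AGENT_API_URL", "https://my-agent.example.langgraph.app")
LANGSMITH_API_KEY = os.environ.get("LANGSMITH_API_KEY", "lsv2_placeholder")

# The assistant keeps the same ID in production as locally.
ASSISTANT_ID = "fe096781-5601-53d2-b2f6-0d3403f7e9ca"

def build_wait_request(question: str) -> tuple[str, dict, bytes]:
    """Prepare the URL, headers, and body for a Create Run, Wait for Output call."""
    url = f"{AGENT_API_URL}/runs/wait"
    headers = {
        "Content-Type": "application/json",
        "x-api-key": LANGSMITH_API_KEY,  # LangSmith key authorizes production calls
    }
    body = json.dumps({
        "assistant_id": ASSISTANT_ID,
        "input": {"messages": [{"role": "user", "content": question}]},
    }).encode("utf-8")
    return url, headers, body

url, headers, body = build_wait_request("What can you do?")
```

To actually send the request, hand these values to any HTTP client, for example urllib.request or requests.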
Once your agent is deployed, you can test it by running Create Run, Wait for Output.

POST AGENT_API_URL/runs/wait
Headers: Content-Type: application/json, x-api-key: LANGSMITH_API_KEY
Body:

{
  "assistant_id": "fe096781-5601-53d2-b2f6-0d3403f7e9ca",
  "input": {
    "messages": [
      {
        "role": "user",
        "content": "What can you do?"
      }
    ]
  }
}

The same call with curl:

curl AGENT_API_URL/runs/wait \
  --request POST \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: LANGSMITH_API_KEY' \
  --data '{
    "assistant_id": "fe096781-5601-53d2-b2f6-0d3403f7e9ca",
    "input": {
      "messages": [
        {
          "role": "user",
          "content": "What can you do?"
        }
      ]
    }
  }'

If your agent is in Python, you can also try A2A Post.
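A hypothetical Python sketch of the A2A request body follows. It assumes the blank id, messageId, and threadId fields in the example are meant to be client-generated, so it fills them with fresh UUIDs; adjust this to your client's conventions:

```python
import json
import uuid

def build_a2a_payload(text: str) -> dict:
    """Build a JSON-RPC 2.0 message/send payload for the A2A endpoint.

    The UUID fields are an assumption: the example leaves them blank,
    and A2A clients typically generate them per request.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),  # JSON-RPC request id
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            },
            "thread": {"threadId": str(uuid.uuid4())},
        },
    }

payload = build_a2a_payload("What can you do?")
body = json.dumps(payload)
```

POST the serialized body to AGENT_API_URL/a2a/&lt;assistant_id&gt; with Accept: application/json, Content-Type: application/json, and your LangSmith key in the x-api-key header.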
POST AGENT_API_URL/a2a/fe096781-5601-53d2-b2f6-0d3403f7e9ca
Headers: Accept: application/json, Content-Type: application/json, x-api-key: LANGSMITH_API_KEY
Body:

{
  "jsonrpc": "2.0",
  "id": "",
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [
        {
          "kind": "text",
          "text": "What can you do?"
        }
      ],
      "messageId": ""
    },
    "thread": {
      "threadId": ""
    }
  }
}

The same call with curl:

curl AGENT_API_URL/a2a/fe096781-5601-53d2-b2f6-0d3403f7e9ca \
  --request POST \
  --header 'Accept: application/json' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: LANGSMITH_API_KEY' \
  --data '{
    "jsonrpc": "2.0",
    "id": "",
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [
          {
            "kind": "text",
            "text": "What can you do?"
          }
        ],
        "messageId": ""
      },
      "thread": {
        "threadId": ""
      }
    }
  }'

You can also self-host the LangGraph runtime using any hosting stack. While this setup doesn't include LangSmith, the LangGraph Server API remains fully accessible.
For a simple example walkthrough, check out this video tutorial: Deploy ANY Langgraph AI Agent in Minutes!
Here are the main steps to take:
- Create a Dockerfile.
- Keep your API keys in a safe place and delete the .env file. Then push your local changes to GitHub. Important: never commit your API keys to the repository.
- Get environment variables for your Postgres database and key-value storage.
- Set up hosting:
  - Add environment variables
  - Set up key-value storage
  - Connect to your GitHub project
- Deploy. Your agent and LangGraph API endpoints will be available at a public URL.
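The steps above can be sanity-checked before deploying with a small script that verifies the environment variables your self-hosted runtime needs. The variable names here are illustrative (DATABASE_URI and REDIS_URI are common choices for the Postgres and key-value connections in LangGraph Server setups); match them to your own configuration:

```python
import os

# Illustrative variable names -- adjust to your own stack.
REQUIRED = [
    "OPENAI_API_KEY",  # model provider key
    "DATABASE_URI",    # Postgres connection string
    "REDIS_URI",       # key-value storage connection
]

def check_env(required: list[str]) -> list[str]:
    """Return the names of required variables that are missing or empty."""
    return [name for name in required if not os.environ.get(name)]

missing = check_env(REQUIRED)
if missing:
    print("Missing before deploy:", ", ".join(missing))
```

Run it in the same environment your hosting provider will use, so a missing key fails fast rather than at runtime.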