LangGraph Tutorial (Python): deploying to AWS Lambda for beginners
This tutorial shows you how to package a small LangGraph app as an AWS Lambda handler and deploy it behind API Gateway. You need this when you want your graph-driven agent to run as a serverless HTTP endpoint instead of a long-lived Python process.
What You'll Need
- Python 3.11 installed locally
- An AWS account with permission to create:
  - Lambda functions
  - IAM roles
  - API Gateway HTTP APIs
- AWS CLI configured locally with `aws configure`
- `pip` and `venv`
- A LangGraph-compatible LLM provider key, for example `OPENAI_API_KEY`
- Python packages:
  - `langgraph`
  - `langchain-openai`
  - `boto3`
- Basic familiarity with:
  - LangGraph state graphs
  - AWS Lambda handler signatures
  - ZIP-based Lambda deployments
Step-by-Step
- Start with a minimal graph that takes a user message and returns a model response. Keep the graph small; Lambda is easiest when your runtime is predictable and stateless.
from typing import TypedDict, Annotated

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    # add_messages appends new messages to the list instead of replacing it
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def chat_node(state: State):
    # Run the model over the accumulated conversation and return its reply
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


graph_builder = StateGraph(State)
graph_builder.add_node("chat", chat_node)
graph_builder.set_entry_point("chat")
graph_builder.add_edge("chat", END)
app = graph_builder.compile()
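Before touching Lambda, it's worth confirming the graph runs locally. A minimal check, assuming `OPENAI_API_KEY` is already set in your shell:

result = app.invoke({"messages": [("user", "Say hello in one sentence.")]})
print(result["messages"][-1].content)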
- Add a Lambda handler that accepts API Gateway events, extracts the prompt, runs the graph, and returns JSON. This keeps the deployment boundary simple: HTTP in, JSON out.
import json


def lambda_handler(event, context):
    # API Gateway HTTP APIs deliver the request body as a JSON string
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "Hello")
    result = app.invoke({"messages": [("user", prompt)]})
    answer = result["messages"][-1].content
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": answer}),
    }
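You can exercise the handler without deploying by faking the one field it reads. A sketch; the event below is a hypothetical, trimmed-down stand-in, since real API Gateway events carry many more fields:

if __name__ == "__main__":
    # Hypothetical minimal event: this handler only reads "body"
    fake_event = {"body": json.dumps({"prompt": "Write a haiku about queues"})}
    print(lambda_handler(fake_event, None))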
- Put the code into a single file named `app.py`, then create a local virtual environment and install dependencies into a deployment folder. For Lambda ZIP deployments, you want the code and all site-packages in one bundle.
mkdir langgraph-lambda
cd langgraph-lambda
python3.11 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install langgraph langchain-openai boto3 -t package
cp app.py package/
cd package
zip -r ../lambda.zip .
cd ..
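One caveat: if you build the bundle on macOS or an ARM machine, pip may pull compiled wheels that won't import on Lambda's Linux x86_64 runtime. One way around this is pip's cross-platform flags; a sketch, assuming an x86_64 Lambda:

pip install \
  --platform manylinux2014_x86_64 \
  --implementation cp \
  --python-version 3.11 \
  --only-binary=:all: \
  --target package \
  langgraph langchain-openai boto3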
- Create an execution role with basic logging permissions; you'll pass your API key to Lambda as an environment variable when you create the function in the next step. Lambda needs permission to write logs to CloudWatch even if your graph itself only calls external APIs.
aws iam create-role \
--role-name langgraph-lambda-role \
--assume-role-policy-document '{
"Version":"2012-10-17",
"Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]
}'
aws iam attach-role-policy \
--role-name langgraph-lambda-role \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
ROLE_ARN=$(aws iam get-role --role-name langgraph-lambda-role --query 'Role.Arn' --output text)
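Newly created roles can take a few seconds to become assumable by Lambda, so if the next step fails with a role error, a short pause usually fixes it:

# IAM is eventually consistent; give the new role a moment to propagate
sleep 10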
- Create the Lambda function from the ZIP file and point it at your handler. Then expose it through an HTTP API so you can call it from curl or any client.
aws lambda create-function \
--function-name langgraph-chat \
--runtime python3.11 \
--handler app.lambda_handler \
--role "$ROLE_ARN" \
--zip-file fileb://lambda.zip \
--timeout 30 \
--memory-size 512 \
--environment Variables="{OPENAI_API_KEY=$OPENAI_API_KEY}"
API_ID=$(aws apigatewayv2 create-api \
--name langgraph-chat-api \
--protocol-type HTTP \
--query 'ApiId' --output text)
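Before wiring up API Gateway, you can confirm the function works by invoking it directly with a synthetic event. A quick check (the inline payload mimics the `body` field the handler expects):

aws lambda invoke \
  --function-name langgraph-chat \
  --cli-binary-format raw-in-base64-out \
  --payload '{"body": "{\"prompt\": \"hello\"}"}' \
  response.json
cat response.json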
- Connect API Gateway to Lambda and test it end-to-end with a POST request. This is the point where you confirm that event parsing, graph execution, and response serialization all work together.
LAMBDA_ARN=$(aws lambda get-function \
--function-name langgraph-chat \
--query 'Configuration.FunctionArn' --output text)
INTEGRATION_ID=$(aws apigatewayv2 create-integration \
--api-id "$API_ID" \
--integration-type AWS_PROXY \
--integration-uri "$LAMBDA_ARN" \
--payload-format-version "2.0" \
--query 'IntegrationId' --output text)
aws apigatewayv2 create-route \
--api-id "$API_ID" \
--route-key "POST /chat" \
--target "integrations/$INTEGRATION_ID"
Testing It
Deploying successfully is not enough; you want one clean request path from API Gateway into Lambda and back out again. Use the invoke URL from API Gateway and send a JSON body like {"prompt":"Write one sentence about Lambdas"}.
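For example, with the placeholders above filled in (the `$default` stage means there is no stage prefix in the path):

curl -s -X POST "https://$API_ID.execute-api.$AWS_REGION.amazonaws.com/chat" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write one sentence about Lambdas"}'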
If everything is wired correctly, you should get back a JSON response with an "answer" field containing the model output. Check CloudWatch logs if you see timeouts, missing environment variables, or import errors from packaging.
A good smoke test is to call the endpoint twice with different prompts and verify that each request is independent. That confirms your graph is stateless across invocations, which is what you want for serverless agents.
Next Steps
- Add structured state to your LangGraph app so you can track conversation history safely across turns.
- Move secrets into AWS Secrets Manager instead of plain environment variables.
- Add retries and timeout handling around model calls for production resilience (see the sketch below).
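As a starting point for that last item, LangChain runnables expose `with_retry`; a minimal sketch (the retry count here is an arbitrary example, not a recommendation):

from langchain_openai import ChatOpenAI

# Wrap the model so transient API failures are retried before the
# Lambda invocation fails outright
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0).with_retry(
    stop_after_attempt=3,
)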
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.