LangGraph Tutorial (Python): Deploying to AWS Lambda for Intermediate Developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to package a LangGraph app in Python and run it on AWS Lambda behind an API Gateway endpoint. You need this when your graph is small enough for serverless, but you still want a clean HTTP interface, repeatable deployments, and no always-on server to manage.

What You'll Need

  • Python 3.11
  • AWS account with permission to create:
    • Lambda functions
    • IAM roles
    • API Gateway HTTP APIs
    • CloudWatch logs
  • awscli configured locally
  • pip and venv
  • These Python packages:
    • langgraph
    • langchain-core
    • boto3
  • An LLM provider API key if your graph calls one
    • Example: OPENAI_API_KEY
  • Basic familiarity with:
    • LangGraph state graphs
    • Lambda handler functions
    • ZIP-based Lambda deployments

Step-by-Step

  1. Start with a minimal LangGraph app that can run locally and inside Lambda. Keep the graph simple: one node that transforms input text, then returns structured JSON.
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class GraphState(TypedDict):
    text: str
    result: str


def uppercase_node(state: GraphState) -> dict:
    return {"result": state["text"].upper()}


graph = StateGraph(GraphState)
graph.add_node("uppercase", uppercase_node)
graph.add_edge(START, "uppercase")
graph.add_edge("uppercase", END)

app = graph.compile()
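
Before touching AWS, it is worth a quick local smoke test. A minimal check, with the code above saved as app.py (the filename the packaging step below expects):

from app import app

# Invoke the compiled graph exactly as the Lambda handler will.
result = app.invoke({"text": "hello lambda"})
print(result)  # expected: {'text': 'hello lambda', 'result': 'HELLO LAMBDA'}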
  2. Add a Lambda handler that accepts API Gateway requests and invokes the compiled graph. The key detail is to keep the handler stateless and return a plain JSON response.
import json

from app import app


def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    text = body.get("text", "")

    output = app.invoke({"text": text})

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(output),
    }
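
The handler can be exercised locally with a hand-built event before anything is deployed. A sketch, with the handler saved as lambda_function.py; real API Gateway events carry many more fields than this stub:

import json

from lambda_function import lambda_handler

# Minimal stand-in for an API Gateway HTTP API event.
fake_event = {"body": json.dumps({"text": "hello lambda"})}

response = lambda_handler(fake_event, None)
print(response["statusCode"], response["body"])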
  3. Package your dependencies into a deployment folder. For Lambda, you want everything in one directory so the runtime can import your modules without extra build logic.
mkdir -p build
python3.11 -m venv .venv
source .venv/bin/activate

pip install --upgrade pip
pip install langgraph langchain-core boto3 -t build

cp app.py lambda_function.py build/
cd build
zip -r ../langgraph-lambda.zip .
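
A quick listing of the archive can catch missing packages before they become a failed deploy:

# Spot-check the ZIP; your modules and the langgraph package should all appear.
unzip -l ../langgraph-lambda.zip | grep -E 'app.py|lambda_function.py|langgraph/'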
  4. Create the Lambda function with an IAM role that can write logs. Use the Python 3.11 runtime and point the handler at lambda_function.lambda_handler.
aws iam create-role \
  --role-name langgraph-lambda-role \
  --assume-role-policy-document '{
    "Version":"2012-10-17",
    "Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]
  }'

aws iam attach-role-policy \
  --role-name langgraph-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

ROLE_ARN=$(aws iam get-role --role-name langgraph-lambda-role --query 'Role.Arn' --output text)

aws lambda create-function \
  --function-name langgraph-tutorial \
  --runtime python3.11 \
  --handler lambda_function.lambda_handler \
  --role "$ROLE_ARN" \
  --timeout 30 \
  --memory-size 512 \
  --zip-file fileb://langgraph-lambda.zip
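
Before wiring up API Gateway, you can invoke the function directly and confirm the code imports cleanly. A sketch; the payload mirrors what API Gateway will put in the "body" field:

# --cli-binary-format is required on AWS CLI v2 to pass raw JSON payloads.
aws lambda invoke \
  --function-name langgraph-tutorial \
  --cli-binary-format raw-in-base64-out \
  --payload '{"body": "{\"text\": \"hello lambda\"}"}' \
  response.json

cat response.json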
  5. Expose the function through an HTTP API so you can call it from curl or your frontend. This keeps the deployment simple and avoids custom REST API setup unless you need it later.
API_ID=$(aws apigatewayv2 create-api \
  --name langgraph-http-api \
  --protocol-type HTTP \
  --target arn:aws:lambda:$(aws configure get region):$(aws sts get-caller-identity --query Account --output text):function:langgraph-tutorial \
  --query 'ApiId' \
  --output text)

aws lambda add-permission \
  --function-name langgraph-tutorial \
  --statement-id apigw-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:$(aws configure get region):$(aws sts get-caller-identity --query Account --output text):$API_ID/*/*"

echo "https://$API_ID.execute-api.$(aws configure get region).amazonaws.com"
  6. Update the code when your graph needs real LLM calls. The deployment pattern stays the same; only the node implementation changes, and secrets should come from environment variables. Note that ChatOpenAI ships in the separate langchain-openai package, so add it to the pip install in step 3 and rebuild the ZIP.
import os
from typing import TypedDict

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI


class GraphState(TypedDict):
    text: str
    result: str


llm = ChatOpenAI(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])
prompt = ChatPromptTemplate.from_messages(
    [("system", "Rewrite this text for a bank customer support agent."), ("human", "{text}")]
)


def rewrite_node(state: GraphState) -> dict:
    chain = prompt | llm
    response = chain.invoke({"text": state["text"]})
    return {"result": response.content}

Testing It

Call the API Gateway endpoint with a simple JSON payload and confirm that Lambda returns transformed output:

curl -X POST "https://YOUR_API_ID.execute-api.YOUR_REGION.amazonaws.com" \
  -H "Content-Type: application/json" \
  -d '{"text":"hello lambda"}'

You should get back JSON like:

{"text":"hello lambda","result":"HELLO LAMBDA"}

If it fails, check CloudWatch logs for import errors first. In practice, most broken Lambda deploys come from missing packages in the ZIP or using local paths that never made it into build/.
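
With AWS CLI v2 you can stream the function's logs from the terminal instead of the console:

# Import errors surface on the first invocation after a deploy.
aws logs tail /aws/lambda/langgraph-tutorial --follow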

Next Steps

  • Add persistent state with DynamoDB if your graph needs conversation history or audit trails.
  • Move packaging to AWS SAM or Terraform once manual ZIP deploys become annoying.
  • Replace the toy node with tool calling, then test cold starts and timeout behavior under real traffic.

Keep Learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
