LangGraph Tutorial (TypeScript): deploying to AWS Lambda for advanced developers
This tutorial shows how to package a LangGraph TypeScript app as an AWS Lambda handler and expose it through API Gateway. You need this when your graph is already working locally, but you want a serverless deployment that can handle stateless request/response traffic without running a long-lived Node process.
What You'll Need
- Node.js 20+
- AWS account with permission to create:
  - Lambda functions
  - API Gateway HTTP APIs
  - CloudWatch logs
- AWS CLI configured locally
- TypeScript project initialized with npm or pnpm
- Packages:
  - @langchain/langgraph
  - @langchain/openai
  - @aws-sdk/client-lambda (only if you plan to invoke other Lambdas from the graph)
  - esbuild
  - typescript
- An OpenAI API key set as an environment variable in Lambda: OPENAI_API_KEY
- A deployment path: AWS SAM, CDK, or a ZIP-based upload
- Basic familiarity with:
  - LangGraph state graphs
  - async/await in TypeScript
  - API Gateway event shapes
Step-by-Step
- Start with a minimal graph that is safe for Lambda. Keep the graph stateless across invocations and return plain JSON so API Gateway can serialize it cleanly.
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

// Module scope: constructed once per Lambda container and reused
// across warm invocations, not rebuilt on every request.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Two channels: the caller's prompt and the model's answer.
const GraphState = Annotation.Root({
  input: Annotation<string>(),
  output: Annotation<string>(),
});

const workflow = new StateGraph(GraphState)
  .addNode("generate", async (state) => {
    const response = await llm.invoke(state.input);
    return { output: response.content.toString() };
  })
  .addEdge("__start__", "generate")
  .addEdge("generate", "__end__");

export const app = workflow.compile();
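Before adding the Lambda wrapper, a quick local sanity check helps: invoke resolves to the final graph state, so with the channels above you get back both input and output. A minimal sketch, assuming OPENAI_API_KEY is set in your shell:

import { app } from "./graph";

// invoke() resolves to the final state after the graph reaches __end__.
const state = await app.invoke({ input: "Say hello in five words." });
console.log(state.output);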
- Add a Lambda handler that accepts API Gateway requests, extracts the prompt, runs the graph, and returns JSON. The important part is to keep the handler thin and let LangGraph do the orchestration.
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { app } from "./graph";

type RequestBody = {
  input?: string;
};

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  // Parse defensively: a malformed body should be a 400, not a crash.
  let body: RequestBody = {};
  try {
    body = event.body ? JSON.parse(event.body) : {};
  } catch {
    return {
      statusCode: 400,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ error: "Invalid JSON body" }),
    };
  }

  const input = body.input?.trim();
  if (!input) {
    return {
      statusCode: 400,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ error: "Missing input" }),
    };
  }

  // Run the graph; the handler stays thin and LangGraph orchestrates.
  const result = await app.invoke({ input });

  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify(result),
  };
};
- Build for Lambda using esbuild. This avoids shipping your whole TypeScript toolchain and keeps cold starts under control compared to a raw ts-node setup.
{
  "name": "langgraph-lambda",
  "private": true,
  "type": "module",
  "scripts": {
    "build": "esbuild src/handler.ts --bundle --platform=node --target=node20 --format=esm --outfile=dist/index.mjs",
    "zip": "cd dist && zip function.zip index.mjs"
  },
  "dependencies": {
    "@langchain/langgraph": "^0.2.0",
    "@langchain/openai": "^0.5.0"
  },
  "devDependencies": {
    "@types/aws-lambda": "^8.10.147",
    "esbuild": "^0.24.0",
    "typescript": "^5.6.3"
  }
}
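One bundling caveat worth knowing: when esbuild folds CommonJS dependencies into an ESM bundle, you can hit "Dynamic require of ... is not supported" at runtime. If that happens, the usual workaround is to inject a require shim through esbuild's banner option (shown here as a standalone command; fold it into the build script if you need it):

esbuild src/handler.ts --bundle --platform=node --target=node20 --format=esm \
  --outfile=dist/index.mjs \
  --banner:js="import { createRequire } from 'node:module'; const require = createRequire(import.meta.url);"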
- Deploy the bundled artifact to Lambda and wire it to an HTTP API. Use the Node.js 20 runtime, set the handler to index.handler, and configure OPENAI_API_KEY in the function environment.
npm run build
cd dist && zip function.zip index.mjs

aws lambda create-function \
  --function-name langgraph-ts-handler \
  --runtime nodejs20.x \
  --handler index.handler \
  --role arn:aws:iam::123456789012:role/lambda-exec-role \
  --zip-file fileb://function.zip \
  --environment Variables="{OPENAI_API_KEY=$OPENAI_API_KEY}"

aws apigatewayv2 create-api \
  --name langgraph-api \
  --protocol-type HTTP

aws apigatewayv2 create-integration \
  --api-id <your-api-id> \
  --integration-type AWS_PROXY \
  --integration-uri arn:aws:lambda:us-east-1:123456789012:function:langgraph-ts-handler \
  --payload-format-version 2.0
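On its own, create-integration exposes nothing yet: the HTTP API still needs a route pointing at the integration, a deployed stage, and permission for API Gateway to invoke the function. A sketch with placeholder IDs (the integration ID comes from the create-integration output; the $default route key is a catch-all, so the curl below can hit the bare endpoint):

aws apigatewayv2 create-route \
  --api-id <your-api-id> \
  --route-key '$default' \
  --target "integrations/<integration-id>"

aws apigatewayv2 create-stage \
  --api-id <your-api-id> \
  --stage-name '$default' \
  --auto-deploy

aws lambda add-permission \
  --function-name langgraph-ts-handler \
  --statement-id apigw-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:123456789012:<your-api-id>/*"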
- Make sure your Lambda has enough timeout and memory for model calls, then test it end-to-end with curl, setting $API_URL to the ApiEndpoint value returned by create-api. For LLM-backed graphs, start with at least a few seconds of timeout and increase if you add tools or multi-step nodes.
aws lambda update-function-configuration \
  --function-name langgraph-ts-handler \
  --timeout 30 \
  --memory-size 1024

curl -X POST "$API_URL" \
  -H "content-type: application/json" \
  -d '{"input":"Write one sentence about Lambda cold starts."}'
Testing It
First test locally by running the bundled handler logic in a small harness before you deploy; that catches import issues and ESM/CommonJS mistakes early. Then hit the deployed HTTP endpoint with a simple prompt and confirm you get back a JSON payload containing the graph result.
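A minimal harness might look like this; it is a sketch that fakes only the event fields the handler actually reads, so the casts are deliberately loose (run it with tsx or compile it alongside the handler):

import type { APIGatewayProxyEventV2, Context } from "aws-lambda";
import { handler } from "./handler";

// Fake API Gateway v2 event carrying only the field the handler reads.
const fakeEvent = {
  body: JSON.stringify({ input: "Say hi in five words." }),
} as unknown as APIGatewayProxyEventV2;

async function main() {
  // Context and callback are unused by this handler, so stubs suffice.
  const result = await handler(fakeEvent, {} as Context, () => {});
  console.log(result);
}

main();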
Watch CloudWatch logs for two things: Lambda initialization time and OpenAI request latency. If you see timeouts or intermittent failures, increase memory first; on Lambda, more memory also buys you more CPU.
For production validation, send several concurrent requests and confirm each invocation is independent. If your graph starts holding conversation state, move that state into DynamoDB or another external store instead of relying on process memory.
Next Steps
- Add streaming responses using LangGraph’s streaming APIs and API Gateway-compatible chunking patterns.
- Persist thread state in DynamoDB so you can support resumable workflows across Lambda invocations (see the sketch after this list).
- Wrap tool calls in separate nodes with retries and idempotency keys for bank-grade reliability.
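For the DynamoDB item above, LangGraph's checkpointer interface is the natural seam. A minimal sketch of the wiring, assuming the workflow builder from the graph module; MemorySaver here only illustrates the API, since in-memory checkpoints do not survive across Lambda containers and a real deployment would swap in a persistent (e.g. DynamoDB-backed) checkpointer implementation:

import { MemorySaver } from "@langchain/langgraph";
// `workflow` is the StateGraph builder from graph.ts above.

// Stand-in checkpointer: MemorySaver is in-memory only and will NOT
// survive across Lambda containers; replace it with a persistent
// implementation for real resumability.
const checkpointer = new MemorySaver();
const appWithMemory = workflow.compile({ checkpointer });

// Callers pass a thread_id; LangGraph loads and saves that thread's
// state through the checkpointer on every invocation.
const result = await appWithMemory.invoke(
  { input: "and what did I ask you before?" },
  { configurable: { thread_id: "user-123" } }
);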
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit