LangChain Tutorial (TypeScript): deploying to AWS Lambda for intermediate developers
This tutorial shows how to build a small LangChain TypeScript function and deploy it to AWS Lambda with a clean handler, minimal dependencies, and a deployment shape that works in production. You need this when you want an LLM-powered API endpoint without running a server, paying for idle compute, or managing long-lived infrastructure.
What You'll Need
- Node.js 20+
- An AWS account with permission to create Lambda functions, IAM roles, and CloudWatch Logs
- An OpenAI API key
- A TypeScript project initialized with `typescript`, `@types/node`, and `esbuild` (`@aws-sdk/client-lambda` is not needed for this tutorial, but useful later)
- LangChain packages: `langchain`, `@langchain/openai`, and `@langchain/core`
- The AWS CLI configured locally
- Basic familiarity with async/await, Lambda handlers, and environment variables
Step-by-Step
1. Start by creating a minimal project and installing only the packages you actually need. Keep the dependency list tight; Lambda cold starts get worse when you drag in unnecessary modules.

```bash
mkdir langchain-lambda-ts
cd langchain-lambda-ts
npm init -y
npm install langchain @langchain/openai @langchain/core
npm install -D typescript @types/node esbuild
npx tsc --init --rootDir src --outDir dist --module commonjs --target es2022 --moduleResolution node --esModuleInterop true
mkdir src
```
2. Create the Lambda handler in TypeScript. This example uses `ChatOpenAI` from `@langchain/openai` and returns a plain JSON response that API Gateway can proxy directly.
```typescript
// src/handler.ts
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

export const handler = async () => {
  const model = new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini",
    temperature: 0,
  });

  const result = await model.invoke([
    new HumanMessage("Write one sentence explaining what AWS Lambda is."),
  ]);

  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      answer: result.content,
    }),
  };
};
```
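One Lambda-specific refinement: anything created at module scope survives across warm invocations of the same execution environment, so an expensive client like `ChatOpenAI` should be constructed once, not on every request. A minimal sketch of the caching pattern, using a stand-in object in place of the real client so it runs anywhere:

```typescript
// Counts constructions so the caching behavior is observable.
let constructions = 0;

// Stand-in for an expensive client such as ChatOpenAI.
interface Client {
  id: number;
}

// Module-scope cache: persists while the Lambda execution
// environment stays warm, so the client is built only once.
let cached: Client | undefined;

function getClient(): Client {
  if (!cached) {
    constructions += 1;
    cached = { id: constructions };
  }
  return cached;
}

// A handler would call getClient() instead of constructing
// the client inside every invocation.
const handler = async () => {
  const client = getClient();
  return { statusCode: 200, body: JSON.stringify({ clientId: client.id }) };
};
```

In the real handler, the equivalent change is hoisting `new ChatOpenAI({...})` above `export const handler` (or wrapping it in a getter like this one).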
3. Add a build script that bundles the handler into one file for Lambda. Bundling matters because deployment is simpler when you ship one artifact instead of a full `node_modules` tree.
```json
{
  "name": "langchain-lambda-ts",
  "version": "1.0.0",
  "main": "dist/handler.js",
  "scripts": {
    "build": "esbuild src/handler.ts --bundle --platform=node --target=node20 --outfile=dist/handler.js",
    "typecheck": "tsc --noEmit"
  }
}
```
4. Build locally and test the function before touching AWS. If it fails here, don't waste time debugging IAM or runtime issues yet.

```bash
export OPENAI_API_KEY="your-openai-api-key"
npm run build
node -e "
const { handler } = require('./dist/handler.js');
handler().then(console.log).catch(console.error);
"
```
5. Create an IAM role for Lambda and deploy the bundle. The role only needs basic CloudWatch logging permissions for this example.

```bash
aws iam create-role \
  --role-name langchainLambdaRole \
  --assume-role-policy-document '{
    "Version":"2012-10-17",
    "Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]
  }'

aws iam attach-role-policy \
  --role-name langchainLambdaRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

ROLE_ARN=$(aws iam get-role \
  --role-name langchainLambdaRole \
  --query 'Role.Arn' \
  --output text)
```
6. Create the Lambda function and set the OpenAI key as an environment variable. Use the Node.js 20 runtime so your bundled code matches the runtime you tested locally.

```bash
zip -j function.zip dist/handler.js

aws lambda create-function \
  --function-name langchain-ts-demo \
  --runtime nodejs20.x \
  --handler handler.handler \
  --role "$ROLE_ARN" \
  --zip-file fileb://function.zip \
  --timeout 30 \
  --memory-size 512 \
  --environment Variables="{OPENAI_API_KEY=$OPENAI_API_KEY}"
```
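After the first deploy, code changes do not need `create-function` again; the flow is rebuild, re-zip, and update the function's code in place. A sketch using the function name from this tutorial:

```shell
npm run build
zip -j function.zip dist/handler.js
aws lambda update-function-code \
  --function-name langchain-ts-demo \
  --zip-file fileb://function.zip
```

Environment-variable changes go through `aws lambda update-function-configuration` instead; `update-function-code` only replaces the bundle.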
Testing It
Invoke the function directly from the CLI first. That gives you the raw response without API Gateway in the middle.
```bash
aws lambda invoke \
  --function-name langchain-ts-demo \
  response.json

cat response.json
```
You should see a `statusCode` of 200 and a JSON body containing the model output. If you get a timeout, check that your Lambda timeout is at least 30 seconds and that the function has outbound internet access (functions not attached to a VPC have it by default; inside a VPC you need a NAT gateway).
If you see import errors, your bundle is wrong; rebuild with esbuild and confirm you are importing from `@langchain/openai` and `@langchain/core/messages`. If you see auth errors, verify that `OPENAI_API_KEY` is set on the Lambda function, not just in your local shell.
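When the direct invoke fails with no obvious cause, the fastest diagnostic is the function's CloudWatch logs. AWS CLI v2 can stream them; the log group name follows Lambda's default `/aws/lambda/<function-name>` convention:

```shell
aws logs tail /aws/lambda/langchain-ts-demo --follow
```

Stack traces from the handler, including LangChain and OpenAI errors, land here automatically via the `AWSLambdaBasicExecutionRole` policy attached earlier.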
Next Steps
- Add API Gateway so this becomes an HTTP endpoint instead of a direct Lambda invocation.
- Replace the hardcoded prompt with request input parsed from `event.body`.
- Add structured output with Zod so downstream systems can consume stable JSON instead of free-form text.
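As a starting point for the `event.body` parsing mentioned above, here is a hedged sketch. The `ProxyEvent` interface and `parsePrompt` helper are illustrative names, assuming the API Gateway proxy integration's JSON event shape (a `body` string that may be base64-encoded):

```typescript
// Minimal slice of the API Gateway proxy event this handler cares about.
interface ProxyEvent {
  body?: string | null;
  isBase64Encoded?: boolean;
}

// Extracts a "prompt" string from the request body, with basic validation.
function parsePrompt(event: ProxyEvent): string {
  // API Gateway may deliver binary-safe bodies base64-encoded.
  const raw =
    event.isBase64Encoded && event.body
      ? Buffer.from(event.body, "base64").toString("utf8")
      : event.body ?? "";

  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("Request body must be valid JSON");
  }

  const prompt = (parsed as { prompt?: unknown }).prompt;
  if (typeof prompt !== "string" || prompt.trim() === "") {
    throw new Error("Missing 'prompt' string in request body");
  }
  return prompt;
}
```

In the handler, the parsed prompt would replace the hardcoded `HumanMessage` text, and a parse failure would map to a `statusCode: 400` response instead of a thrown error.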
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.