LangChain Tutorial (TypeScript): deploying to AWS Lambda for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build a small LangChain TypeScript app and deploy it to AWS Lambda behind a simple handler. You need this when you want your chain to run as a serverless API without managing servers, especially for chat, summarization, or classification jobs.

What You'll Need

  • Node.js 20+ installed (matches the Lambda runtime used below)
  • AWS account with permission to create:
    • Lambda functions
    • IAM roles
    • CloudWatch logs
  • An OpenAI API key
  • A TypeScript project initialized with:
    • typescript
    • @types/node
  • These runtime packages:
    • langchain
    • @langchain/openai
    • @langchain/core (provides the message classes used below)
    • @aws-sdk/client-lambda is optional; not needed for this tutorial, but useful later
  • AWS CLI configured locally:
    • aws configure
  • Basic familiarity with:
    • async/await
    • environment variables
    • JSON event payloads (a sample follows below)
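
For the last point, the only part of the API Gateway event this tutorial's handler reads is the body field, which arrives as a JSON string. A minimal event looks like this (the input value is just an illustration):

{
  "body": "{\"input\":\"Your text to summarize\"}"
}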

Step-by-Step

  1. Create a minimal TypeScript project and install the dependencies. Keep the dependency list small so the Lambda bundle stays manageable.
mkdir langchain-lambda-tutorial
cd langchain-lambda-tutorial

npm init -y
npm install langchain @langchain/openai @langchain/core
npm install -D typescript @types/node @types/aws-lambda esbuild

npx tsc --init --rootDir src --outDir dist --module NodeNext --target ES2022 \
  --moduleResolution NodeNext --esModuleInterop true --resolveJsonModule true \
  --strict true
mkdir src
  2. Add a LangChain function that takes input text and returns a model response. The chat model comes from @langchain/openai and the message classes from @langchain/core, which is the current import layout you want for new code.
// src/chain.ts
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

export async function runChain(input: string): Promise<string> {
  const response = await model.invoke([
    new HumanMessage(`Summarize this in one sentence: ${input}`),
  ]);

  // content is a plain string for text responses; guard the rare non-string case.
  return typeof response.content === "string"
    ? response.content
    : JSON.stringify(response.content);
}
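
Before touching Lambda, it helps to smoke-test the chain locally. The file name src/local.ts and the tsx runner are assumptions for this sketch, not part of the deployment (npm install -D tsx):

// src/local.ts — hypothetical local smoke test, never shipped to Lambda
import { runChain } from "./chain.js";

// Take input from the command line, with a fallback so it runs bare.
const input = process.argv[2] ?? "LangChain composes LLM calls into pipelines.";

runChain(input)
  .then((summary) => console.log(summary))
  .catch((err) => {
    console.error("Chain failed:", err);
    process.exit(1);
  });

Run it with OPENAI_API_KEY=your_openai_key npx tsx src/local.ts "some text to summarize".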
  3. Wrap that chain in an AWS Lambda handler. Lambda expects a plain exported function, so keep the interface simple and return JSON that API Gateway can forward directly.
// src/handler.ts
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { runChain } from "./chain.js";

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  const body = event.body ? JSON.parse(event.body) : {};
  const input = body.input ?? "LangChain on AWS Lambda";

  const result = await runChain(String(input));

  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ result }),
  };
};
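
The handler above either returns 200 or throws, and an unhandled throw surfaces through API Gateway as a generic 502. Here is a minimal sketch of the same handler with explicit error handling, so callers get a clean 400 or 500 instead; this is an optional variant, not required for the rest of the tutorial:

// src/handler.ts — same handler with basic error handling added
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { runChain } from "./chain.js";

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  let input: string;
  try {
    const body = event.body ? JSON.parse(event.body) : {};
    input = String(body.input ?? "LangChain on AWS Lambda");
  } catch {
    // Malformed JSON from the caller is a client error, not a server error.
    return { statusCode: 400, body: JSON.stringify({ error: "Invalid JSON body" }) };
  }

  try {
    const result = await runChain(input);
    return {
      statusCode: 200,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ result }),
    };
  } catch (err) {
    // Log the real error to CloudWatch; return a stable shape to the caller.
    console.error(err);
    return { statusCode: 500, body: JSON.stringify({ error: "Chain execution failed" }) };
  }
};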
  4. Add a build script and compile to JavaScript before deployment. Lambda runs the emitted JS, not your TypeScript source. Emitting index.mjs (rather than index.js) matters here: the zip contains no package.json, so Lambda only treats the file as an ES module if it has the .mjs extension.
{
  "name": "langchain-lambda-tutorial",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "build": "esbuild src/handler.ts --bundle --platform=node --target=node20 --format=esm --outfile=dist/index.mjs"
  },
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/openai": "^0.6.0",
    "langchain": "^0.3.0"
  }
}
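
After npm run build, you can exercise the bundled handler directly in Node with a fake event before zipping anything. The one-liner below is an assumption for local checking only; it passes no context object because the handler never reads one:

OPENAI_API_KEY=your_openai_key node -e '
  import("./dist/index.mjs").then(async (m) => {
    const res = await m.handler({ body: JSON.stringify({ input: "Lambda bundles" }) });
    console.log(res);
  });
'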
  5. Deploy the bundle to AWS Lambda and set the environment variable there. For beginners, the easiest path is a zip deployment plus an HTTP API in API Gateway.
npm run build

cd dist
zip function.zip index.mjs
cd ..

aws lambda create-function \
  --function-name langchain-ts-lambda \
  --runtime nodejs20.x \
  --handler index.handler \
  --role arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_LAMBDA_ROLE \
  --zip-file fileb://dist/function.zip \
  --environment Variables="{OPENAI_API_KEY=your_openai_key}"

If you already created the function, update its code and configuration instead of creating it again:

aws lambda update-function-code \
  --function-name langchain-ts-lambda \
  --zip-file fileb://dist/function.zip

aws lambda update-function-configuration \
  --function-name langchain-ts-lambda \
  --environment Variables="{OPENAI_API_KEY=your_openai_key}"
  6. Expose it through API Gateway or test it directly in Lambda first. Direct invocation is faster for debugging because you can see whether your handler, env vars, and model call are all working. Note the --cli-binary-format flag, which the AWS CLI v2 requires for raw JSON payloads.
aws lambda invoke \
  --function-name langchain-ts-lambda \
  --cli-binary-format raw-in-base64-out \
  --payload '{"body":"{\"input\":\"AWS Lambda is good for short-lived workloads\"}"}' \
  response.json

cat response.json

Testing It

Start by invoking the function directly with the AWS CLI so you can isolate Lambda issues from API Gateway issues. If you get a valid JSON response with a result field, your bundle, handler name, and OpenAI key are all wired correctly.

If it fails, check CloudWatch logs first. The usual problems are a missing OPENAI_API_KEY, a handler setting that doesn't match the bundle (it must be index.handler for a file named index.mjs), or bundling mistakes caused by mixing CommonJS and ESM. If the log shows a "Dynamic require ... is not supported" error, injecting a createRequire shim with esbuild's --banner:js option is the usual fix.

Once direct invocation works, connect API Gateway and send a POST request with an input field in JSON. At that point you have a real serverless LangChain endpoint that can be called from any frontend or backend service.
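
One way to wire that up from the CLI is API Gateway's HTTP API "quick create", which builds a $default route that proxies straight to the function. The API name, region, and account ID below are placeholders you'd substitute:

aws apigatewayv2 create-api \
  --name langchain-ts-api \
  --protocol-type HTTP \
  --target arn:aws:lambda:YOUR_REGION:YOUR_ACCOUNT_ID:function:langchain-ts-lambda

aws lambda add-permission \
  --function-name langchain-ts-lambda \
  --statement-id apigateway-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com

The create-api output includes an ApiEndpoint; POST to it with curl:

curl -X POST "https://YOUR_API_ID.execute-api.YOUR_REGION.amazonaws.com/" \
  -H "content-type: application/json" \
  -d '{"input":"API Gateway now fronts the chain"}'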

Next Steps

  • Add structured output with Zod so your Lambda returns typed JSON instead of plain text (see the sketch after this list).
  • Replace the single prompt with a real chain using RunnableSequence for multi-step workflows.
  • Move secrets into AWS Secrets Manager instead of storing them as plain Lambda environment variables.
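
As a sketch of the first bullet, here is one way structured output can look with Zod and withStructuredOutput, which binds a schema so the model returns parsed, typed JSON. The schema fields and function name are illustrative assumptions, and zod must be installed (npm install zod):

// Hypothetical structured-output variant of src/chain.ts
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// Illustrative schema: the field names are this sketch's own choice.
const summarySchema = z.object({
  summary: z.string().describe("One-sentence summary of the input"),
  topics: z.array(z.string()).describe("Key topics mentioned in the input"),
});

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

// withStructuredOutput returns a runnable whose output matches the schema.
const structuredModel = model.withStructuredOutput(summarySchema);

export async function runStructuredChain(input: string) {
  return structuredModel.invoke(`Summarize this: ${input}`);
}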

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
