# LangChain Tutorial (Python): deploying to AWS Lambda for beginners
This tutorial shows you how to package a small LangChain app as an AWS Lambda function and call it through API Gateway. You’d do this when you want a serverless endpoint for summarization, classification, or chat without running a persistent API server.
## What You'll Need

- Python 3.11 installed locally
- An AWS account with permission to create:
  - Lambda functions
  - IAM roles
  - API Gateway HTTP APIs
- AWS CLI configured locally (`aws configure`)
- A valid OpenAI API key
- These Python packages:
  - `langchain`
  - `langchain-openai`
  - `boto3` is not required for the Lambda code here, but it is useful for broader AWS work
- A zip-based deployment workflow, or the AWS Console's inline editor for quick tests
## Step-by-Step

- Start with a minimal LangChain chain that can run inside Lambda.

Keep the code small and dependency-light, because Lambda cold starts get worse as your package grows.

```python
import os

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate


def build_chain():
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a concise assistant."),
        ("user", "{question}"),
    ])
    llm = ChatOpenAI(
        model="gpt-4o-mini",
        api_key=os.environ["OPENAI_API_KEY"],
        temperature=0,
    )
    return prompt | llm


# Built once at module import time, so warm invocations reuse the same chain.
chain = build_chain()
```
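Because Lambda keeps the module alive between warm invocations, anything built at import time is reused across requests. If you would rather defer the first (slow) build until the first request arrives, a lazy-init pattern also works. This is a generic sketch, not LangChain-specific; `builder` stands in for a function like `build_chain`:

```python
_chain = None


def get_chain(builder):
    """Build the chain on first use and cache it for later warm invocations."""
    global _chain
    if _chain is None:
        _chain = builder()
    return _chain
```

Either approach amortizes the construction cost; the import-time version simply pays it during the cold start instead of on the first request.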
- Add the Lambda handler that accepts an HTTP event and returns JSON.

This version expects API Gateway proxy events, which is the simplest path for beginners.

```python
import json


def lambda_handler(event, context):
    # API Gateway proxy events deliver the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    question = body.get("question", "What is LangChain?")
    result = chain.invoke({"question": question})
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "answer": result.content,
        }),
    }
```
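The handler above assumes an API Gateway proxy event with a JSON string under `"body"`. Test events sent via `aws lambda invoke` or the console often have no `body` wrapper, so a small helper that tolerates both shapes can save debugging time. `extract_question` is a hypothetical helper, not part of the tutorial code:

```python
import json


def extract_question(event, default="What is LangChain?"):
    # API Gateway proxy events wrap the request JSON in a "body" string;
    # direct invokes may pass the payload dict as the event itself.
    if isinstance(event.get("body"), str):
        payload = json.loads(event["body"] or "{}")
    else:
        payload = event
    return payload.get("question", default)
```

With a helper like this, the same handler works unchanged whether the request came through API Gateway or a direct test invoke.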
- Put the code in a file named `lambda_function.py` and install dependencies into a local folder.

Lambda needs the packages bundled with your deployment artifact unless you use a layer. If you build the package on macOS or Windows, add `--platform manylinux2014_x86_64 --only-binary=:all:` (plus `--python-version 3.11`) so pip pulls Linux-compatible wheels; compiled dependencies such as pydantic must match Lambda's Linux runtime.

```shell
mkdir -p lambda_pkg
pip install --target lambda_pkg langchain langchain-openai openai pydantic typing_extensions
cp lambda_function.py lambda_pkg/
cd lambda_pkg
zip -r ../langchain-lambda.zip .
cd ..
```
- Create the Lambda function and set the OpenAI key as an environment variable.

Use an execution role with basic CloudWatch logging permissions so you can inspect failures, and raise the timeout above the 3-second default, since model calls routinely take longer. If `create-function` fails because the role cannot be assumed yet, wait a few seconds and retry; new IAM roles take a moment to propagate.

```shell
aws iam create-role \
  --role-name langchain-lambda-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam attach-role-policy \
  --role-name langchain-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

ROLE_ARN=$(aws iam get-role --role-name langchain-lambda-role --query 'Role.Arn' --output text)

aws lambda create-function \
  --function-name langchain-tutorial \
  --runtime python3.11 \
  --handler lambda_function.lambda_handler \
  --role "$ROLE_ARN" \
  --zip-file fileb://langchain-lambda.zip \
  --timeout 30 \
  --environment Variables="{OPENAI_API_KEY=$OPENAI_API_KEY}"
```
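If the environment variable never makes it onto the function, `ChatOpenAI` fails with a bare `KeyError` deep in the stack. A guard that fails fast with a readable message is much easier to spot in CloudWatch; this is an optional sketch, not part of the deployed code above:

```python
import os


def require_env(name):
    """Return the value of an environment variable, or fail with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set on the Lambda function")
    return value
```

You would then call, for example, `require_env("OPENAI_API_KEY")` inside `build_chain` instead of indexing `os.environ` directly.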
- Add an HTTP API in front of the function so you can call it from curl or Postman.

This gives you a clean HTTPS endpoint without managing servers.

```shell
API_ID=$(aws apigatewayv2 create-api \
  --name langchain-http-api \
  --protocol-type HTTP \
  --query 'ApiId' \
  --output text)

LAMBDA_ARN=$(aws lambda get-function \
  --function-name langchain-tutorial \
  --query 'Configuration.FunctionArn' \
  --output text)

INTEGRATION_ID=$(aws apigatewayv2 create-integration \
  --api-id "$API_ID" \
  --integration-type AWS_PROXY \
  --integration-uri "$LAMBDA_ARN" \
  --payload-format-version "2.0" \
  --query 'IntegrationId' \
  --output text)

aws apigatewayv2 create-route \
  --api-id "$API_ID" \
  --route-key "POST /ask" \
  --target "integrations/$INTEGRATION_ID"
```
- Create a `$default` stage with auto-deploy and grant API Gateway permission to invoke Lambda.

A plain `create-api` call does not include a stage, so the API is not reachable until you add one; with `--auto-deploy`, route and integration changes are deployed automatically. Without the invoke permission, the route exists but requests will fail with authorization errors.

```shell
aws apigatewayv2 create-stage \
  --api-id "$API_ID" \
  --stage-name '$default' \
  --auto-deploy

REGION=$(aws configure get region)
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

aws lambda add-permission \
  --function-name langchain-tutorial \
  --statement-id apigw-invoke-permission \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:$REGION:$ACCOUNT_ID:$API_ID/*/*/ask"

API_ENDPOINT=$(aws apigatewayv2 get-api \
  --api-id "$API_ID" \
  --query 'ApiEndpoint' \
  --output text)

echo "$API_ENDPOINT/ask"
```
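The `--source-arn` wildcards are easy to get wrong: the format is `api-id/stage/method/route-path`, so `/*/*/ask` means any stage, any method, path `/ask`. A quick way to sanity-check the string you are building (the values below are placeholders):

```python
def execute_api_source_arn(region, account_id, api_id, route_path, stage="*", method="*"):
    # Format: arn:aws:execute-api:<region>:<account>:<api-id>/<stage>/<method>/<route-path>
    return f"arn:aws:execute-api:{region}:{account_id}:{api_id}/{stage}/{method}/{route_path}"
```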
## Testing It

Send a POST request with a JSON body containing `question`. If everything is wired correctly, you should get back a JSON response with an `answer` field containing the model output.

```shell
curl -X POST "$API_ENDPOINT/ask" \
  -H "Content-Type: application/json" \
  -d '{"question":"Explain AWS Lambda in one sentence."}'
```
If it fails, check CloudWatch Logs for the Lambda function first. The most common issues are missing environment variables, incorrect package versions, or forgetting to add invoke permissions between API Gateway and Lambda.
Also verify that your deployment package includes all dependencies at the top level of the zip file, not nested inside another directory. If Python cannot import `langchain_openai` or `langchain_core`, your zip structure is wrong.
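One quick way to check the layout is to list the archive and confirm `lambda_function.py` and the package directories sit at the root. A stdlib sketch (the function name is illustrative):

```python
import zipfile


def zip_has_top_level(zip_path, names):
    """Return True if every name in `names` exists at the top level of the zip."""
    with zipfile.ZipFile(zip_path) as zf:
        top_level = {entry.split("/")[0] for entry in zf.namelist()}
    return all(name in top_level for name in names)
```

For example, `zip_has_top_level("langchain-lambda.zip", ["lambda_function.py", "langchain_openai"])` should return `True` for a correctly built package.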
## Next Steps

- Move your OpenAI key into AWS Secrets Manager instead of an environment variable.
- Add input validation and structured responses using Pydantic models.
- Replace the simple prompt with a retrieval chain backed by S3, DynamoDB, or OpenSearch.
## Keep Learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.