AutoGen Tutorial (TypeScript): deploying to AWS Lambda for intermediate developers
This tutorial shows how to package an AutoGen TypeScript agent as an AWS Lambda function and invoke it through API Gateway. You’d use this when you want a lightweight, serverless agent endpoint without running a long-lived Node process.
What You'll Need
- Node.js 20+
- AWS account with permissions for:
  - Lambda
  - IAM roles
  - API Gateway
  - CloudWatch Logs
- AWS CLI configured locally
- An OpenAI API key
- A TypeScript project with these packages:
  - `@autogenai/autogen`
  - `openai`
  - `zod`
  - `esbuild`
  - `typescript`
  - `@types/aws-lambda`
- Basic familiarity with AutoGen agents and async/await
Step-by-Step
- Create a minimal TypeScript project and install the dependencies. Keep the runtime simple: one Lambda handler, one agent, one model client.

```shell
mkdir autogen-lambda && cd autogen-lambda
npm init -y
npm i @autogenai/autogen openai zod
npm i -D typescript esbuild @types/aws-lambda @types/node
npx tsc --init --rootDir src --outDir dist --module nodenext --target es2022 --moduleResolution nodenext --esModuleInterop true
mkdir src
```
- Build the agent in a way that works in Lambda. The key detail is to initialize the OpenAI client from environment variables and keep the handler stateless so each invocation is independent.

```typescript
// src/agent.ts
import { AssistantAgent } from "@autogenai/autogen";
import OpenAI from "openai";

const modelClient = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export function createAgent() {
  return new AssistantAgent({
    name: "lambda_assistant",
    modelClient,
    systemMessage:
      "You are a concise assistant that answers in plain English.",
  });
}
```
- Add the Lambda handler. This version accepts JSON input, runs one AutoGen turn, and returns a clean API Gateway response. If your payload is invalid, it fails fast with a useful error message.

```typescript
// src/handler.ts
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { createAgent } from "./agent.js";

type RequestBody = {
  message?: string;
};

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  try {
    const body: RequestBody = event.body ? JSON.parse(event.body) : {};
    if (!body.message) {
      return {
        statusCode: 400,
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ error: "message is required" }),
      };
    }
    const agent = createAgent();
    const result = await agent.run(body.message);
    return {
      statusCode: 200,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        reply: result.messages.at(-1)?.content ?? "",
      }),
    };
  } catch (err) {
    console.error(err);
    return {
      statusCode: 500,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ error: "internal_server_error" }),
    };
  }
};
```
- Add a build script that bundles everything into one file for Lambda. Esbuild keeps deployment simple because Lambda only needs the bundled output plus your environment variables.

```javascript
// build.mjs
import esbuild from "esbuild";

await esbuild.build({
  entryPoints: ["src/handler.ts"],
  bundle: true,
  platform: "node",
  target: "node20",
  outfile: "dist/index.js",
  format: "cjs",
  sourcemap: true,
});
```
- Compile and package the function, then deploy it to Lambda. Use an execution role that can write logs, and set `OPENAI_API_KEY` as a Lambda environment variable. Make sure package.json declares the build script:

```json
{
  "name": "autogen-lambda",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "build": "node build.mjs"
  }
}
```
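The create-role command below reads a trust-policy.json that lets the Lambda service assume the role. The file itself is not shown in this tutorial; a minimal version is the standard Lambda trust policy, which you can write like this:

```shell
# Write the IAM trust policy that allows Lambda to assume the execution role.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
```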
```shell
npm run build
# Package the bundle so index.js sits at the root of function.zip.
zip -j function.zip dist/index.js

aws iam create-role \
  --role-name autogenLambdaRole \
  --assume-role-policy-document file://trust-policy.json

aws iam attach-role-policy \
  --role-name autogenLambdaRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

aws lambda create-function \
  --function-name autogen-typescript-agent \
  --runtime nodejs20.x \
  --handler index.handler \
  --role arn:aws:iam::<YOUR_ACCOUNT_ID>:role/autogenLambdaRole \
  --zip-file fileb://function.zip \
  --environment Variables="{OPENAI_API_KEY=$OPENAI_API_KEY}"
```
The create-function call uploads function.zip, so the archive must contain the bundled index.js at its root (`zip -j function.zip dist/index.js` does this); the `index.handler` setting then resolves to the `handler` export in that file.
- Put API Gateway in front of it so you can call it over HTTPS. For intermediate teams, this is usually the practical shape: clients send JSON, Lambda runs AutoGen, and the response comes back synchronously.

```shell
aws lambda add-permission \
  --function-name autogen-typescript-agent \
  --statement-id apigw-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com

# Create an HTTP API in API Gateway and integrate it with the Lambda.
# Then send POST requests like this:
curl -X POST "$API_URL" \
  -H 'content-type: application/json' \
  -d '{"message":"Summarize why Lambda is useful for AI agents."}'
```
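If you would rather script the API creation than click through the console, API Gateway's quick-create can wire an HTTP API to the Lambda in one call. The API name and the region/account in the ARN below are placeholders for your own values:

```shell
# Quick-create an HTTP API with a default route that proxies to the Lambda.
aws apigatewayv2 create-api \
  --name autogen-agent-api \
  --protocol-type HTTP \
  --target arn:aws:lambda:us-east-1:<YOUR_ACCOUNT_ID>:function:autogen-typescript-agent
# The response includes an ApiEndpoint; export it as API_URL for the curl call.
```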
Testing It
Start by invoking the function directly in the AWS console or with the CLI so you can isolate Lambda issues from API Gateway issues. Check CloudWatch Logs for cold-start errors, missing environment variables, or import problems caused by bundling.
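For the direct invocation, the CLI can send a synthetic API Gateway v2 event. The payload below mimics only the `body` field the handler actually reads; `response.json` is just a local output file name:

```shell
# Invoke the function directly; the escaped inner JSON becomes event.body.
aws lambda invoke \
  --function-name autogen-typescript-agent \
  --cli-binary-format raw-in-base64-out \
  --payload '{"body":"{\"message\":\"hello\"}"}' \
  response.json
cat response.json
```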
Then test the HTTP endpoint with a simple JSON body containing message. If you get a 400, your request shape is wrong; if you get a 500, inspect logs for OpenAI authentication or model-client initialization failures.
A good production test is to run three calls in a row:
- one valid prompt
- one empty payload
- one malformed JSON payload
That tells you whether your validation and error handling are actually doing their job.
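If you want to exercise those three cases without deploying, the handler's request-validation path can be mirrored locally. Here `validate` is a hypothetical helper that reproduces the checks in src/handler.ts (it is not an export of that file); a null result means the request would proceed to the agent:

```typescript
// Local sketch of the handler's validation: null means "would run the
// agent"; otherwise the error response the handler would return.
type ErrorResponse = { statusCode: number; body: string };

function validate(rawBody: string | undefined): ErrorResponse | null {
  try {
    const parsed = rawBody ? JSON.parse(rawBody) : {};
    if (!parsed.message) {
      return { statusCode: 400, body: JSON.stringify({ error: "message is required" }) };
    }
    return null; // valid: the real handler would now call agent.run()
  } catch {
    // Malformed JSON lands in the handler's catch-all, which returns 500.
    return { statusCode: 500, body: JSON.stringify({ error: "internal_server_error" }) };
  }
}

console.log(validate('{"message":"hi"}'));      // null
console.log(validate(undefined)?.statusCode);   // 400
console.log(validate("{not json")?.statusCode); // 500
```

Running this confirms that an empty payload is rejected with a 400 while malformed JSON surfaces as a 500, which is why the log-inspection advice above matters for distinguishing parse failures from OpenAI errors.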
Next Steps
- Add structured tool calling so the agent can query internal systems instead of only chatting.
- Move secrets to AWS Secrets Manager instead of plain environment variables.
- Add timeout controls and token limits so your Lambda stays inside cost and latency budgets.
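For the timeout point, one pattern that fits the existing handler is racing the agent call against a timer. `withTimeout` here is a hypothetical helper, not an AutoGen or AWS API:

```typescript
// Reject if the wrapped promise doesn't settle within `ms`, so the
// handler can return a clean error before Lambda's own hard timeout.
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}

// In the handler, this would wrap the agent call, e.g.:
// const result = await withTimeout(agent.run(body.message), 25_000);
```

Keeping the budget (25 seconds here) below the Lambda function timeout means the caller gets a JSON error instead of an opaque gateway timeout.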
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.