Haystack Tutorial (TypeScript): deploying to AWS Lambda for intermediate developers
This tutorial shows how to package a Haystack pipeline written in TypeScript and run it on AWS Lambda behind a simple handler. You’d use this when you want serverless inference or retrieval without managing a long-running Node service.
What You'll Need
- Node.js 20+
- An AWS account with permission to create:
  - Lambda functions
  - IAM roles
  - CloudWatch logs
- AWS CLI configured locally
- A TypeScript project with:
  - `typescript`
  - `ts-node` or `tsx` for local runs
  - `@aws-sdk/client-bedrock-runtime` if you call Bedrock models
  - `@haystack-ai/core` and any Haystack component packages you use
- An LLM provider API key if your pipeline calls an external model instead of Bedrock
- A bundler for Lambda, such as:
  - `esbuild`
  - `tsup`
- Basic familiarity with:
  - Haystack pipelines
  - async/await in TypeScript
  - AWS Lambda handlers
Step-by-Step
- Start with a minimal Haystack pipeline that can run locally and inside Lambda. Keep the pipeline stateless, because Lambda instances are ephemeral and may be reused across invocations.
```ts
// pipeline.ts
import { Pipeline } from "@haystack-ai/core";
import { OpenAIChatGenerator } from "@haystack-ai/openai";

// Created at module scope so warm Lambda invocations reuse the same instances.
const llm = new OpenAIChatGenerator({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

const pipeline = new Pipeline();
pipeline.addComponent("llm", llm);

export async function runPrompt(prompt: string): Promise<string> {
  const result = await pipeline.run({
    llm: {
      messages: [{ role: "user", content: prompt }],
    },
  });
  return result.llm.replies[0].content;
}
```
- Add an AWS Lambda handler that validates input and returns JSON. Keep the handler thin; all the actual AI logic should live in a separate module so you can test it outside Lambda.
```ts
// handler.ts
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { runPrompt } from "./pipeline.js";

type Body = {
  prompt?: unknown;
};

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  // Reject malformed JSON instead of letting JSON.parse crash the invocation.
  let body: Body = {};
  try {
    body = event.body ? JSON.parse(event.body) : {};
  } catch {
    return {
      statusCode: 400,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ error: "body must be valid JSON" }),
    };
  }

  const prompt = typeof body.prompt === "string" ? body.prompt.trim() : "";
  if (!prompt) {
    return {
      statusCode: 400,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ error: "prompt is required" }),
    };
  }

  const answer = await runPrompt(prompt);
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ answer }),
  };
};
```
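Because the handler is thin, you can smoke-test it outside Lambda with a hand-built event. A minimal sketch; the mock event only carries the one field the handler reads, and the casts paper over the API Gateway fields we don't mock (the file name `test-handler.ts` is just a convention):
```ts
// test-handler.ts: invoke the handler locally with a mock API Gateway v2 event.
import { handler } from "./handler.js";

const event = { body: JSON.stringify({ prompt: "Say hello" }) };

// Context and callback are unused by our handler, so empty stand-ins suffice.
const res = await handler(event as any, {} as any, () => {});
console.log(res);
```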
- Wire up your project for Lambda bundling. Use `esbuild` so the deployment artifact is small and includes only what Lambda needs at runtime.
```json
{
  "name": "haystack-lambda",
  "private": true,
  "type": "module",
  "scripts": {
    "build": "esbuild src/handler.ts --bundle --platform=node --target=node20 --format=esm --outdir=dist",
    "start": "node dist/handler.js"
  },
  "dependencies": {
    "@haystack-ai/core": "^1.0.0",
    "@haystack-ai/openai": "^1.0.0"
  },
  "devDependencies": {
    "@types/aws-lambda": "^8.10.147",
    "@types/node": "^22.0.0",
    "esbuild": "^0.25.0",
    "typescript": "^5.8.0"
  }
}
```
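One gotcha worth knowing: Lambda's Node.js runtime treats `.js` files as CommonJS unless a `package.json` with `"type": "module"` ships in the bundle, which is why the deployment zip later in this tutorial includes `package.json`. If you'd rather not ship it, esbuild can emit an `.mjs` file directly:
```bash
npx esbuild src/handler.ts --bundle --platform=node --target=node20 \
  --format=esm --outfile=dist/handler.mjs
```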
- Add a TypeScript config that matches Lambda’s Node runtime and emits ESM-friendly output. This avoids common deployment issues like module resolution mismatches and broken imports after bundling.
```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "Bundler",
    "strict": true,
    "skipLibCheck": true,
    "resolveJsonModule": true,
    "types": ["node", "aws-lambda"],
    "noEmit": true
  },
  "include": ["src/**/*.ts"]
}
```
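Because `noEmit` is set, `tsc` acts as a pure type checker here; esbuild strips types without verifying them, so it's worth running the checker before every bundle:
```bash
npx tsc --noEmit && npm run build
```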
- Deploy the bundle to AWS Lambda and set environment variables for your model provider. If you’re using OpenAI, store the key in Lambda config; if you’re using Bedrock, attach the correct IAM permissions instead of an API key.
```bash
npm install
npm run build

# esbuild bundles all dependencies into dist/, so node_modules stays out of the zip.
zip -r function.zip dist package.json

aws lambda create-function \
  --function-name haystack-ts-lambda \
  --runtime nodejs20.x \
  --handler dist/handler.handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-execution-role \
  --environment Variables="{OPENAI_API_KEY=your-key-here}"
```
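If you call Bedrock instead of OpenAI, drop the API key and grant the execution role invoke permissions. A minimal policy sketch; in production, scope `Resource` down to the specific model ARN you call:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/*"
    }
  ]
}
```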
Testing It
Invoke the function with a simple JSON payload and confirm you get a structured response back, not plain text or an unhandled exception. Check CloudWatch logs if the function times out or returns a module load error, because those usually mean your bundle format or runtime target is wrong.
```bash
# The handler expects an API Gateway-shaped event, so wrap the prompt in a "body" string.
aws lambda invoke \
  --function-name haystack-ts-lambda \
  --cli-binary-format raw-in-base64-out \
  --payload '{"body":"{\"prompt\":\"Write one sentence about AWS Lambda\"}"}' \
  response.json

cat response.json
```
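To watch the logs live while you invoke, tail the function's log group (it follows Lambda's default `/aws/lambda/<function-name>` naming):
```bash
aws logs tail /aws/lambda/haystack-ts-lambda --follow
```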
If the output is empty or malformed, inspect three things first:
- The handler setting matches the bundled path and exported function exactly (`dist/handler.handler` here).
- Your environment variable exists in the Lambda configuration.
- The bundled file contains your Haystack code and dependencies.
For local debugging, run the built file directly with mock input before deploying again.
```ts
// debug.ts: exercise the pipeline directly with a mock input before redeploying.
import { runPrompt } from "./pipeline.js";

const answer = await runPrompt("Explain what Lambda cold starts are.");
console.log(answer);
```
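Run it with `tsx` (the file name `debug.ts` is just a convention here):
```bash
npx tsx src/debug.ts
```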
Next Steps
- Add retrieval by connecting a document store before the generator, then pass retrieved context into the prompt (first sketch below).
- Move secrets into AWS Secrets Manager instead of plain environment variables (second sketch below).
- Put API Gateway in front of Lambda so you can expose this as an HTTP endpoint with auth and throttling.
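A hypothetical sketch of the retrieval wiring, assuming the TypeScript packages mirror Python Haystack's component/connection model; the package paths, class names, and `connect()` signatures below are assumptions, not confirmed APIs:
```ts
import { Pipeline } from "@haystack-ai/core";
// Hypothetical imports: exact package and class names are assumptions.
import { InMemoryDocumentStore, InMemoryBM25Retriever } from "@haystack-ai/in-memory";
import { PromptBuilder } from "@haystack-ai/builders";
import { OpenAIChatGenerator } from "@haystack-ai/openai";

const store = new InMemoryDocumentStore();
const retriever = new InMemoryBM25Retriever({ documentStore: store });
const promptBuilder = new PromptBuilder({
  template: "Answer using these documents:\n{{documents}}\n\nQuestion: {{question}}",
});
const llm = new OpenAIChatGenerator({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

const pipeline = new Pipeline();
pipeline.addComponent("retriever", retriever);
pipeline.addComponent("promptBuilder", promptBuilder);
pipeline.addComponent("llm", llm);

// Route retrieved documents into the prompt, then the built prompt into the LLM.
pipeline.connect("retriever.documents", "promptBuilder.documents");
pipeline.connect("promptBuilder.prompt", "llm.messages");
```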
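For the Secrets Manager step, a minimal sketch using the official `@aws-sdk/client-secrets-manager` package; the secret name `openai/api-key` is a placeholder, and the execution role needs `secretsmanager:GetSecretValue` on it:
```ts
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({});
let cachedKey: string | undefined;

// Fetch once per execution environment; warm invocations reuse the cached value.
export async function getOpenAIKey(): Promise<string> {
  if (!cachedKey) {
    const res = await client.send(
      new GetSecretValueCommand({ SecretId: "openai/api-key" })
    );
    cachedKey = res.SecretString;
  }
  if (!cachedKey) throw new Error("secret openai/api-key has no string value");
  return cachedKey;
}
```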
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.