CrewAI Tutorial (Python): deploying to AWS Lambda for intermediate developers
This tutorial shows you how to package a basic CrewAI workflow as an AWS Lambda function and invoke it through a handler that returns JSON. You need this when you want an agent workflow to run on demand, behind an API, or on a schedule without managing servers.
What You'll Need
- Python 3.11 locally
- An AWS account with permission to create:
  - Lambda functions
  - IAM roles
  - CloudWatch logs
- AWS CLI configured locally
- Python packages: crewai, crewai-tools, boto3, and python-dotenv (for local testing)
- An LLM API key, such as OPENAI_API_KEY
- A deployment package or container image strategy
- Basic familiarity with:
  - CrewAI agents, tasks, and crews
  - AWS Lambda handler structure
Step-by-Step
- Start with a minimal CrewAI project structure that keeps your Lambda handler separate from your agent definitions. This makes local testing easier and keeps the Lambda entrypoint thin.
# app.py
import os

from crewai import Agent, Task, Crew, Process
from crewai.llm import LLM


def build_crew():
    llm = LLM(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])
    researcher = Agent(
        role="Researcher",
        goal="Find concise answers",
        backstory="You are careful and factual.",
        llm=llm,
        verbose=False,
    )
    task = Task(
        description="Write a one-sentence summary of AWS Lambda for Python developers.",
        expected_output="A concise summary.",
        agent=researcher,
    )
    return Crew(agents=[researcher], tasks=[task], process=Process.sequential)
- Add a Lambda handler that builds the crew at runtime and returns a JSON response. Keep the output small: synchronous Lambda invocations cap the response payload at 6 MB, and a compact, predictable body is easier for callers to parse.
# lambda_function.py
import json

from app import build_crew


def lambda_handler(event, context):
    crew = build_crew()
    result = crew.kickoff()
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "message": str(result),
            "input": event,
        }),
    }
- Create a local dependency file and install packages into a deployment folder. For Lambda, you want your runtime dependencies bundled explicitly instead of relying on your local machine state.
mkdir -p build
pip install --upgrade pip
pip install crewai crewai-tools boto3 python-dotenv -t build
cp app.py lambda_function.py build/
cd build
zip -r ../crewai-lambda.zip .
cd ..
- Create the Lambda function in AWS with an IAM role that can write logs. If you already have a deployment pipeline, use the same handler name and runtime so the code path stays identical between local and cloud execution.
aws iam create-role \
--role-name crewai-lambda-role \
--assume-role-policy-document '{
"Version":"2012-10-17",
"Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]
}'
aws iam attach-role-policy \
--role-name crewai-lambda-role \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
ROLE_ARN=$(aws iam get-role --role-name crewai-lambda-role --query 'Role.Arn' --output text)
aws lambda create-function \
--function-name crewai-demo \
--runtime python3.11 \
--handler lambda_function.lambda_handler \
--role "$ROLE_ARN" \
--zip-file fileb://crewai-lambda.zip \
--timeout 30 \
--memory-size 512
- Set your environment variables in Lambda so the CrewAI LLM can authenticate. Do this in the console or through the CLI; without the key, the agent will fail before it can make any model call.
aws lambda update-function-configuration \
--function-name crewai-demo \
--environment "Variables={OPENAI_API_KEY=$OPENAI_API_KEY}"
- Invoke the function and inspect the response payload. For production use, keep your task output structured so downstream systems can parse it without brittle string handling.
aws lambda invoke \
  --function-name crewai-demo \
  --cli-binary-format raw-in-base64-out \
  --payload '{"request_id":"12345","source":"cli"}' \
  response.json
cat response.json
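When a downstream system consumes this response, remember that the `body` field is a JSON string, so it needs a second decode step. A minimal sketch of that parsing, using a hard-coded payload shaped like the handler's return value instead of reading response.json:

```python
import json

# Stand-in for the contents of response.json, shaped like the handler's return value.
raw = {
    "statusCode": 200,
    "headers": {"Content-Type": "application/json"},
    "body": json.dumps({
        "message": "AWS Lambda runs Python functions on demand.",
        "input": {"request_id": "12345", "source": "cli"},
    }),
}

# The body is a JSON *string*, so decode it before reading fields.
body = json.loads(raw["body"])
print(body["message"])
print(body["input"]["request_id"])  # correlate the response with the original request
```

Echoing the request's `request_id` back in the body, as the handler does via `"input": event`, is what makes that last correlation step possible.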
Testing It
Run the same code locally first by importing lambda_handler from a Python shell or test file and passing in a mock event dictionary. That catches packaging problems before you burn time debugging IAM or runtime issues in AWS.
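A minimal sketch of such a local check, with the crew stubbed out so no API key or network call is needed; the handler body is reproduced inline here so the file runs standalone, but in your project you would import lambda_handler from lambda_function and patch build_crew instead:

```python
# test_local.py — check the handler contract with a mock event before deploying.
import json
from unittest.mock import MagicMock

# Stand-in for build_crew from app.py: kickoff returns a fixed string.
fake_crew = MagicMock()
fake_crew.kickoff.return_value = "stubbed summary"
build_crew = lambda: fake_crew

# Same handler body as lambda_function.py, inlined so this sketch is standalone.
def lambda_handler(event, context):
    crew = build_crew()
    result = crew.kickoff()
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": str(result), "input": event}),
    }

# Mock event mirroring what the CLI invoke sends; context can be None locally.
response = lambda_handler({"request_id": "local-test", "source": "shell"}, None)
body = json.loads(response["body"])
assert response["statusCode"] == 200
assert body["message"] == "stubbed summary"
print("handler contract OK")
```

Because the crew is stubbed, this validates the packaging and response shape, not the model output; swap the stub for the real build_crew once the contract checks pass.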
In CloudWatch Logs, look for timeout errors, missing environment variables, or dependency import failures. If the function runs but returns empty or malformed output, reduce the scope of your task and make sure your model call is actually being reached.
For better reliability, test with a short prompt and a fixed model like gpt-4o-mini. Longer agent loops are more likely to hit Lambda timeout limits unless you increase memory and timeout deliberately.
Next Steps
- Move from inline API keys to AWS Secrets Manager or SSM Parameter Store.
- Replace raw string output with structured JSON using Pydantic models.
- Add API Gateway in front of Lambda so external clients can call your CrewAI workflow over HTTP.
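To sketch the structured-output direction: define a Pydantic model for the task result and serialize the validated object in the handler instead of `str(result)`. The `output_pydantic` hook mentioned in the comment is how CrewAI tasks attach a schema in recent versions — confirm it against the crewai version you pin; the Summary model itself is illustrative:

```python
from pydantic import BaseModel

# Illustrative schema for the task result; adjust fields to your workflow.
class Summary(BaseModel):
    topic: str
    summary: str

# In CrewAI you would attach the schema to the task, e.g.
#   Task(..., output_pydantic=Summary)   # verify against your pinned crewai version
# and then serialize the validated result in the handler:
result = Summary(topic="AWS Lambda", summary="Runs Python code on demand.")
body = result.model_dump_json()  # Pydantic v2 serialization API
print(body)
```

Downstream systems can then validate the payload against the same model rather than string-matching on free-form agent text.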
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.