CrewAI Tutorial (Python): deploying to AWS Lambda for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to package a CrewAI Python app for AWS Lambda, invoke it through a Lambda handler, and keep it deployable without dragging in local-only assumptions. You need this when you want your agent workflow behind an API, triggered by events, or run on demand inside AWS instead of a long-lived server.

What You'll Need

  • Python 3.11
  • AWS account with permission to create:
    • Lambda function
    • IAM role
    • CloudWatch logs
  • crewai
  • crewai-tools
  • boto3 only if you plan to invoke Lambda from Python locally
  • An LLM API key, such as:
    • OPENAI_API_KEY
  • A basic CrewAI setup with:
    • at least one Agent
    • at least one Task
    • one Crew
  • AWS CLI configured locally if you want to deploy from your machine

Step-by-Step

  1. Start with a minimal CrewAI workflow that can run inside a single function call. Keep the agent instructions tight and avoid any dependency on web servers, background threads, or notebooks.
from crewai import Agent, Task, Crew, Process

def build_crew():
    analyst = Agent(
        role="Research Analyst",
        goal="Summarize the input clearly",
        backstory="You turn raw text into concise operational summaries.",
        verbose=False,
    )

    task = Task(
        description="Summarize this request: {request}",
        expected_output="A short summary with key points.",
        agent=analyst,
    )

    return Crew(
        agents=[analyst],
        tasks=[task],
        process=Process.sequential,
        verbose=False,
    )
  2. Add a Lambda handler that accepts API Gateway-style events and returns JSON. This is the boundary you deploy; everything else stays in plain Python functions.
import json

def lambda_handler(event, context):
    body = event.get("body") or "{}"
    payload = json.loads(body) if isinstance(body, str) else body

    request_text = payload.get("request", "No request provided")
    crew = build_crew()

    result = crew.kickoff(inputs={"request": request_text})

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "result": str(result),
            "request": request_text,
        }),
    }
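If you later put this behind an API Gateway proxy integration, the event body may arrive base64-encoded. A small helper keeps the handler tolerant of both shapes (a sketch; `parse_event_body` is a name introduced here, not part of CrewAI or AWS):

```python
import base64
import json

def parse_event_body(event):
    """Extract the JSON payload from an API Gateway-style event.

    API Gateway sets isBase64Encoded on proxy events whose body was
    base64-encoded; direct test invocations usually pass a plain JSON string.
    """
    body = event.get("body") or "{}"
    if event.get("isBase64Encoded"):
        body = base64.b64decode(body).decode("utf-8")
    return json.loads(body) if isinstance(body, str) else body
```

In `lambda_handler`, `payload = parse_event_body(event)` can then replace the two body-parsing lines.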
  3. Make the code production-safe for Lambda by reading secrets from environment variables and failing fast if they are missing. Lambda is stateless, so do not rely on local config files or interactive auth flows.
import os

def validate_env():
    # Fail fast on cold start; the key arrives via the function's
    # environment variables, so there is nothing to write back.
    if not os.getenv("OPENAI_API_KEY"):
        raise RuntimeError("Missing OPENAI_API_KEY environment variable")

validate_env()
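Lambda keeps the execution environment alive between warm invocations, so anything cached at module scope is reused. Caching the crew avoids reconstructing agents on every request. A minimal sketch of the pattern (the stub and its counter are hypothetical stand-ins for `build_crew` from step 1, only there to make the caching visible):

```python
import functools

# Hypothetical stand-in for build_crew() from step 1; the counter just
# makes the caching behavior observable.
build_count = 0

def build_crew_stub():
    global build_count
    build_count += 1
    return {"crew": build_count}

@functools.lru_cache(maxsize=1)
def get_crew():
    # Runs once per execution environment; warm invocations reuse the result.
    return build_crew_stub()

first = get_crew()
second = get_crew()
```

In the handler, swapping `crew = build_crew()` for a cached `get_crew()` wrapping the real `build_crew` means cold starts pay the construction cost once.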
  4. Package dependencies into a Lambda deployment artifact. The simplest reliable path is to install everything into a build directory and zip that directory together with your handler file.
mkdir -p build
pip install --upgrade pip
pip install crewai crewai-tools -t build

cp app.py build/
cd build
zip -r ../crewai-lambda.zip .
cd ..
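If you build on macOS or Windows, pip may pull native wheels that will not run on Lambda's Linux runtime. pip can be asked for Linux wheels explicitly (a sketch: the platform tag matches Lambda's x86_64 runtime; swap in an aarch64 tag for Graviton functions):

```shell
pip install crewai crewai-tools \
  --platform manylinux2014_x86_64 \
  --only-binary=:all: \
  --target build
```

pip requires `--only-binary=:all:` whenever `--platform` is set, so any dependency without a prebuilt wheel will surface as an install error here rather than as an import error in Lambda.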
  5. Create the Lambda function and point it at your handler. Use Python 3.11 and give it enough memory and timeout for LLM calls; 30 seconds is usually too short for anything beyond trivial prompts.
aws lambda create-function \
  --function-name crewai-tutorial \
  --runtime python3.11 \
  --handler app.lambda_handler \
  --role arn:aws:iam::123456789012:role/lambda-execution-role \
  --timeout 60 \
  --memory-size 1024 \
  --zip-file fileb://crewai-lambda.zip \
  --environment Variables="{OPENAI_API_KEY=your-api-key}"
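Subsequent deploys should update the existing function rather than recreate it; after rebuilding the zip:

```shell
aws lambda update-function-code \
  --function-name crewai-tutorial \
  --zip-file fileb://crewai-lambda.zip
```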
  6. Invoke the function with a test payload and inspect the response in CloudWatch if something fails. If you are using API Gateway later, keep the event shape compatible with the handler you wrote here.
aws lambda invoke \
  --function-name crewai-tutorial \
  --cli-binary-format raw-in-base64-out \
  --payload '{"body":"{\"request\":\"Summarize customer onboarding blockers\"}"}' \
  response.json

cat response.json

Testing It

Run a local smoke test first by importing lambda_handler directly and passing a mock event. That catches packaging mistakes before you pay for Lambda invocations.

In AWS, check the function logs in CloudWatch for import errors, missing environment variables, or model provider failures. If the function times out, increase memory first; Lambda CPU scales with memory, so this often helps more than just raising timeout.
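With AWS CLI v2, tailing the function's log group from a terminal is usually faster than the CloudWatch console:

```shell
aws logs tail /aws/lambda/crewai-tutorial --follow --since 10m
```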

If your output comes back as an object string instead of structured JSON, wrap the result in str(...) as shown above or normalize it before returning it. For API Gateway integrations, make sure the response has statusCode, headers, and stringified body.
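In recent CrewAI versions, kickoff returns a result object rather than a plain string. A small normalizer keeps the response JSON-friendly (a sketch; the `json_dict` and `raw` attribute names reflect recent crewai releases, so check your installed version):

```python
def normalize_result(result):
    """Coerce a crew result into something json.dumps can handle.

    Prefers structured output when the result object exposes it,
    and falls back to str(...) otherwise.
    """
    for attr in ("json_dict", "raw"):
        value = getattr(result, attr, None)
        if value is not None:
            return value
    return str(result)
```

The handler can then return `json.dumps({"result": normalize_result(result), ...})` instead of forcing everything through `str(...)`.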

Next Steps

  • Add a tool layer with crewai_tools for controlled access to internal APIs or S3.
  • Move secrets to AWS Secrets Manager instead of plain environment variables.
  • Add API Gateway or EventBridge so the Lambda can be triggered by HTTP requests or scheduled jobs.


By Cyprian Aarons, AI Consultant at Topiax.
