AutoGen Tutorial (Python): deploying to AWS Lambda for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to package a small AutoGen-based Python agent and deploy it as an AWS Lambda function. You’d use this when you want an LLM-backed workflow to run on-demand behind an API, without managing servers.

What You'll Need

  • Python 3.11
  • AWS account with permission to create:
    • Lambda functions
    • IAM roles
    • CloudWatch Logs
  • AWS CLI configured locally
  • pip, venv, and zip
  • An OpenAI API key exported as OPENAI_API_KEY
  • These Python packages:
    • pyautogen
    • openai
  • Basic familiarity with AutoGen’s AssistantAgent and UserProxyAgent

Step-by-Step

  1. Start by creating a clean project and installing the dependencies into a local folder you can ship to Lambda. The important part here is that Lambda needs a zip file with your code plus all third-party packages.
mkdir autogen-lambda-demo
cd autogen-lambda-demo

python3.11 -m venv .venv
source .venv/bin/activate

pip install --upgrade pip
pip install pyautogen openai -t package/
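
One caveat: if you package on macOS or Windows, pip may pull wheels compiled for your laptop rather than for Lambda's Linux runtime, and compiled dependencies (pydantic, for example) will then fail to import. Standard pip flags let you request Linux wheels explicitly; the platform tag below assumes an x86_64 Lambda.

# Optional: force Linux-compatible wheels when packaging from macOS/Windows.
pip install pyautogen openai \
  --platform manylinux2014_x86_64 \
  --python-version 3.11 \
  --only-binary=:all: \
  -t package/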
  2. Create the Lambda handler. This example runs a simple AutoGen conversation where the assistant answers a user prompt using OpenAI. The handler reads the prompt from the event payload and returns the assistant’s final message as JSON.
import json
import os

from autogen import AssistantAgent, UserProxyAgent


def lambda_handler(event, context):
    # Read the prompt from the invocation event, with a sensible default.
    prompt = event.get("prompt", "Write a one-line summary of AWS Lambda for developers.")

    llm_config = {
        "config_list": [
            {
                "model": "gpt-4o-mini",
                "api_key": os.environ["OPENAI_API_KEY"],
            }
        ],
        "temperature": 0,
        # Lambda's filesystem is read-only outside /tmp, so disable
        # AutoGen's on-disk LLM cache (it writes to ./.cache by default).
        "cache_seed": None,
    }

    assistant = AssistantAgent(
        name="assistant",
        llm_config=llm_config,
    )

    # The proxy only relays the prompt: no human input, no code execution,
    # and no auto-replies, so the chat ends after the assistant answers.
    user_proxy = UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",
        code_execution_config=False,
        max_consecutive_auto_reply=0,
    )

    chat_result = user_proxy.initiate_chat(
        assistant,
        message=prompt,
    )

    # The last entry in chat_history is the assistant's final message.
    return {
        "statusCode": 200,
        "body": json.dumps(
            {
                "prompt": prompt,
                "reply": chat_result.chat_history[-1]["content"],
            }
        ),
    }
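
For reference, an invocation event like {"prompt": "What is Lambda?"} produces a response of this shape; the reply text itself will vary from run to run:

{
    "statusCode": 200,
    "body": "{\"prompt\": \"What is Lambda?\", \"reply\": \"...\"}"
}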
  3. Add a small local test harness before deploying. This catches packaging mistakes early and confirms your AutoGen import path works in the same process Lambda will use.
if __name__ == "__main__":
    # Fail fast if the key is missing rather than deep inside the OpenAI SDK.
    assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY before running"
    response = lambda_handler(
        {"prompt": "Explain why Lambda is useful for event-driven AI workflows."},
        None,
    )
    print(response)
  4. Package the function for Lambda. Copy your handler file into the deployment root, include installed dependencies, then zip everything together. Keep the handler filename simple; in this example we’ll use lambda_function.py.
cp lambda_function.py package/

cd package
zip -r ../autogen-lambda.zip .
cd ..
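
Before uploading, it's worth confirming the zip layout, since a nested handler file is the most common cause of import errors later. Your handler must sit at the archive root:

# lambda_function.py should be listed with no leading directory.
unzip -l autogen-lambda.zip | head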
  5. Create an IAM role and deploy the function. Use Python 3.11 on Lambda, attach basic execution logging, and set your OpenAI key as an environment variable.
aws iam create-role \
  --role-name autogen-lambda-role \
  --assume-role-policy-document '{
    "Version":"2012-10-17",
    "Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]
  }'

aws iam attach-role-policy \
  --role-name autogen-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

ROLE_ARN=$(aws iam get-role --role-name autogen-lambda-role --query 'Role.Arn' --output text)

# New roles can take a few seconds to propagate in IAM; wait briefly
# (or retry create-function if it fails with an assume-role error).
sleep 10

aws lambda create-function \
  --function-name autogen-demo \
  --runtime python3.11 \
  --handler lambda_function.lambda_handler \
  --role "$ROLE_ARN" \
  --zip-file fileb://autogen-lambda.zip \
  --timeout 30 \
  --memory-size 512 \
  --environment Variables="{OPENAI_API_KEY=$OPENAI_API_KEY}"
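
On later deploys you don't need create-function again; rebuild the zip, push the new code, and wait for the update to finish:

aws lambda update-function-code \
  --function-name autogen-demo \
  --zip-file fileb://autogen-lambda.zip

aws lambda wait function-updated --function-name autogen-demo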
  6. Invoke the function and inspect the output; check CloudWatch Logs if something fails. If you get a timeout or import error, it’s usually either a missing dependency in the zip or a model/API key issue.
# AWS CLI v2 expects a base64 payload by default, so pass the JSON raw:
aws lambda invoke \
  --function-name autogen-demo \
  --cli-binary-format raw-in-base64-out \
  --payload '{"prompt":"List three reasons to use AutoGen inside Lambda."}' \
  response.json

cat response.json
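
If you have jq installed, you can pull just the reply out of the wrapped body:

jq -r '.body | fromjson | .reply' response.json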

Testing It

Run the module locally first with python lambda_function.py to verify your dependency install and handler logic before touching AWS. Then invoke the deployed function with aws lambda invoke using a real prompt payload.

If you see ImportModuleError, check that lambda_function.py is at the root of the zip and that package/ contains all installed libraries. If you see auth errors from OpenAI, confirm OPENAI_API_KEY is set both locally and in the Lambda environment configuration.

For runtime issues, open CloudWatch Logs for /aws/lambda/autogen-demo. In practice, most failures come from packaging mistakes, not AutoGen itself.
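
You can follow those logs from a terminal with the AWS CLI v2 tail command:

aws logs tail /aws/lambda/autogen-demo --follow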

Next Steps

  • Move secrets out of plain environment variables and into AWS Secrets Manager.
  • Put API Gateway in front of Lambda so you can call this agent over HTTPS.
  • Add structured output parsing so your agent returns JSON instead of free-form text (a minimal sketch follows this list).
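
As a starting point for that last bullet, here is a minimal, hypothetical parse_reply helper (not part of the deployed code above) that attempts to decode the assistant's reply as JSON and degrades gracefully when it isn't:

import json

def parse_reply(reply: str) -> dict:
    """Best-effort JSON extraction from a model reply."""
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        # Models often wrap JSON in markdown fences; strip them and retry.
        stripped = reply.strip()
        stripped = stripped.removeprefix("```json").removeprefix("```")
        stripped = stripped.removesuffix("```").strip()
        try:
            return json.loads(stripped)
        except json.JSONDecodeError:
            # Fall back to wrapping the raw text so callers always get a dict.
            return {"text": reply}

You would pair this with a system message instructing the assistant to answer in JSON, then run parse_reply over chat_result.chat_history[-1]["content"] before returning the body.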

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

