How to Integrate OpenAI for Insurance with AWS Lambda for AI Agents

By Cyprian Aarons · Updated 2026-04-21
Tags: openai-for-insurance, aws-lambda, ai-agents

Connecting OpenAI for insurance with AWS Lambda gives you a clean way to run insurance-specific AI logic without standing up a permanent service. Lambda handles the event-driven execution, while OpenAI handles claim triage, policy Q&A, document summarization, and agent assistance.

This pattern works well when your AI agent needs to react to inbound events like FNOL submissions, broker emails, policy changes, or claims attachments. You get low operational overhead, pay-per-use execution, and a simple path to production.

Prerequisites

  • An AWS account with permission to create and invoke Lambda functions
  • AWS CLI configured locally:
    • aws configure
  • Python 3.10+
  • boto3 installed for AWS SDK access
  • openai Python package installed
  • An OpenAI API key stored as an environment variable:
    • OPENAI_API_KEY
  • A Lambda execution role with permissions for:
    • CloudWatch Logs
    • Any downstream services you call from the function
  • If you are using insurance-specific OpenAI endpoints or models in your org setup:
    • The correct model name and endpoint configuration from your platform admin

Integration Steps

1) Install dependencies and define your Lambda handler

Keep the function small. Lambda should orchestrate the request, call OpenAI for the insurance task, then return structured output.

import json
import os
from openai import OpenAI

# Create the client at module load so warm invocations reuse it
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def lambda_handler(event, context):
    # Direct invokes deliver a dict; raw string payloads need parsing
    payload = event if isinstance(event, dict) else json.loads(event)
    claim_text = payload.get("claim_text", "")

    response = client.responses.create(
        model="gpt-4.1-mini",
        input=f"""
You are an insurance assistant.
Classify this claim into one of: auto_damage, property_damage, bodily_injury, fraud_risk, other.
Return JSON with fields: category, summary, next_action.

Claim:
{claim_text}
"""
    )

    return {
        "statusCode": 200,
        "body": response.output_text
    }

This is the core integration point. In production, pin the model you have approved for insurance workflows and keep outputs structured.
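
One way to pin the model is to read it from configuration instead of hard-coding it. A minimal sketch, assuming a hypothetical OPENAI_MODEL environment variable set on the function:

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# OPENAI_MODEL is a hypothetical variable; set it in the function config
# so the approved model can change without a code deploy.
MODEL = os.environ.get("OPENAI_MODEL", "gpt-4.1-mini")

def triage(prompt: str) -> str:
    response = client.responses.create(model=MODEL, input=prompt)
    return response.output_text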

2) Add structured JSON output for agent workflows

For AI agents, free-form text is a weak contract. Return machine-readable JSON so downstream steps can route claims, trigger reviews, or notify adjusters.

import json
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def classify_claim(claim_text: str) -> dict:
    response = client.responses.create(
        model="gpt-4.1-mini",
        input=[
            {
                "role": "system",
                "content": "You are an insurance claims triage assistant. Output valid JSON only."
            },
            {
                "role": "user",
                "content": f"""
Classify this claim and return JSON:
{{
  "category": "...",
  "summary": "...",
  "next_action": "..."
}}

Claim:
{claim_text}
"""
            }
        ]
    )

    return json.loads(response.output_text)

If your org uses a managed “OpenAI for insurance” deployment with custom policies or guardrails, this is where you keep the same interface but swap the model name or base client config.
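
Even with a "JSON only" instruction, models occasionally wrap output in Markdown fences or add stray text, and json.loads will raise on it. One defensive option, sketched here with an illustrative fallback, is to strip fences and fail closed to manual review instead of crashing mid-pipeline:

import json

def parse_claim_json(raw: str) -> dict:
    # Strip Markdown code fences the model may add despite instructions
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`")
        # Drop a leading "json" language tag if present
        if text.lower().startswith("json"):
            text = text[4:]
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fail closed: unparseable output goes to a human instead of raising
        return {"category": "other", "summary": raw, "next_action": "manual_review"}

The fallback mirrors the fields the prompt requests, so downstream consumers see the same contract either way.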

3) Invoke the Lambda function from another service or agent

Your AI agent can call Lambda directly through the AWS SDK. That keeps orchestration outside the function and lets you chain multiple tools.

import json
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

event = {
    "claim_text": (
        "Customer reports water damage in kitchen after pipe burst overnight. "
        "Photos attached. No injuries reported."
    )
}

response = lambda_client.invoke(
    FunctionName="insurance-claim-triage",
    InvocationType="RequestResponse",
    Payload=json.dumps(event).encode("utf-8")
)

result = json.loads(response["Payload"].read())
print(result["statusCode"])
print(result["body"])

This is the standard boto3.client("lambda").invoke(...) flow. Use it when an upstream agent needs synchronous results before deciding the next action.
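
When the agent does not need the result inline, say the triage output lands on a queue or event bus anyway, a fire-and-forget call avoids blocking. A minimal sketch using the asynchronous Event invocation type:

import json
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# "Event" invokes asynchronously: Lambda queues the request, returns
# immediately, and retries on failure per the function's async config.
resp = lambda_client.invoke(
    FunctionName="insurance-claim-triage",
    InvocationType="Event",
    Payload=json.dumps({"claim_text": "Windshield cracked by road debris."}).encode("utf-8")
)
print(resp["StatusCode"])  # 202 means the invoke was accepted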

4) Return a clean response for downstream automation

Make sure your Lambda returns consistent JSON so Step Functions, EventBridge consumers, or an agent controller can consume it reliably.

import json
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def lambda_handler(event, context):
    claim_text = event.get("claim_text", "")

    result = client.responses.create(
        model="gpt-4.1-mini",
        input=f"Summarize this insurance claim in one sentence and recommend next action:\n{claim_text}"
    )

    body = {
        "request_id": getattr(context, "aws_request_id", None),
        "analysis": result.output_text,
        "source": "openai-insurance-lambda"
    }

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body)
    }

That aws_request_id is useful for traceability when you need to audit decisions later.
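
The same envelope works for failures. Here is a sketch of wrapping the OpenAI call so errors come back in the same shape rather than as unhandled exceptions (the 502 status and error field are illustrative choices, not a fixed convention):

import json
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def lambda_handler(event, context):
    claim_text = event.get("claim_text", "")
    try:
        result = client.responses.create(
            model="gpt-4.1-mini",
            input=f"Summarize this insurance claim in one sentence and recommend next action:\n{claim_text}"
        )
        body = {"request_id": context.aws_request_id, "analysis": result.output_text}
        status = 200
    except Exception as exc:
        # Same shape on failure so Step Functions can branch on statusCode
        body = {"request_id": context.aws_request_id, "error": str(exc)}
        status = 502
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body)
    }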

5) Deploy to AWS Lambda

Package your code together with its dependencies, since the openai package is not included in the Lambda runtime, and deploy with either a zip upload or IaC. For quick validation:

pip install openai -t package/
cp lambda_function.py package/
cd package && zip -r ../function.zip . && cd ..

aws lambda create-function \
  --function-name insurance-claim-triage \
  --runtime python3.11 \
  --handler lambda_function.lambda_handler \
  --role arn:aws:iam::123456789012:role/lambda-execution-role \
  --zip-file fileb://function.zip \
  --timeout 30 \
  --environment Variables="{OPENAI_API_KEY=your-key-here}"

For production, move the secret into AWS Secrets Manager or Systems Manager Parameter Store instead of plaintext environment variables.
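
A sketch of that swap, assuming the key lives in Secrets Manager under a hypothetical name like openai/api-key and the execution role has secretsmanager:GetSecretValue on it:

import boto3
from openai import OpenAI

# Fetch once at module load; the value is cached for the lifetime of the
# execution environment, so only cold starts pay the lookup cost.
secrets = boto3.client("secretsmanager")
secret = secrets.get_secret_value(SecretId="openai/api-key")  # hypothetical name
client = OpenAI(api_key=secret["SecretString"])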

Testing the Integration

Use a direct invoke test payload first. That tells you whether Lambda can call OpenAI and whether your response format is stable.

import json
import boto3

client = boto3.client("lambda", region_name="us-east-1")

test_event = {
    "claim_text": (
        "The insured hit a deer on Route 9 at night. Front bumper damaged. "
        "No police report filed."
    )
}

resp = client.invoke(
    FunctionName="insurance-claim-triage",
    InvocationType="RequestResponse",
    Payload=json.dumps(test_event).encode()
)

payload = json.loads(resp["Payload"].read())
print(payload)

Expected output:

{
  "statusCode": 200,
  "body": "{\"category\":\"auto_damage\",\"summary\":\"Vehicle collision with wildlife causing front bumper damage.\",\"next_action\":\"Route to auto claims review\"}"
}

If you get a timeout, check Lambda memory/timeout settings and confirm outbound internet access if your function runs inside a VPC.
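
Timeouts are common here because the model round trip can take several seconds. If the function was created with the 3-second default, a sketch of raising it with boto3 (the values are illustrative starting points):

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Give the function room for the model call; more memory also
# allocates proportionally more CPU.
lambda_client.update_function_configuration(
    FunctionName="insurance-claim-triage",
    Timeout=30,      # seconds
    MemorySize=512   # MB
)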

Real-World Use Cases

  • FNOL triage
    • Classify first notice of loss submissions and route them to auto, property, or bodily injury queues (see the routing sketch after this list).
  • Policy servicing assistant
    • Answer coverage questions from brokers or policyholders using approved policy language.
  • Claims document summarization
    • Extract key facts from adjuster notes, repair estimates, and incident descriptions before human review.
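
For the FNOL case, routing can be as simple as mapping the model's category field to a queue. A minimal sketch, assuming hypothetical SQS queue URLs and the classify_claim output from step 2:

import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URLs; substitute your account's queues
QUEUES = {
    "auto_damage": "https://sqs.us-east-1.amazonaws.com/123456789012/auto-claims",
    "property_damage": "https://sqs.us-east-1.amazonaws.com/123456789012/property-claims",
    "bodily_injury": "https://sqs.us-east-1.amazonaws.com/123456789012/injury-claims",
}
MANUAL_REVIEW = "https://sqs.us-east-1.amazonaws.com/123456789012/manual-review"

def route_claim(result: dict) -> None:
    # Unknown categories and fraud_risk fall through to human review
    queue_url = QUEUES.get(result.get("category"), MANUAL_REVIEW)
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(result))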

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
