How to Integrate CrewAI for Insurance with FastAPI for AI Agents

By Cyprian Aarons · Updated 2026-04-22
Tags: crewai-for-insurance, fastapi, ai-agents

Combining CrewAI for insurance with FastAPI gives you a clean way to expose multi-agent workflows as production APIs. That matters when you need an AI system that can triage claims, summarize policy documents, or route underwriting tasks through a standard HTTP interface your internal apps can call.

Prerequisites

  • Python 3.10+
  • fastapi
  • uvicorn
  • crewai
  • Access to your LLM provider credentials in environment variables
  • Familiarity with the core CrewAI building blocks:
    • Agent
    • Task
    • Crew
    • Process
  • Basic understanding of REST APIs and JSON payloads

Install the dependencies:

pip install fastapi uvicorn crewai pydantic

Set your API key:

export OPENAI_API_KEY="your-key"
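A missing credential only surfaces once an agent makes its first model call, which turns into a confusing mid-request failure. One way to fail fast instead is a small startup check; this is a sketch, and the variable name assumes the OpenAI provider used above:

```python
import os

def require_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Fail fast at startup if the LLM credential is missing or blank."""
    value = os.environ.get(var_name, "").strip()
    if not value:
        raise RuntimeError(
            f"{var_name} is not set; export it before starting the API."
        )
    return value
```

Call this once at module import time (or in a FastAPI startup hook) so the server refuses to boot with a misconfigured environment.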

Integration Steps

  1. Create your CrewAI insurance agents and tasks

Start by defining the agents that handle insurance-specific work. In a real system, you might split this into intake, policy analysis, and claims review.

from crewai import Agent, Task, Crew, Process

claims_agent = Agent(
    role="Insurance Claims Analyst",
    goal="Review claim submissions and identify missing information",
    backstory="You specialize in insurance claim triage and document review.",
    verbose=True,
)

policy_agent = Agent(
    role="Policy Reviewer",
    goal="Check policy language against a submitted claim",
    backstory="You understand policy terms, exclusions, and coverage conditions.",
    verbose=True,
)

triage_task = Task(
    description=(
        "Review the incoming insurance claim data: {claim_text}. Identify: "
        "1) missing fields, 2) likely coverage concerns, 3) next action."
    ),
    expected_output="A concise triage summary for the claims team.",
    agent=claims_agent,
)

policy_task = Task(
    description=(
        "Compare the claim details against the policy context: {policy_text}. "
        "Flag any exclusions or coverage ambiguities."
    ),
    expected_output="A policy coverage assessment.",
    agent=policy_agent,
)

  2. Wrap the crew in a reusable service function

Keep the CrewAI orchestration out of your route handler. That makes it easier to test and swap models later.

def run_insurance_crew(claim_text: str, policy_text: str):
    crew = Crew(
        agents=[claims_agent, policy_agent],
        tasks=[triage_task, policy_task],
        process=Process.sequential,
        verbose=True,
    )

    result = crew.kickoff(inputs={
        "claim_text": claim_text,
        "policy_text": policy_text,
    })

    return str(result)

This is the key integration point: crew.kickoff() runs the agents against the supplied inputs, and it is what your FastAPI endpoint will ultimately call through run_insurance_crew.

  3. Expose the crew through FastAPI

Now create an API layer that accepts JSON input and returns the agent output.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Insurance AI Agent API")

class ClaimRequest(BaseModel):
    claim_text: str
    policy_text: str

class ClaimResponse(BaseModel):
    analysis: str

@app.post("/analyze-claim", response_model=ClaimResponse)
def analyze_claim(payload: ClaimRequest):
    analysis = run_insurance_crew(
        claim_text=payload.claim_text,
        policy_text=payload.policy_text,
    )
    return ClaimResponse(analysis=analysis)

This gives you a standard HTTP endpoint that any internal system can call.

  4. Add async-friendly execution for production use

CrewAI execution is typically synchronous. If you expect traffic spikes or long-running analysis, move execution into a thread so your FastAPI server stays responsive.

import asyncio

@app.post("/analyze-claim-async", response_model=ClaimResponse)
async def analyze_claim_async(payload: ClaimRequest):
    analysis = await asyncio.to_thread(
        run_insurance_crew,
        payload.claim_text,
        payload.policy_text,
    )
    return ClaimResponse(analysis=analysis)

Use this pattern when claims review can take several seconds because of long context windows or multiple agent steps.
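A related concern is the opposite failure mode: a crew run that hangs on a slow model call and holds the connection open indefinitely. A minimal sketch of a hard ceiling, wrapping the threaded call in asyncio.wait_for (the 60-second default is an arbitrary choice, not a CrewAI or FastAPI default):

```python
import asyncio

async def run_crew_with_timeout(fn, *args, timeout_s: float = 60.0) -> str:
    """Run a blocking crew function in a thread; raise if it exceeds timeout_s."""
    try:
        return await asyncio.wait_for(
            asyncio.to_thread(fn, *args), timeout=timeout_s
        )
    except asyncio.TimeoutError:
        # Surface a controlled error instead of leaving the request hanging.
        raise RuntimeError(f"Crew execution exceeded {timeout_s}s")
```

In the endpoint you would await run_crew_with_timeout(run_insurance_crew, payload.claim_text, payload.policy_text) and map the RuntimeError to an HTTP 504. Note that the worker thread itself is not killed; the timeout only frees the request.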

  5. Run the API locally

Start FastAPI with Uvicorn:

uvicorn main:app --reload --port 8000

If you split code across files, keep the module name aligned with the command: the main in main:app must match the filename of the module that defines your app instance.

Testing the Integration

Use curl or Python requests to verify the endpoint returns an agent-generated response.

import requests

url = "http://127.0.0.1:8000/analyze-claim"

payload = {
    "claim_text": "Customer reports water damage after pipe burst in kitchen.",
    "policy_text": "Policy covers sudden accidental water damage but excludes gradual leaks."
}

response = requests.post(url, json=payload)
print(response.status_code)
print(response.json())

Expected output will look like this:

{
  "analysis": "..."
}

If everything is wired correctly, you should get:

  • HTTP status 200
  • A JSON body containing the combined CrewAI assessment
  • Text that mentions missing information, coverage checks, or next steps depending on your prompts
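When you move from eyeballing JSON to a test suite, it helps to check those three conditions programmatically. A small stdlib-only helper along these lines (the "analysis" field name matches the ClaimResponse model above):

```python
import json

def check_claim_response(status_code: int, body: str) -> str:
    """Validate the endpoint reply and return the analysis text."""
    assert status_code == 200, f"unexpected status: {status_code}"
    data = json.loads(body)
    analysis = data.get("analysis")
    assert isinstance(analysis, str) and analysis.strip(), "empty analysis field"
    return analysis
```

In the requests example above, you would call check_claim_response(response.status_code, response.text) instead of printing the raw body.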

Real-World Use Cases

  • Claims intake triage: Accept FNOL data from a portal, have agents validate completeness, then return a structured summary for adjusters.
  • Policy coverage Q&A: Let customer support systems send policy text plus customer questions to an agent workflow exposed via FastAPI.
  • Underwriting pre-screening: Route application data through multiple agents that check risk signals, document gaps, and escalation triggers before human review.

The practical pattern here is simple: use CrewAI for orchestration and reasoning, then use FastAPI as the contract layer. That gives you an AI agent service that fits into existing insurance systems without inventing a new transport or deployment model.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

