How to Integrate FastAPI for insurance with LangChain for multi-agent systems
Combining FastAPI with LangChain gives you a clean way to expose insurance workflows as API endpoints while letting multiple agents coordinate around claims, underwriting, policy lookup, and document extraction. The pattern is simple: FastAPI handles the request/response boundary, and LangChain orchestrates the reasoning, tool use, and agent handoffs behind it.
This is useful when you need deterministic API contracts for regulated workflows, but still want flexible multi-agent behavior for tasks like triaging a claim, checking policy coverage, and drafting customer responses.
Prerequisites
- Python 3.10+
- `fastapi`
- `uvicorn`
- `langchain`
- `langchain-openai` (or another LangChain model provider)
- `pydantic`
- An OpenAI API key or equivalent LLM provider key
- A basic FastAPI app already approved for your insurance domain
- Familiarity with:
  - FastAPI route decorators like `@app.post()`
  - LangChain tools via `@tool`
  - LangChain agents via `create_openai_tools_agent` or similar
Install the core packages:
```bash
pip install fastapi uvicorn langchain langchain-openai pydantic
```
Integration Steps
1. Define your insurance API contract in FastAPI

Keep the endpoint strict. Insurance systems need predictable payloads, so use Pydantic models for claims, policy numbers, and customer context.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Insurance AI API")

class ClaimRequest(BaseModel):
    claim_id: str
    policy_number: str
    incident_type: str
    description: str

class ClaimResponse(BaseModel):
    status: str
    recommendation: str

@app.post("/claims/triage", response_model=ClaimResponse)
async def triage_claim(payload: ClaimRequest):
    return ClaimResponse(
        status="received",
        recommendation="Pending agent review",
    )
```
2. Wrap insurance operations as LangChain tools

The agent should not guess policy data. Expose concrete functions as tools so the LLM can call them deterministically.

```python
from langchain_core.tools import tool

@tool
def lookup_policy(policy_number: str) -> dict:
    """Fetch policy details by policy number."""
    # Replace with a real DB or core insurance system call.
    return {
        "policy_number": policy_number,
        "coverage_limit": 50000,
        "deductible": 1000,
        "active": True,
        "product_line": "home",
    }

@tool
def estimate_claim_severity(incident_type: str, description: str) -> dict:
    """Estimate severity for an incoming claim."""
    # Placeholder heuristic; swap in a real severity model in production.
    severity = "high" if "fire" in description.lower() else "medium"
    return {
        "incident_type": incident_type,
        "severity": severity,
        "route_to_human": severity == "high",
    }
```
3. Create a LangChain multi-agent workflow

For insurance, a single agent often becomes messy. Use one agent to gather facts and another to produce a recommendation, or keep one agent with multiple tools and clear instructions.

```python
import os

from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
)

prompt = ChatPromptTemplate.from_messages([
    ("system", """
You are an insurance triage assistant.
Use tools before making any recommendation.
If coverage is inactive or severity is high, route to human review.
"""),
    ("human", "{input}"),
    # Required by create_openai_tools_agent: holds intermediate tool
    # calls and results between reasoning steps.
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

tools = [lookup_policy, estimate_claim_severity]
agent = create_openai_tools_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
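The "one agent gathers facts, another recommends" split mentioned above comes down to a strict handoff: the first agent only collects facts, the second only decides. A minimal sketch of that coordination, with plain async functions (`fact_agent`, `decision_agent`, `run_triage` are illustrative stand-ins; in the real system each agent would be an `AgentExecutor.ainvoke` call built as in this step):

```python
import asyncio

# Stand-in for a fact-gathering agent: checks coverage and estimates
# severity, but never makes the final call.
async def fact_agent(claim: dict) -> dict:
    return {
        "active": True,
        "severity": "high" if "fire" in claim["description"].lower() else "medium",
    }

# Stand-in for a decision agent: turns gathered facts into a
# recommendation, with no tool access of its own.
async def decision_agent(facts: dict) -> dict:
    route = (not facts["active"]) or facts["severity"] == "high"
    return {
        "decision": "needs_review" if route else "auto_triaged",
        "route_to_human": route,
    }

# Coordinator: facts first, then the decision, never the reverse.
async def run_triage(claim: dict) -> dict:
    facts = await fact_agent(claim)
    return await decision_agent(facts)

claim = {"description": "Kitchen fire caused smoke damage."}
print(asyncio.run(run_triage(claim)))
# → {'decision': 'needs_review', 'route_to_human': True}
```

The benefit of the split is auditability: each handoff is a concrete dict you can log, which matters in a regulated domain.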
4. Call the LangChain executor from your FastAPI endpoint

This is where the integration happens. FastAPI receives the request; LangChain handles the reasoning; your API returns a structured answer.

```python
from fastapi import HTTPException

@app.post("/claims/triage/agent")
async def triage_claim_with_agent(payload: ClaimRequest):
    try:
        result = await executor.ainvoke({
            "input": f"""
Review this insurance claim:
claim_id={payload.claim_id}
policy_number={payload.policy_number}
incident_type={payload.incident_type}
description={payload.description}
Return a concise triage decision.
"""
        })
        return {
            "claim_id": payload.claim_id,
            "agent_output": result["output"],
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
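Agent runs can stall on slow model or tool calls, so it is worth bounding latency rather than letting a request hang until the client gives up. A minimal sketch using `asyncio.wait_for` (the `slow_agent` stub stands in for `executor.ainvoke`; names and timings are illustrative):

```python
import asyncio

# Stub standing in for executor.ainvoke; pretend the agent takes 5 seconds.
async def slow_agent(payload: dict) -> dict:
    await asyncio.sleep(5)
    return {"output": "done"}

async def invoke_with_timeout(payload: dict, seconds: float = 0.1) -> dict:
    # Bound agent latency so the endpoint can fail fast (e.g. map this
    # to HTTP 504 and route to a human) instead of hanging the request.
    try:
        return await asyncio.wait_for(slow_agent(payload), timeout=seconds)
    except asyncio.TimeoutError:
        return {"output": "timeout", "route_to_human": True}

print(asyncio.run(invoke_with_timeout({"claim_id": "CLM-1"})))
# → {'output': 'timeout', 'route_to_human': True}
```

In the endpoint above you would wrap the `executor.ainvoke` call the same way and pick a timeout that fits your SLA.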
5. Add structured output for downstream systems

Insurance teams usually want machine-readable decisions. Don't return raw prose only; wrap the final response into fields that can be consumed by workflow engines.

```python
from pydantic import BaseModel

class TriageDecision(BaseModel):
    claim_id: str
    decision: str
    route_to_human: bool
    notes: str

@app.post("/claims/triage/structured", response_model=TriageDecision)
async def triage_structured(payload: ClaimRequest):
    result = await executor.ainvoke({
        "input": f"""
Analyze claim {payload.claim_id}.
Policy: {payload.policy_number}
Incident: {payload.incident_type}
Details: {payload.description}
Respond with decision, route_to_human, and notes.
"""
    })
    output = result["output"]
    return TriageDecision(
        claim_id=payload.claim_id,
        decision="needs_review" if "human" in output.lower() else "auto_triaged",
        route_to_human="human" in output.lower(),
        notes=output,
    )
```
Testing the Integration
Run the API:
```bash
uvicorn main:app --reload
```
Send a test request:
```python
import requests

response = requests.post(
    "http://127.0.0.1:8000/claims/triage/agent",
    json={
        "claim_id": "CLM-10021",
        "policy_number": "POL-77881",
        "incident_type": "property_damage",
        "description": "Kitchen fire caused smoke damage across the apartment.",
    },
)
print(response.status_code)
print(response.json())
```
Expected output:
```json
{
  "claim_id": "CLM-10021",
  "agent_output": "This claim should be routed to human review because fire-related damage is high severity..."
}
```
If you want to verify tool usage specifically, run with `verbose=True` in `AgentExecutor` and confirm the agent calls `lookup_policy` and `estimate_claim_severity` before producing the final recommendation.
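Before exercising the full agent loop, you can also unit-test the tool logic itself with no model in the loop. A sketch that reimplements the `estimate_claim_severity` heuristic as a plain function so it runs even without LangChain installed (in your codebase you would test the decorated tool via its `invoke` method instead):

```python
def estimate_claim_severity_plain(incident_type: str, description: str) -> dict:
    # Same heuristic as the @tool version above, minus the decorator.
    severity = "high" if "fire" in description.lower() else "medium"
    return {
        "incident_type": incident_type,
        "severity": severity,
        "route_to_human": severity == "high",
    }

# Fire-related claims must always be flagged for human review.
assert estimate_claim_severity_plain("property_damage", "Kitchen fire")["route_to_human"] is True
# Everything else stays at medium severity and is eligible for auto-triage.
assert estimate_claim_severity_plain("property_damage", "Hail dented the roof")["route_to_human"] is False
print("tool logic checks passed")
```

Cheap tests like these catch routing regressions long before an end-to-end agent run would.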
Real-World Use Cases
- Claims triage orchestration
  - One agent checks coverage.
  - Another estimates severity.
  - A final agent drafts the next action for adjusters.
- Underwriting pre-screening
  - FastAPI exposes applicant data intake.
  - LangChain agents assess risk signals from forms and documents.
  - The system routes borderline cases to underwriters.
- Customer service automation
  - Multi-agent flows answer policy questions, fetch endorsements, and summarize next steps.
  - FastAPI keeps the interface stable for CRM or portal integrations.
The production pattern here is straightforward: keep FastAPI as the control plane and use LangChain as the reasoning layer. That separation makes it easier to audit decisions, swap models later, and keep your insurance APIs stable while your agent logic evolves.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.