How to Integrate FastAPI for insurance with LangChain for production AI
Combining a FastAPI insurance backend with LangChain gives you a clean way to expose production-grade insurance workflows behind HTTP while letting an LLM reason over policy data, claims, and underwriting rules. The practical win is simple: your agent can gather context, call deterministic backend services, and return structured answers without turning your API layer into prompt soup.
Prerequisites
- Python 3.10+
- A FastAPI app already running for your insurance domain
- LangChain installed with the model provider you want to use
- An LLM API key configured in environment variables
- Pydantic models for policy, claim, or customer payloads
- `uvicorn` for local execution
- `httpx` for calling your FastAPI endpoints from LangChain tools
Install the core packages:
```shell
pip install fastapi uvicorn httpx langchain langchain-openai pydantic
```
Integration Steps
1. **Expose insurance workflows as typed FastAPI endpoints**
Your FastAPI layer should own the business logic. Keep endpoints deterministic and return structured JSON that LangChain can consume reliably.
```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Insurance API")

class ClaimRequest(BaseModel):
    policy_id: str
    incident_type: str
    amount: float

class ClaimResponse(BaseModel):
    claim_id: str
    status: str
    payout_estimate: float

@app.post("/claims/estimate", response_model=ClaimResponse)
def estimate_claim(req: ClaimRequest):
    # Replace with underwriting/claims engine logic
    payout = min(req.amount * 0.8, 5000.0)
    return ClaimResponse(
        claim_id=f"CLM-{req.policy_id}",
        status="estimated",
        payout_estimate=payout,
    )
```
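The payout rule in `estimate_claim` is deterministic, so it is worth factoring into a pure function you can unit-test independently of HTTP. A minimal sketch using the illustrative 80% rate and 5000 cap from the endpoint above (`estimate_payout` is a hypothetical helper, not part of the article's API):

```python
def estimate_payout(amount: float, rate: float = 0.8, cap: float = 5000.0) -> float:
    """Reimburse `rate` of the requested amount, capped at `cap`."""
    return min(amount * rate, cap)

# Sanity checks against the endpoint's rule
assert estimate_payout(4200.0) == 3360.0   # 80% of 4200, below the cap
assert estimate_payout(10000.0) == 5000.0  # cap applies
```

Keeping the rule in one pure function also makes it easy to swap in the real claims engine later without touching the route.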
2. **Wrap the FastAPI endpoint as a LangChain tool**
LangChain works best when external systems are exposed as tools. Use `StructuredTool` so the agent gets a typed interface instead of parsing free-form text.
```python
import httpx
from langchain_core.tools import StructuredTool
from pydantic import BaseModel, Field

class EstimateClaimArgs(BaseModel):
    policy_id: str = Field(..., description="Insurance policy identifier")
    incident_type: str = Field(..., description="Type of incident")
    amount: float = Field(..., description="Requested claim amount")

def estimate_claim_tool(policy_id: str, incident_type: str, amount: float) -> dict:
    payload = {
        "policy_id": policy_id,
        "incident_type": incident_type,
        "amount": amount,
    }
    response = httpx.post(
        "http://localhost:8000/claims/estimate", json=payload, timeout=10.0
    )
    response.raise_for_status()
    return response.json()

claim_tool = StructuredTool.from_function(
    func=estimate_claim_tool,
    name="estimate_claim",
    description="Estimate an insurance claim payout using the claims service",
    args_schema=EstimateClaimArgs,
)
```
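Because the tool crosses a network boundary, transient failures are worth absorbing before they reach the agent as tool errors. One option is a small retry-with-backoff wrapper around the HTTP call; `with_retries` below is a hypothetical helper sketched for illustration, not a LangChain or httpx API:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.5):
    """Call `fn`, retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage inside the tool function, wrapping the claims call:
#   response = with_retries(lambda: httpx.post(url, json=payload, timeout=10.0))
```

In practice you would retry only on connection errors and 5xx responses, not on validation failures, so the agent gets fast feedback when its arguments are wrong.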
3. **Build a LangChain agent that can call the insurance API**
Use a chat model plus tool calling so the agent decides when to invoke the claims endpoint. This is the pattern you want in production: the model reasons, your API executes.
```python
import os

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
    temperature=0,
)

# The agent_scratchpad placeholder is required by create_tool_calling_agent;
# it is where intermediate tool calls and tool results are injected.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an insurance assistant. Use tools for any claim estimate."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

tools = [claim_tool]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
4. **Expose the agent through FastAPI for downstream consumers**
This gives you one HTTP surface area for both deterministic endpoints and agentic workflows. In practice, your frontend or internal ops app calls this route and gets a natural-language answer backed by real service calls.
```python
from fastapi import Body

@app.post("/agent/claim-help")
def claim_help(message: str = Body(..., embed=True)):
    result = executor.invoke({"input": message})
    return {"answer": result["output"]}
```
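Before handing user text to the agent, it is also worth bounding the input at the FastAPI layer so one oversized request cannot blow up token costs. A minimal sketch replacing the raw `Body` parameter with a Pydantic request model (the `AgentQuery` model and its limits are illustrative assumptions):

```python
from pydantic import BaseModel, Field

class AgentQuery(BaseModel):
    # Reject empty and oversized messages before any LLM call happens
    message: str = Field(..., min_length=1, max_length=2000)

# The route would then accept the model directly:
# @app.post("/agent/claim-help")
# def claim_help(query: AgentQuery):
#     result = executor.invoke({"input": query.message})
#     return {"answer": result["output"]}
```

FastAPI validates the payload against the model and returns a 422 automatically, so malformed requests never reach the executor.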
5. **Keep production boundaries tight**
Do not let LangChain talk directly to databases or internal services unless you control the tool boundary. Put auth, rate limits, retries, and audit logs on the FastAPI side.
A sane production setup usually looks like this:
| Layer | Responsibility |
|---|---|
| FastAPI | Auth, validation, business rules, audit logging |
| LangChain | Tool selection, orchestration, response formatting |
| Insurance engine | Claims logic, policy rules, pricing |
| Observability | Tracing requests across agent + API calls |
Testing the Integration
Run the API:
```shell
uvicorn main:app --reload --port 8000
```
Then test both layers end-to-end:
```python
import httpx

response = httpx.post(
    "http://localhost:8000/agent/claim-help",
    json={"message": "Estimate a claim for policy P123 after hail damage with amount 4200"},
)
print(response.status_code)
print(response.json())
```
Expected output (the exact wording varies from run to run, since the model phrases the answer):

```json
{
  "answer": "The estimated payout for policy P123 is 3360.0."
}
```
If you want to verify the raw claims endpoint directly:
```python
import httpx

resp = httpx.post(
    "http://localhost:8000/claims/estimate",
    json={"policy_id": "P123", "incident_type": "hail damage", "amount": 4200},
)
print(resp.json())
```
Expected output:

```json
{
  "claim_id": "CLM-P123",
  "status": "estimated",
  "payout_estimate": 3360.0
}
```
Real-World Use Cases
- **Claims triage assistant**: Let adjusters ask natural-language questions like “What’s the expected payout?” while LangChain calls your claims services through FastAPI.
- **Policy servicing bot**: Build an agent that checks coverage details, renewal status, deductibles, and endorsements from protected endpoints.
- **Underwriting support workflow**: Route broker submissions through FastAPI validation endpoints and let LangChain summarize missing fields or next steps for underwriters.
The main pattern here is stable API first, agent second. If your FastAPI contract is clean and typed, LangChain becomes useful instead of fragile.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.