How to Integrate FastAPI for insurance with LangChain for AI agents

By Cyprian Aarons · Updated 2026-04-21

Tags: fastapi-for-insurance · langchain · ai-agents

Combining FastAPI for insurance with LangChain gives you a clean way to expose insurance workflows as APIs while letting an AI agent reason over them. The useful pattern is simple: FastAPI handles policy, claims, and quote operations; LangChain decides when to call those operations and how to turn raw API responses into agent-friendly answers.

This is the right setup when you want an agent that can fetch policy details, check claim status, triage FNOL requests, or trigger underwriting workflows without hardcoding every branch in your app.

Prerequisites

  • Python 3.10+
  • A running FastAPI app for your insurance domain
  • fastapi, uvicorn, and pydantic
  • langchain and langchain-openai or another chat model provider
  • API access to your insurance backend
  • Environment variables set for model credentials, for example:
    • OPENAI_API_KEY
    • any internal insurance API tokens if your FastAPI app is behind auth

Install the basics:

pip install fastapi uvicorn pydantic langchain langchain-openai requests

Integration Steps

  1. Expose insurance operations in FastAPI

Start by wrapping the insurance actions you want the agent to use. Keep the endpoints narrow and deterministic.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Insurance API")

class PolicyLookupRequest(BaseModel):
    policy_number: str

@app.post("/policies/lookup")
def lookup_policy(payload: PolicyLookupRequest):
    # Replace with real DB/service call
    if payload.policy_number == "POL-123":
        return {
            "policy_number": "POL-123",
            "holder_name": "A. Moyo",
            "status": "active",
            "product": "motor"
        }
    raise HTTPException(status_code=404, detail="Policy not found")

For production, add auth, logging, and request validation around every endpoint. The agent should only call endpoints that are safe to expose.

  2. Run the FastAPI service and confirm it works

Use Uvicorn locally before wiring LangChain into it.

uvicorn insurance_api:app --reload --port 8000

You should be able to hit:

curl -X POST http://localhost:8000/policies/lookup \
  -H "Content-Type: application/json" \
  -d '{"policy_number":"POL-123"}'

Expected response:

{
  "policy_number": "POL-123",
  "holder_name": "A. Moyo",
  "status": "active",
  "product": "motor"
}

  3. Wrap the FastAPI endpoint as a LangChain tool

LangChain agents work well when you present external systems as tools. Here we write a simple Python function that calls the FastAPI endpoint with requests, then convert it into a LangChain tool with the @tool decorator.

import requests
from langchain_core.tools import tool

BASE_URL = "http://localhost:8000"

@tool
def get_policy_details(policy_number: str) -> dict:
    """Fetch policy details from the insurance API."""
    response = requests.post(
        f"{BASE_URL}/policies/lookup",
        json={"policy_number": policy_number},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

This pattern keeps your domain logic in FastAPI and your orchestration logic in LangChain. If your backend changes later, you update one tool wrapper instead of rewriting agent code.
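In production you usually also want the wrapper to fail gracefully: a 404 or a network error should come back as something the agent can read, not an exception that kills the run. A hedged sketch (`get_policy_details_safe` is a hypothetical name; decorate it with @tool exactly like the wrapper above when you wire it in):

```python
# Sketch: a defensive tool wrapper that converts HTTP and network errors
# into dicts the agent can reason about, instead of raising.
import requests

BASE_URL = "http://localhost:8000"

def get_policy_details_safe(policy_number: str, base_url: str = BASE_URL) -> dict:
    """Fetch policy details; return an 'error' field instead of raising."""
    try:
        response = requests.post(
            f"{base_url}/policies/lookup",
            json={"policy_number": policy_number},
            timeout=10,
        )
    except requests.RequestException as exc:
        # Network failure: report it so the agent can tell the user.
        return {"error": f"Insurance API unreachable: {exc}"}
    if response.status_code == 404:
        return {"error": f"Policy {policy_number} not found"}
    response.raise_for_status()
    return response.json()
```

With this shape, a bad policy number produces a clear "not found" message in the agent's answer rather than a stack trace.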

  4. Build a LangChain agent that can call the tool

Now connect the tool to a chat model and let the agent decide when to use it.

from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [get_policy_details]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)

result = agent.invoke(
    {"input": "Check policy POL-123 and tell me whether it's active."}
)

print(result["output"])

This gives you function-calling behavior without manually routing every user request. For insurance use cases, keep the prompt tight so the model stays inside approved actions.
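One way to keep the prompt tight is a short, explicit system prompt that names the approved tool and forbids everything else. The wording below is an example, not a canonical prompt; adjust the allowed-action list to your own endpoints.

```python
# Sketch: a tightly scoped system prompt for the insurance agent.
# The refusal wording is an assumption; tune it for your compliance needs.
SYSTEM_PROMPT = (
    "You are an insurance policy assistant. "
    "Answer ONLY by calling the get_policy_details tool. "
    "Never guess policy data, never quote premiums, and never give legal "
    "or coverage advice. If a request is outside policy lookups, say you "
    "cannot help with it."
)
```

How you attach it depends on your LangChain version; with `initialize_agent` and `AgentType.OPENAI_FUNCTIONS`, one common option is passing `agent_kwargs={"system_message": SystemMessage(content=SYSTEM_PROMPT)}` — check your version's docs before relying on that exact keyword.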

  5. Add structured outputs for downstream systems

If you want the agent output to feed a CRM or claims workflow, force structured data instead of free text.

from pydantic import BaseModel

class PolicySummary(BaseModel):
    policy_number: str
    holder_name: str
    status: str
    product: str

raw = get_policy_details.invoke({"policy_number": "POL-123"})
summary = PolicySummary(**raw)

print(summary.model_dump())

This is useful when another service needs stable fields like status or product. In insurance systems, structured output matters more than pretty text.
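Validation failures are worth handling explicitly too: if the backend schema drifts, you want a clean rejection rather than a half-populated record in the CRM. A sketch, where `to_summary` is a hypothetical helper:

```python
# Sketch: validate raw API output before handing it to downstream systems.
from typing import Optional

from pydantic import BaseModel, ValidationError

class PolicySummary(BaseModel):
    policy_number: str
    holder_name: str
    status: str
    product: str

def to_summary(raw: dict) -> Optional[PolicySummary]:
    """Return a validated summary, or None if the payload is malformed."""
    try:
        return PolicySummary(**raw)
    except ValidationError:
        # In production, log the payload and route it to manual review.
        return None
```

Downstream code then branches on `None` instead of discovering a missing field three services later.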

Testing the Integration

Use one test path end-to-end: FastAPI endpoint up, tool wrapper working, then agent invocation.

def test_policy_lookup():
    data = get_policy_details.invoke({"policy_number": "POL-123"})
    assert data["status"] == "active"
    assert data["product"] == "motor"

test_policy_lookup()
print("Integration test passed")

Expected output:

Integration test passed

If you want a stronger check, run a full agent query:

response = agent.invoke(
    {"input": "Summarize policy POL-123 in one sentence."}
)
print(response["output"])

Expected output:

Policy POL-123 is an active motor policy held by A. Moyo.

Real-World Use Cases

  • Claims triage assistant

    • Agent collects incident details from a customer chat.
    • FastAPI stores FNOL data and returns claim references.
    • LangChain summarizes next steps for adjusters or customers.
  • Policy servicing copilot

    • Agent checks coverage, renewal dates, and beneficiary data.
    • FastAPI exposes controlled endpoints for each servicing action.
    • Good fit for internal ops teams handling high-volume requests.
  • Underwriting pre-check assistant

    • Agent asks follow-up questions based on submission type.
    • FastAPI queries rating services or underwriting rules engines.
    • LangChain turns the result into a concise risk summary for underwriters.

The main design rule is this: keep business logic in FastAPI, keep orchestration in LangChain. That separation makes your insurance workflows easier to secure, test, and change without breaking the agent layer.

By Cyprian Aarons, AI Consultant at Topiax.