How to Integrate FastAPI with LangChain for Production Pension Fund AI
Combining FastAPI with LangChain gives you a clean production pattern for pension fund systems: FastAPI handles the API surface, auth, and request lifecycle, while LangChain handles orchestration, retrieval, and tool use inside your AI agent. That means you can expose pension-specific workflows like benefit queries, contribution summaries, document retrieval, and policy Q&A behind a single API without turning your web layer into agent logic.
The real win is separation of concerns. Your FastAPI service stays predictable for compliance and observability, and LangChain sits behind it as the reasoning layer that can call internal tools, search indexed documents, and format responses for downstream systems.
Prerequisites
- Python 3.10+
- FastAPI installed
- Uvicorn installed
- LangChain installed
- An LLM provider configured through environment variables
- Access to your pension fund data source:
  - REST API
  - SQL database
  - Document store / vector store
- Basic understanding of async Python
Install the core packages:
```bash
pip install fastapi uvicorn langchain langchain-openai pydantic httpx
```
Set your model key:
```bash
export OPENAI_API_KEY="your-key"
```
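If the key is missing, the failure otherwise only surfaces on the first LLM call deep inside a request handler. A minimal fail-fast check at startup can catch that earlier (a sketch; `require_env` is a hypothetical helper, not part of FastAPI or LangChain):

```python
import os

def require_env(name: str) -> str:
    """Return a required environment variable, failing fast if it is unset.

    Calling this once at startup turns a misconfigured deployment into an
    immediate, clearly labelled crash instead of a cryptic auth error on
    the first LLM call.
    """
    value = os.getenv(name, "").strip()
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value
```

Call `require_env("OPENAI_API_KEY")` once at module import, before constructing the LLM client.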
Integration Steps
Step 1: Create a FastAPI app that exposes pension fund endpoints
Keep your API thin. It should validate input, call internal services, and return structured JSON.
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Pension Fund AI API")

class MemberQuery(BaseModel):
    member_id: str
    question: str

@app.get("/health")
async def health():
    return {"status": "ok"}

@app.post("/pension/query")
async def pension_query(payload: MemberQuery):
    if not payload.member_id:
        raise HTTPException(status_code=400, detail="member_id is required")
    return {
        "member_id": payload.member_id,
        "question": payload.question,
        "answer": "Placeholder response from pension service",
    }
```
Step 2: Wrap pension system access as LangChain tools
LangChain works best when your business capabilities are exposed as tools. For production systems, keep these tools deterministic and scoped.
```python
import os

import httpx
from langchain_core.tools import tool

PENSION_API_BASE = os.getenv("PENSION_API_BASE", "http://localhost:8000")

@tool
async def get_member_summary(member_id: str) -> str:
    """Fetch a pension member summary."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{PENSION_API_BASE}/members/{member_id}/summary")
        resp.raise_for_status()
        data = resp.json()
        return f"Member {data['member_id']} has balance {data['balance']} and status {data['status']}"

@tool
async def ask_pension_policy(question: str) -> str:
    """Ask the pension policy knowledge base."""
    async with httpx.AsyncClient() as client:
        resp = await client.post(f"{PENSION_API_BASE}/policy/ask", json={"question": question})
        resp.raise_for_status()
        return resp.json()["answer"]
```
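"Deterministic and scoped" also means validating tool arguments before they hit internal systems: the model supplies them, so treat them like untrusted user input. A minimal sketch; `normalize_member_id` and the regex are assumptions (an ID format like `M12345`), so adjust to your actual scheme:

```python
import re

# Assumed ID format: one uppercase letter followed by 4-10 digits, e.g. "M12345".
MEMBER_ID_RE = re.compile(r"^[A-Z]\d{4,10}$")

def normalize_member_id(member_id: str) -> str:
    """Normalize and validate a member ID before it reaches a downstream API.

    Raising here gives the agent a clear error to relay instead of letting a
    malformed ID produce a confusing 404 from the pension backend.
    """
    candidate = member_id.strip().upper()
    if not MEMBER_ID_RE.match(candidate):
        raise ValueError(f"Invalid member_id: {member_id!r}")
    return candidate
```

Calling this at the top of `get_member_summary` keeps the tool's reachable surface limited to well-formed IDs.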
Step 3: Build a LangChain agent that uses those tools
This is where LangChain adds value. The agent can decide whether to fetch member data or query policy content before generating the final answer.
```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a pension fund assistant. Use tools when you need member data or policy details."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # required by create_tool_calling_agent
])

tools = [get_member_summary, ask_pension_policy]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
Step 4: Expose an AI endpoint in FastAPI that calls the agent
This keeps your public interface stable while LangChain handles reasoning behind the scenes.
```python
class AIRequest(BaseModel):
    member_id: str | None = None
    question: str

@app.post("/ai/pension-assistant")
async def ai_pension_assistant(payload: AIRequest):
    user_input = payload.question
    if payload.member_id:
        user_input = f"Member ID: {payload.member_id}. Question: {payload.question}"
    result = await executor.ainvoke({"input": user_input})
    return {
        "input": user_input,
        "response": result["output"],
    }
```
Step 5: Add production controls around the agent boundary
Do not let the model directly hit arbitrary URLs or generate unbounded tool calls. Put guardrails in the API layer.
```python
from fastapi import Header

@app.post("/ai/pension-assistant-secure")
async def ai_pension_assistant_secure(payload: AIRequest, x_api_key: str = Header(default="")):
    if x_api_key != os.getenv("INTERNAL_API_KEY"):
        raise HTTPException(status_code=401, detail="Unauthorized")
    if len(payload.question) > 500:
        raise HTTPException(status_code=400, detail="Question too long")
    user_input = payload.question
    if payload.member_id:
        user_input = f"Member ID: {payload.member_id}. Question: {payload.question}"
    result = await executor.ainvoke({"input": user_input})
    return {"response": result["output"]}
```
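Another cheap guardrail at the same boundary is a hard timeout on the agent call, so one slow LLM round-trip cannot pin a worker indefinitely. A sketch using `asyncio.wait_for`; the fallback message and the `invoke_with_timeout` name are illustrative:

```python
import asyncio
from typing import Any, Awaitable

async def invoke_with_timeout(call: Awaitable[dict[str, Any]],
                              timeout_s: float = 30.0) -> dict[str, Any]:
    """Bound an agent invocation with a hard timeout.

    On timeout, return a fixed fallback payload instead of propagating the
    error, so the API response shape stays stable for clients.
    """
    try:
        return await asyncio.wait_for(call, timeout=timeout_s)
    except asyncio.TimeoutError:
        return {"output": "The assistant timed out. Please try again."}
```

In the endpoint this replaces the direct call: `result = await invoke_with_timeout(executor.ainvoke({"input": user_input}))`.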
Testing the Integration
Run the app:
```bash
uvicorn main:app --reload --port 8000
```
Then test the endpoint:
```python
import requests

resp = requests.post(
    "http://localhost:8000/ai/pension-assistant",
    json={
        "member_id": "M12345",
        "question": "What is my current pension balance?"
    }
)
print(resp.status_code)
print(resp.json())
```
Expected output:
```
200
{
  "input": "Member ID: M12345. Question: What is my current pension balance?",
  "response": "...final answer from the agent..."
}
```
If you see tool calls in logs and a structured JSON response back from FastAPI, the integration is working.
Real-World Use Cases
- Member self-service assistant: let members ask about balances, contribution history, vesting status, and retirement eligibility through one API.
- Policy and compliance Q&A: expose fund rules, contribution limits, withdrawal conditions, and benefit calculation policies via an agent backed by approved documents.
- Operations copilot: help support teams summarize cases, retrieve account context, and draft responses using controlled tool access through FastAPI endpoints.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.