How to Integrate FastAPI with LangChain for Production Wealth Management AI
Combining FastAPI with LangChain gives you a clean way to expose regulated wealth-management workflows over HTTP while keeping the reasoning layer in an AI agent. The pattern is simple: FastAPI handles request validation, auth, and operational controls; LangChain handles orchestration, tool use, and structured responses.
For wealth management teams, this unlocks agentic experiences like portfolio summaries, suitability checks, client Q&A, and advisor copilots without putting LLM logic directly inside your API surface.
Prerequisites
- Python 3.10+
- FastAPI installed
- Uvicorn installed
- LangChain installed
- An LLM provider configured through environment variables
- A wealth-management backend or service layer you can call from Python
- Basic familiarity with:
  - `FastAPI()` and path operations like `@app.post()`
  - LangChain `ChatPromptTemplate`
  - LangChain `RunnableLambda` or `Tool`
- Optional but recommended:
  - Pydantic v2
  - API authentication middleware
  - Logging and tracing
Install the core packages:
```shell
pip install fastapi uvicorn langchain langchain-openai pydantic
```
Integration Steps
1. Define the FastAPI contract for wealth requests
Start with a strict request/response schema. In production, this is where you enforce input validation for client IDs, account IDs, and the question your agent should answer.
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI(title="Wealth Management AI API")

class WealthQuery(BaseModel):
    client_id: str = Field(..., min_length=3)
    account_id: str = Field(..., min_length=3)
    question: str = Field(..., min_length=10)

class WealthAnswer(BaseModel):
    client_id: str
    account_id: str
    answer: str
    source: str

@app.get("/health")
def health():
    return {"status": "ok"}
```
This gives you a stable API boundary. Everything downstream can change without breaking clients as long as this schema stays consistent.
2. Wrap your wealth data access as a callable function
Keep brokerage or portfolio access outside the chain. LangChain should call a Python function that already knows how to fetch holdings, performance, or policy constraints.
```python
from typing import Dict

def fetch_portfolio_context(client_id: str, account_id: str) -> Dict:
    # Replace with real integration:
    # - internal portfolio service
    # - CRM/OMS/PMS adapter
    # - read-only database query
    return {
        "client_id": client_id,
        "account_id": account_id,
        "risk_profile": "moderate",
        "holdings": [
            {"symbol": "AAPL", "weight": 0.18},
            {"symbol": "MSFT", "weight": 0.16},
            {"symbol": "BND", "weight": 0.24},
        ],
        "constraints": [
            "No single equity above 20%",
            "Maintain equity allocation under 60%",
        ],
    }
```
This is the right place to enforce authorization checks too. If the caller cannot access the requested account, fail here before any model call happens.
3. Build the LangChain reasoning layer around that context
Use LangChain to turn raw portfolio data into an answerable prompt. A production-friendly pattern is to keep prompt assembly explicit and keep model invocation isolated.
```python
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,
    api_key=os.environ["OPENAI_API_KEY"],
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a wealth management assistant. Be concise, factual, "
               "and avoid making recommendations outside the provided data."),
    ("human", """
Client context:
{portfolio_context}

User question:
{question}

Return a short answer grounded only in the provided context.
"""),
])

def answer_with_langchain(portfolio_context: dict, question: str) -> str:
    chain = prompt | llm
    response = chain.invoke({
        "portfolio_context": portfolio_context,
        "question": question,
    })
    return response.content
```
The important part here is that LangChain is not reaching into your systems directly. It receives curated context from your backend and produces an answer from that bounded input.
4. Expose the chain through a FastAPI endpoint
Now connect both layers in one route. FastAPI validates the request; your service function fetches context; LangChain generates the response; FastAPI returns JSON.
```python
@app.post("/wealth/ask", response_model=WealthAnswer)
def ask_wealth_question(payload: WealthQuery):
    portfolio_context = fetch_portfolio_context(
        client_id=payload.client_id,
        account_id=payload.account_id,
    )
    if not portfolio_context:
        raise HTTPException(status_code=404, detail="Portfolio not found")

    answer = answer_with_langchain(portfolio_context, payload.question)
    return WealthAnswer(
        client_id=payload.client_id,
        account_id=payload.account_id,
        answer=answer,
        source="langchain+fastapi",
    )
```
This endpoint is production-shaped because it keeps business logic deterministic and makes the LLM one step in a larger controlled workflow.
5. Run it behind Uvicorn and keep deployment simple
Use Uvicorn as your ASGI server. In real deployments you’d put this behind an API gateway with auth, rate limiting, and audit logs.
```python
# Save as main.py, then run:
#   uvicorn main:app --reload --host 0.0.0.0 --port 8000
if __name__ == "__main__":
    import uvicorn

    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)
```
If you need background tasks later for async enrichment or report generation, FastAPI’s BackgroundTasks fits well without changing this contract.
Testing the Integration
Hit the endpoint with a local request and verify that the response includes a grounded summary.
```python
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

def test_wealth_agent():
    response = client.post(
        "/wealth/ask",
        json={
            "client_id": "cli_123",
            "account_id": "acc_456",
            "question": "What is my current allocation concentration risk?",
        },
    )
    print(response.status_code)
    print(response.json())

test_wealth_agent()
```
Expected output:
```
200
{
    "client_id": "cli_123",
    "account_id": "acc_456",
    "answer": "...LLM-generated summary grounded in portfolio context...",
    "source": "langchain+fastapi"
}
```
If you want deterministic tests in CI, mock ChatOpenAI and assert on the returned payload shape instead of model wording.
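A minimal sketch of that pattern with `unittest.mock`: here a `MagicMock` stands in for the `prompt | llm` chain, while in a real suite you would patch `ChatOpenAI` or the chain object in `main` instead.

```python
from types import SimpleNamespace
from unittest.mock import MagicMock

# Mock chain whose .invoke would normally hit the LLM provider.
llm_chain = MagicMock()
llm_chain.invoke.return_value = SimpleNamespace(content="stubbed summary")

def answer_with_langchain(portfolio_context: dict, question: str) -> str:
    # Same shape as the real function, but bound to the mock chain.
    response = llm_chain.invoke({
        "portfolio_context": portfolio_context,
        "question": question,
    })
    return response.content

answer = answer_with_langchain({"risk_profile": "moderate"}, "What is my risk?")
print(answer)  # stubbed summary

# Assert on payload shape, not on model wording.
assert isinstance(answer, str)
```

This keeps CI deterministic and free of API keys, while still exercising the request path and the response schema.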
Real-World Use Cases
- Advisor copilot endpoints that summarize holdings, explain exposure drift, and draft client-ready commentary from approved portfolio data.
- Client self-service assistants that answer questions about balances, allocations, and policy constraints without exposing raw backend systems.
- Compliance-aware workflows where FastAPI enforces access control and audit logging while LangChain drafts human-readable explanations from structured data.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit