How to Integrate FastAPI with LangChain for Multi-Agent Investment Banking Systems
If you’re building banking workflows, FastAPI gives you the API layer to expose pricing, portfolio, and trade ops endpoints cleanly. LangChain sits on top as the orchestration layer for multi-agent systems, letting one agent fetch market data, another summarize risk, and a third draft a client-facing response.
The useful pattern here is not “LLM talks to API.” It’s “agents call controlled banking services through FastAPI, with LangChain handling routing, memory, and tool use.”
Prerequisites
- Python 3.10+
- fastapi
- uvicorn
- langchain
- langchain-openai (or another LLM provider package)
- httpx
- A running FastAPI service with investment banking endpoints
- An API key for your LLM provider
- Basic understanding of:
  - REST APIs
  - Pydantic models
  - LangChain tools and agents
Install the packages:
    pip install fastapi uvicorn httpx langchain langchain-openai pydantic
Integration Steps
- Build the FastAPI service for banking operations
Start by exposing the banking functions as explicit endpoints. Keep them narrow: pricing lookup, portfolio summary, and trade status are good candidates.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel
    from typing import List

    app = FastAPI(title="Investment Banking API")

    class TradeRequest(BaseModel):
        symbol: str
        quantity: int
        side: str  # "buy" or "sell"

    class PortfolioResponse(BaseModel):
        account_id: str
        total_value: float
        positions: List[dict]

    @app.get("/portfolio/{account_id}", response_model=PortfolioResponse)
    def get_portfolio(account_id: str):
        # Stub data; replace with a real portfolio lookup.
        return {
            "account_id": account_id,
            "total_value": 12500000.50,
            "positions": [
                {"symbol": "AAPL", "quantity": 1200, "market_value": 240000.0},
                {"symbol": "MSFT", "quantity": 800, "market_value": 320000.0},
            ],
        }

    @app.post("/trade")
    def create_trade(trade: TradeRequest):
        if trade.side not in ["buy", "sell"]:
            raise HTTPException(status_code=400, detail="side must be buy or sell")
        # Stub response; replace with a real order-management call.
        return {
            "trade_id": "TRD-98231",
            "status": "accepted",
            "symbol": trade.symbol,
            "quantity": trade.quantity,
            "side": trade.side,
        }
Run it:
    uvicorn main:app --reload --port 8000
- Wrap the FastAPI endpoints as LangChain tools
LangChain agents need callable tools. The cleanest pattern is to wrap your FastAPI calls in Python functions using httpx, then expose them as tools.
    import httpx
    from langchain_core.tools import tool

    BASE_URL = "http://localhost:8000"

    @tool
    def fetch_portfolio(account_id: str) -> str:
        """Fetch portfolio data for an investment banking account."""
        response = httpx.get(f"{BASE_URL}/portfolio/{account_id}", timeout=10)
        response.raise_for_status()
        return response.text

    @tool
    def submit_trade(symbol: str, quantity: int, side: str) -> str:
        """Submit a trade request to the banking API."""
        payload = {"symbol": symbol, "quantity": quantity, "side": side}
        response = httpx.post(f"{BASE_URL}/trade", json=payload, timeout=10)
        response.raise_for_status()
        return response.text
This keeps the agent from touching your database or internal services directly. It only sees controlled API boundaries.
- Create a LangChain multi-agent setup
For multi-agent systems, split responsibilities. One agent can act as a portfolio analyst, another as an execution assistant. Use a supervisor-style chain or just route tasks with tool-enabled agents.
    from langchain_openai import ChatOpenAI
    from langchain.agents import AgentExecutor, create_tool_calling_agent
    from langchain_core.prompts import ChatPromptTemplate

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

    # create_tool_calling_agent requires an agent_scratchpad placeholder,
    # where intermediate tool calls and results accumulate.
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are an investment banking operations assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])

    tools = [fetch_portfolio, submit_trade]
    agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
    executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
If you want true multi-agent behavior, create separate executors with different prompts:
    portfolio_prompt = ChatPromptTemplate.from_messages([
        ("system", "You analyze portfolios and explain exposure."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])

    execution_prompt = ChatPromptTemplate.from_messages([
        ("system", "You handle trade execution requests and validate order details."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])

    portfolio_agent = AgentExecutor(
        agent=create_tool_calling_agent(llm=llm, tools=[fetch_portfolio], prompt=portfolio_prompt),
        tools=[fetch_portfolio],
    )

    execution_agent = AgentExecutor(
        agent=create_tool_calling_agent(llm=llm, tools=[submit_trade], prompt=execution_prompt),
        tools=[submit_trade],
    )
- Add a simple supervisor to route work between agents
In production, one coordinator agent should decide which specialist handles the task. This avoids overloading one model prompt with unrelated responsibilities.
    def route_request(user_input: str):
        text = user_input.lower()
        if any(word in text for word in ["portfolio", "exposure", "holdings"]):
            return portfolio_agent.invoke({"input": user_input})
        if any(word in text for word in ["trade", "buy", "sell", "execute"]):
            return execution_agent.invoke({"input": user_input})
        return {"output": "No matching agent found."}
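Because the routing logic is plain Python, it can be unit-tested without any LLM calls by swapping in stub agents (hypothetical stand-ins that mimic AgentExecutor.invoke):

```python
# Stub agents that mimic AgentExecutor.invoke, so the keyword router
# can be exercised without an LLM or an API key.
class StubAgent:
    def __init__(self, name: str):
        self.name = name

    def invoke(self, payload: dict) -> dict:
        return {"output": f"{self.name} handled: {payload['input']}"}

portfolio_agent = StubAgent("portfolio")
execution_agent = StubAgent("execution")

def route_request(user_input: str):
    text = user_input.lower()
    if any(word in text for word in ["portfolio", "exposure", "holdings"]):
        return portfolio_agent.invoke({"input": user_input})
    if any(word in text for word in ["trade", "buy", "sell", "execute"]):
        return execution_agent.invoke({"input": user_input})
    return {"output": "No matching agent found."}

print(route_request("Show holdings for IB-1001"))   # routed to portfolio
print(route_request("Buy 500 shares of NVDA"))      # routed to execution
print(route_request("What's the weather?"))         # no match
```

Swap the stubs for the real executors once the routing rules behave as expected.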
That’s enough to get a practical multi-agent system running without introducing unnecessary orchestration frameworks too early.
- Harden the integration for banking use
Banking systems need guardrails. Add authentication headers to your FastAPI calls and validate every request before it reaches execution logic.
    import os
    import httpx

    API_TOKEN = os.getenv("BANKING_API_TOKEN")

    def auth_headers():
        return {"Authorization": f"Bearer {API_TOKEN}"}

    @tool
    def secure_fetch_portfolio(account_id: str) -> str:
        """Fetch portfolio data using authenticated API access."""
        with httpx.Client(timeout=10) as client:
            response = client.get(
                f"{BASE_URL}/portfolio/{account_id}",
                headers=auth_headers(),
            )
            response.raise_for_status()
            return response.text
On the FastAPI side, enforce auth with dependency injection so only approved callers can use sensitive routes.
Testing the Integration
Use a direct invocation against the routed system to verify both the API call and LangChain orchestration work.
    result_1 = route_request("Show me the portfolio exposure for account IB-1001")
    print(result_1)

    result_2 = route_request("Buy 500 shares of NVDA")
    print(result_2)
Expected output (illustrative; the exact wording will vary by model):

    {'output': 'Account IB-1001 has concentrated exposure in AAPL and MSFT with total value around $12.5M...'}
    {'output': '{"trade_id":"TRD-98231","status":"accepted","symbol":"NVDA","quantity":500,"side":"buy"}'}
If you see valid JSON from your FastAPI endpoints and natural-language summaries from LangChain, the integration is working.
Real-World Use Cases
- Trade support copilots
  - An analyst asks for current holdings.
  - The portfolio agent fetches data through FastAPI.
  - The supervisor agent summarizes exposure and suggests next actions.
- Pre-trade validation assistants
  - A trader submits an order request.
  - The execution agent checks size limits and routing rules before calling /trade.
  - The system returns structured approval or rejection messages.
- Client reporting workflows
  - One agent pulls portfolio data.
  - Another generates commentary on performance and risk.
  - A final agent formats the report for email or CRM delivery.
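The client reporting workflow maps naturally onto a sequential pipeline. Here is a stdlib-only sketch with stub functions standing in for the three agents (names and data are illustrative):

```python
# Three-stage reporting pipeline with stub functions standing in for
# the data-pull, commentary, and formatting agents.
def pull_portfolio(account_id: str) -> dict:
    # Stand-in for the portfolio agent calling the FastAPI endpoint.
    return {"account_id": account_id, "total_value": 12500000.50}

def write_commentary(portfolio: dict) -> str:
    # Stand-in for the commentary agent; a real one would call an LLM.
    return (f"Account {portfolio['account_id']} has a total value of "
            f"${portfolio['total_value']:,.2f}.")

def format_report(commentary: str) -> str:
    # Stand-in for the formatting agent producing email-ready output.
    return f"Subject: Portfolio update\n\n{commentary}"

report = format_report(write_commentary(pull_portfolio("IB-1001")))
print(report)
```

Each stage consumes the previous stage's output, which is exactly the contract the real AgentExecutors would honor.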
This pattern works because each layer does one job well. FastAPI exposes controlled banking capabilities; LangChain coordinates agents around those capabilities without turning your core systems into prompt-driven chaos.
Keep learning
- •The complete AI Agents Roadmap — my full 8-step breakdown
- •Free: The AI Agent Starter Kit — PDF checklist + starter code
- •Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit