How to Integrate FastAPI for banking with LangChain for multi-agent systems
Combining FastAPI for banking with LangChain gives you a clean way to expose regulated banking actions as HTTP services while letting multiple agents reason over them, delegate tasks, and coordinate decisions. The practical win is simple: one agent can fetch balances, another can validate payment rules, and a third can summarize customer intent before anything hits a core banking system.
Prerequisites
- Python 3.10+
- `fastapi`
- `uvicorn`
- `langchain`
- `langchain-openai` (or another model provider)
- `httpx`
- A running FastAPI banking service with endpoints like:
  - `GET /accounts/{account_id}/balance`
  - `POST /payments/transfer`
  - `GET /customers/{customer_id}`
- API auth configured: OAuth2 bearer token, mTLS, or a signed internal service token
- Basic familiarity with:
  - FastAPI dependency injection
  - LangChain tools and agents
  - JSON request/response contracts
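If you go the bearer-token route, the check itself can stay tiny. The sketch below is illustrative (the `check_bearer_token` helper and the `dev-token` default are assumptions, not part of the banking service in this article); it uses a constant-time comparison so the check doesn't leak timing information.

```python
import hmac
import os

# Illustrative only: in production the token comes from a secrets manager,
# and you would typically verify a signed JWT rather than a shared secret.
EXPECTED_TOKEN = os.getenv("BANKING_TOKEN", "dev-token")

def check_bearer_token(authorization_header: str) -> bool:
    """Return True if the Authorization header carries the expected bearer token."""
    prefix = "Bearer "
    if not authorization_header.startswith(prefix):
        return False
    presented = authorization_header[len(prefix):]
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(presented, EXPECTED_TOKEN)

print(check_bearer_token("Bearer dev-token"))  # True
print(check_bearer_token("Basic dev-token"))   # False (wrong scheme)
```

A helper like this would normally be wired into FastAPI as a dependency so every banking route enforces it automatically.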
Integration Steps
Step 1: Expose banking operations as typed FastAPI endpoints

Start by making the banking API explicit and narrow. In banking, the agent should never talk to the database directly; it should call approved endpoints with strict schemas.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI(title="Banking API")

class TransferRequest(BaseModel):
    from_account: str = Field(..., min_length=8)
    to_account: str = Field(..., min_length=8)
    amount: float = Field(..., gt=0)

@app.get("/accounts/{account_id}/balance")
async def get_balance(account_id: str):
    # Replace with real core-banking lookup
    return {"account_id": account_id, "currency": "USD", "balance": 12500.75}

@app.post("/payments/transfer")
async def transfer_funds(payload: TransferRequest):
    if payload.amount > 5000:
        raise HTTPException(status_code=403, detail="Transfer exceeds approval threshold")
    return {"status": "approved", "reference": "TRX-983244"}
```
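Because the schema is strict, malformed requests die at the edge before any handler runs. A quick way to see this, assuming only that Pydantic is installed (the model here just mirrors the `TransferRequest` above):

```python
from pydantic import BaseModel, Field, ValidationError

class TransferRequest(BaseModel):
    from_account: str = Field(..., min_length=8)
    to_account: str = Field(..., min_length=8)
    amount: float = Field(..., gt=0)

# A well-formed request validates cleanly
ok = TransferRequest(from_account="12345678", to_account="87654321", amount=250.0)
print(ok.amount)  # 250.0

# A negative amount never reaches handler code
try:
    TransferRequest(from_account="12345678", to_account="87654321", amount=-5)
except ValidationError:
    print("rejected")  # rejected
```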
Step 2: Wrap FastAPI endpoints as LangChain tools

LangChain agents work best when you give them small tools with deterministic contracts. Here we call the FastAPI service over HTTP using `httpx`, then expose those calls as tools via `@tool`.

```python
import os

import httpx
from langchain_core.tools import tool

BANKING_API_BASE_URL = os.getenv("BANKING_API_BASE_URL", "http://localhost:8000")
BANKING_TOKEN = os.getenv("BANKING_TOKEN", "dev-token")

async def _auth_headers():
    return {"Authorization": f"Bearer {BANKING_TOKEN}"}

@tool
async def get_account_balance(account_id: str) -> str:
    """Fetch account balance from the banking API."""
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.get(
            f"{BANKING_API_BASE_URL}/accounts/{account_id}/balance",
            headers=await _auth_headers(),
        )
        resp.raise_for_status()
        return resp.text

@tool
async def transfer_funds(from_account: str, to_account: str, amount: float) -> str:
    """Initiate an internal funds transfer."""
    payload = {
        "from_account": from_account,
        "to_account": to_account,
        "amount": amount,
    }
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.post(
            f"{BANKING_API_BASE_URL}/payments/transfer",
            json=payload,
            headers=await _auth_headers(),
        )
        resp.raise_for_status()
        return resp.text
```
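One practical refinement, sketched here as an assumption rather than anything LangChain requires: normalize each tool's raw response into a stable, whitelisted JSON string before it reaches the model, so internal fields never leak into prompts. The `normalize_tool_output` helper and its key list are invented for illustration.

```python
import json

def normalize_tool_output(raw_body: str,
                          allowed_keys: tuple = ("account_id", "currency", "balance")) -> str:
    """Reduce a raw API response to a stable, whitelisted contract for the agent."""
    data = json.loads(raw_body)
    # Drop anything not explicitly allowed (e.g. internal risk flags)
    filtered = {k: data[k] for k in allowed_keys if k in data}
    # sort_keys keeps the string deterministic across calls
    return json.dumps(filtered, sort_keys=True)

raw = '{"account_id": "12345678", "currency": "USD", "balance": 12500.75, "internal_flag": true}'
print(normalize_tool_output(raw))
# {"account_id": "12345678", "balance": 12500.75, "currency": "USD"}
```

A tool would call this on `resp.text` before returning, so the model always sees the same compact contract.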
Step 3: Create a LangChain agent that can choose between banking tools

For multi-agent systems, start with one coordinator agent that routes to specialist tools. In production you may split this into separate risk, payments, and support agents later.

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a banking assistant. Use tools only when needed."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

tools = [get_account_balance, transfer_funds]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Example usage in an async context:
# result = await executor.ainvoke({"input": "Check balance for account 12345678"})
```
Step 4: Add a second agent for validation and orchestration

Multi-agent systems need separation of concerns. One agent can draft the action plan while another enforces policy before any transfer is executed.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

policy_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

policy_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a banking risk validator. Approve only if the transfer is low risk."),
    ("human", "Transfer request: {request_json}"),
])

policy_chain = policy_prompt | policy_llm

async def validate_transfer(request_json: str) -> str:
    response = await policy_chain.ainvoke({"request_json": request_json})
    return response.content

# Orchestrator pattern:
# 1) validate_transfer()
# 2) if approved -> transfer_funds tool via main agent or direct call
```
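The two-step comment above can be fleshed out as plain async glue. This is a sketch under stated assumptions: the stub `validate_transfer` and `transfer_funds` below stand in for the real policy chain and HTTP tool so the control flow is runnable on its own.

```python
import asyncio
import json

# Stubs standing in for the LangChain policy chain and the transfer tool
async def validate_transfer(request_json: str) -> str:
    request = json.loads(request_json)
    return "APPROVED" if request["amount"] <= 5000 else "REJECTED: exceeds threshold"

async def transfer_funds(from_account: str, to_account: str, amount: float) -> str:
    return json.dumps({"status": "approved", "reference": "TRX-983244"})

async def orchestrate_transfer(from_account: str, to_account: str, amount: float) -> dict:
    """Validate first; execute only on explicit approval."""
    request_json = json.dumps(
        {"from_account": from_account, "to_account": to_account, "amount": amount}
    )
    verdict = await validate_transfer(request_json)
    if not verdict.upper().startswith("APPROVED"):
        return {"status": "rejected", "reason": verdict}
    return json.loads(await transfer_funds(from_account, to_account, amount))

result = asyncio.run(orchestrate_transfer("12345678", "87654321", 250.0))
print(result)  # {'status': 'approved', 'reference': 'TRX-983244'}
```

The key property is that the transfer tool is only reachable through the validation gate, never directly from user input.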
Step 5: Expose the agent through your own FastAPI orchestration endpoint

This is the cleanest production pattern: your external clients hit one API endpoint, and your backend decides whether to query balance data or execute a workflow across multiple agents.

```python
from fastapi import FastAPI
from pydantic import BaseModel

# Assumes the `executor` built in Step 3 is importable in this module
app = FastAPI(title="Agent Orchestrator")

class AgentRequest(BaseModel):
    input: str

@app.post("/agent/chat")
async def agent_chat(payload: AgentRequest):
    result = await executor.ainvoke({"input": payload.input})
    return {"output": result["output"]}
```
Testing the Integration
Run both services:
```shell
uvicorn banking_api:app --reload --port 8000
uvicorn orchestrator:app --reload --port 9000
```
Then test the full path:
```python
import asyncio

import httpx

async def test_agent():
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            "http://localhost:9000/agent/chat",
            json={"input": "Check balance for account 12345678"},
        )
        print(resp.status_code)
        print(resp.json())

asyncio.run(test_agent())

# Expected output:
# 200
# {'output': '{"account_id":"12345678","currency":"USD","balance":12500.75}'}
```
If you want to verify a transfer flow:
```python
# Input:
# "Transfer 250 USD from account 12345678 to account 87654321"
# Expected output shape:
# {
#   "output": "{\"status\":\"approved\",\"reference\":\"TRX-983244\"}"
# }
```
Real-World Use Cases
- Customer servicing agent
  - Fetch balances, recent transactions, and beneficiary details through approved FastAPI endpoints.
  - Let LangChain handle intent parsing and response summarization.
- Payments approval workflow
  - One agent drafts the transfer.
  - A second policy agent checks thresholds, velocity rules, and account constraints.
  - Only then does the FastAPI banking service execute the payment.
- Ops assistant for bank staff
  - Route requests like “why was this payment rejected?” into audit lookup APIs.
  - Use multi-agent coordination to combine transaction logs, compliance notes, and customer context.
The main design rule is straightforward: keep banking logic in FastAPI services with strict schemas and use LangChain agents as orchestrators, not owners of business logic. That gives you auditable APIs on one side and flexible reasoning on the other without turning your core system into an unbounded prompt-driven mess.
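To make that rule concrete, here is a minimal sketch of what "business logic stays in the service" looks like in code; the limits, field names, and `evaluate_transfer_policy` helper are all invented for illustration. The point is that these checks are deterministic Python living behind the API, not prompt text an agent could talk its way around.

```python
from dataclasses import dataclass

# Illustrative thresholds; real values come from the bank's policy engine
SINGLE_TRANSFER_LIMIT = 5000.0
DAILY_VELOCITY_LIMIT = 10000.0

@dataclass
class TransferContext:
    amount: float
    total_transferred_today: float
    account_blocked: bool

def evaluate_transfer_policy(ctx: TransferContext) -> tuple[bool, str]:
    """Deterministic policy check that lives in the service, not in a prompt."""
    if ctx.account_blocked:
        return False, "account blocked"
    if ctx.amount > SINGLE_TRANSFER_LIMIT:
        return False, "exceeds single-transfer limit"
    if ctx.total_transferred_today + ctx.amount > DAILY_VELOCITY_LIMIT:
        return False, "exceeds daily velocity limit"
    return True, "approved"

print(evaluate_transfer_policy(TransferContext(250.0, 1000.0, False)))
# (True, 'approved')
```

The agent can ask for a transfer, but the verdict is always computed here, where it is testable and auditable.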
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.