How to Integrate FastAPI for Lending with LangChain for Production AI
FastAPI for lending gives you the API surface for loan workflows, while LangChain gives you the orchestration layer for reasoning, retrieval, and tool use. Put them together and you can build an agent that answers borrower questions, checks loan status, pulls policy context, and triggers underwriting or servicing actions through a production-grade API.
The useful pattern is simple: FastAPI owns request validation, auth, and business endpoints; LangChain owns the conversational layer and tool routing. That split keeps your AI logic out of your HTTP handlers and makes the system easier to test, scale, and audit.
Prerequisites
- •Python 3.10+
- •A FastAPI app for your lending domain
- •LangChain installed
- •An LLM provider configured via environment variables
- •uvicorn for local execution
- •pydantic for request/response models
- •Access to your lending backend:
  - •loan origination service
  - •loan servicing API
  - •document or policy store
Install the core packages:
pip install fastapi uvicorn langchain langchain-openai pydantic httpx
Set environment variables:
export OPENAI_API_KEY="your-key"
export LENDING_API_BASE_URL="https://api.yourbank.com"
Integration Steps
- •Create the FastAPI lending service
Start with a clean API contract. Your FastAPI app should expose explicit endpoints for loan data instead of letting the agent call internal functions directly.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import httpx
import os
app = FastAPI(title="Lending API")
LENDING_API_BASE_URL = os.getenv("LENDING_API_BASE_URL", "http://localhost:9000")
class LoanStatusResponse(BaseModel):
    loan_id: str
    status: str
    balance: float

@app.get("/loans/{loan_id}", response_model=LoanStatusResponse)
async def get_loan(loan_id: str):
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{LENDING_API_BASE_URL}/loans/{loan_id}")
        if resp.status_code != 200:
            raise HTTPException(status_code=resp.status_code, detail="Loan not found")
        return resp.json()
This endpoint is what your LangChain tool will call. Keep it narrow and deterministic.
- •Wrap FastAPI endpoints as LangChain tools
Use LangChain tools to expose the lending API to the agent. This keeps the LLM from guessing at business logic.
import httpx
import os
from langchain_core.tools import tool
LENDING_API_BASE_URL = os.getenv("LENDING_API_BASE_URL", "http://localhost:9000")
@tool
async def get_loan_status(loan_id: str) -> str:
    """Fetch current loan status and balance by loan ID."""
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.get(f"{LENDING_API_BASE_URL}/loans/{loan_id}")
        resp.raise_for_status()
        data = resp.json()
    return f"Loan {data['loan_id']} is {data['status']} with balance {data['balance']}"
This is the boundary between AI reasoning and system-of-record access. If you need more actions later, add more tools: submit_payment, get_amortization_schedule, fetch_underwriting_policy.
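For example, a get_amortization_schedule tool could follow the same shape. The /loans/{loan_id}/schedule path and its response fields are assumptions about your servicing API, so treat this as a sketch rather than a fixed contract:

from langchain_core.tools import tool
import httpx
import os

LENDING_API_BASE_URL = os.getenv("LENDING_API_BASE_URL", "http://localhost:9000")

@tool
async def get_amortization_schedule(loan_id: str) -> str:
    """Return the next scheduled payments for a loan by loan ID."""
    # Assumed endpoint: GET /loans/{loan_id}/schedule returning a JSON list of
    # {"due_date": "...", "amount": ...} entries. Adjust to your servicing API.
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.get(f"{LENDING_API_BASE_URL}/loans/{loan_id}/schedule")
        resp.raise_for_status()
        entries = resp.json()
    upcoming = "; ".join(f"{e['due_date']}: {e['amount']}" for e in entries[:3])
    return f"Next payments for loan {loan_id}: {upcoming}"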
- •Build a LangChain agent that can call those tools
Now connect the tool to an agent using a chat model. For production systems, keep the prompt tight and constrain the tool set.
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [get_loan_status]

# Keep the system prompt tight: the agent should only answer via its tools.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a lending assistant. Answer using only the provided tools."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=False)

async def answer_borrower_question(question: str) -> str:
    result = await agent_executor.ainvoke({"input": question})
    return result["output"]
This gives you a single orchestration entry point. The agent can interpret borrower questions like “What’s my current balance?” and map them to the loan-status tool.
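A quick way to exercise that entry point locally, before wiring it into HTTP (the loan ID is a made-up example):

import asyncio

# One-off smoke test of the agent wiring; no FastAPI involved yet.
answer = asyncio.run(answer_borrower_question("What is the balance on loan LN-10021?"))
print(answer)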
- •Expose the agent through FastAPI
Your public API should stay HTTP-first. The AI layer becomes just another internal dependency behind an endpoint.
from fastapi import Body
class AskRequest(BaseModel):
    question: str

class AskResponse(BaseModel):
    answer: str

@app.post("/assistant/ask", response_model=AskResponse)
async def ask_agent(payload: AskRequest = Body(...)):
    answer = await answer_borrower_question(payload.question)
    return AskResponse(answer=answer)
This is the shape you want in production:
- •FastAPI handles auth, rate limiting, validation, tracing (see the auth sketch after this list)
- •LangChain handles model/tool orchestration
- •Lending APIs remain authoritative
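As a minimal sketch of the auth point, a FastAPI dependency can gate the assistant endpoint with an API key. The X-API-Key header and in-memory key set are placeholder choices; a real deployment would use your identity provider or secrets manager:

from fastapi import Depends, Header, HTTPException

# Placeholder key store; swap for your secrets manager or identity provider.
VALID_API_KEYS = {"demo-key"}

async def require_api_key(x_api_key: str = Header(...)) -> None:
    if x_api_key not in VALID_API_KEYS:
        raise HTTPException(status_code=401, detail="Invalid API key")

# Then attach it to the route:
# @app.post("/assistant/ask", response_model=AskResponse,
#           dependencies=[Depends(require_api_key)])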
- •Add guardrails before shipping
Do not let an agent freely call anything in your lending platform. Add allowlists, structured outputs, and request logging.
from pydantic import BaseModel, Field

class LoanQuery(BaseModel):
    loan_id: str = Field(min_length=3, max_length=64)

@tool(args_schema=LoanQuery)
async def safe_get_loan_status(loan_id: str) -> str:
    """Fetch loan status only for validated loan IDs."""
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.get(f"{LENDING_API_BASE_URL}/loans/{loan_id}")
        resp.raise_for_status()
        data = resp.json()
    return f"Loan {data['loan_id']} is {data['status']} with balance {data['balance']}"
Use validated schemas for every tool input. In lending workflows, bad input handling is not optional.
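Request logging, the third guardrail mentioned above, can start as simple HTTP middleware. The logger name and logged fields here are illustrative; in a lending system these records would feed your audit trail:

import logging
import time

from fastapi import Request

logger = logging.getLogger("lending.assistant")  # illustrative logger name

@app.middleware("http")
async def log_requests(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Log method, path, status, and latency; never log request bodies with PII.
    logger.info("%s %s -> %d (%.1f ms)", request.method, request.url.path,
                response.status_code, elapsed_ms)
    return response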
Testing the Integration
Run your FastAPI app:
uvicorn main:app --reload --port 8000
Then verify the endpoint with a simple request:
import httpx

resp = httpx.post(
    "http://localhost:8000/assistant/ask",
    json={"question": "What is the status of loan LN-10021?"},
)
print(resp.status_code)
print(resp.json())
Expected output:
200
{
"answer": "Loan LN-10021 is active with balance 18452.33"
}
If this fails, check these first:
- •OPENAI_API_KEY is set correctly
- •LENDING_API_BASE_URL points to a reachable service
- •Your /loans/{loan_id} endpoint returns valid JSON
- •Tool schemas match what your backend expects
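To separate backend failures from agent failures, hit the loan endpoint directly and bypass the LLM entirely (the loan ID is a made-up example):

import httpx

# Direct call to the FastAPI loan endpoint; no LLM in the loop.
resp = httpx.get("http://localhost:8000/loans/LN-10021")
print(resp.status_code, resp.json())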
Real-World Use Cases
- •Borrower support assistant: answer payment questions, payoff estimates, and account status using live servicing data.
- •Underwriting copilot: pull policy rules, summarize applicant documents, and route edge cases to analysts.
- •Collections workflow assistant: surface delinquency status, suggest next actions, and trigger approved follow-up tasks through APIs.
The production pattern here is not “LLM talks directly to everything.” It’s FastAPI as the controlled interface layer and LangChain as the reasoning layer on top of it. That separation gives you traceability, safer execution, and a system you can actually maintain under regulatory scrutiny.
Keep learning
- •The complete AI Agents Roadmap — my full 8-step breakdown
- •Free: The AI Agent Starter Kit — PDF checklist + starter code
- •Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit