How to Integrate FastAPI for lending with LangChain for AI agents
FastAPI gives you a clean API layer for lending workflows: application intake, eligibility checks, pricing, underwriting, and status updates. LangChain adds the agent layer on top, so your system can interpret borrower requests, call lending APIs as tools, and return structured decisions or next actions.
The useful pattern here is simple: FastAPI exposes lending capabilities as HTTP endpoints, and LangChain wraps those endpoints as tools an AI agent can call. That lets you build borrower-facing assistants that do more than chat — they can fetch loan status, pre-qualify applicants, summarize documents, and route cases into the right workflow.
Prerequisites
- Python 3.10+
- A FastAPI lending service running locally or in your environment
- `langchain`, `langchain-openai`, `fastapi`, `uvicorn`, and `httpx` installed
- An OpenAI API key set in your environment if you use an OpenAI-backed agent
- Basic familiarity with REST APIs and async Python
Install the packages:
```shell
pip install fastapi uvicorn httpx langchain langchain-openai pydantic
```
Integration Steps
1. Expose lending operations with FastAPI
Start with a small lending API that has one or two concrete endpoints. In production, these would map to your underwriting or servicing backend.
```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Lending API")

class LoanApplication(BaseModel):
    applicant_id: str
    income: float
    requested_amount: float
    term_months: int

class PrequalResponse(BaseModel):
    eligible: bool
    max_amount: float
    reason: str

@app.post("/loans/prequalify", response_model=PrequalResponse)
def prequalify(application: LoanApplication):
    # Simplified policy: cap the requested amount relative to annual income.
    debt_to_income = application.requested_amount / max(application.income, 1)
    if debt_to_income > 0.4:
        return PrequalResponse(
            eligible=False,
            max_amount=application.income * 0.35,
            reason="Requested amount exceeds policy threshold.",
        )
    return PrequalResponse(
        eligible=True,
        max_amount=application.income * 0.5,
        reason="Applicant passes basic prequalification.",
    )
```
Run it with (assuming the file is named `lending_api.py`):

```shell
uvicorn lending_api:app --reload --port 8000
```
2. Wrap the FastAPI endpoint as a LangChain tool
LangChain agents need tools they can invoke. The cleanest way is to create a Python function that calls your FastAPI endpoint over HTTP, then expose that function as a LangChain tool.
```python
import httpx
from langchain_core.tools import tool

LENDING_API_BASE_URL = "http://localhost:8000"

@tool
def prequalify_loan(applicant_id: str, income: float, requested_amount: float, term_months: int) -> dict:
    """Call the lending API to prequalify a loan application."""
    payload = {
        "applicant_id": applicant_id,
        "income": income,
        "requested_amount": requested_amount,
        "term_months": term_months,
    }
    response = httpx.post(f"{LENDING_API_BASE_URL}/loans/prequalify", json=payload, timeout=10.0)
    response.raise_for_status()
    return response.json()
```
This is the bridge between systems. The agent never talks to your underwriting logic directly; it only calls the tool.
3. Build a LangChain agent that uses the lending tool
Now create an agent that can decide when to call the lending tool based on user input.
```python
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType

# Requires OPENAI_API_KEY to be set in your environment.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [prequalify_loan]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)

result = agent.run(
    "Prequalify applicant A123. Income is 120000, requested amount is 40000, term is 36 months."
)
print(result)
```
The important part here is `AgentType.OPENAI_FUNCTIONS`: it tells LangChain to let the model select tools through OpenAI function calling rather than parsing free-form text. Note that `initialize_agent` is deprecated in recent LangChain releases in favor of `create_tool_calling_agent` plus `AgentExecutor`; the pattern is the same either way.
4. Add structured responses for production use
In lending systems, raw text output is not enough. You want structured outputs that downstream services can parse for decisioning or case management.
```python
from pydantic import BaseModel

class LendingDecision(BaseModel):
    applicant_id: str
    eligible: bool
    max_amount: float
    summary: str

def make_decision(applicant_id: str, income: float, requested_amount: float) -> LendingDecision:
    result = prequalify_loan.invoke({
        "applicant_id": applicant_id,
        "income": income,
        "requested_amount": requested_amount,
        "term_months": 36,
    })
    return LendingDecision(
        applicant_id=applicant_id,
        eligible=result["eligible"],
        max_amount=result["max_amount"],
        summary=result["reason"],
    )
```
If you are building a real workflow engine around this, keep the LLM responsible for interpretation and explanation, while deterministic services handle policy checks.
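A side benefit of returning a Pydantic model here is validation: a malformed or partial API response fails loudly at the boundary instead of flowing into downstream decisioning. A small sketch with a stubbed API result, so no live service is needed:

```python
from pydantic import BaseModel, ValidationError

class LendingDecision(BaseModel):
    applicant_id: str
    eligible: bool
    max_amount: float
    summary: str

# A well-formed API result parses cleanly.
api_result = {"eligible": True, "max_amount": 60000.0,
              "reason": "Applicant passes basic prequalification."}
decision = LendingDecision(
    applicant_id="A123",
    eligible=api_result["eligible"],
    max_amount=api_result["max_amount"],
    summary=api_result["reason"],
)
print(decision.eligible, decision.max_amount)  # True 60000.0

# A malformed response is rejected instead of silently accepted.
try:
    LendingDecision(applicant_id="A123", eligible="maybe", max_amount=None, summary="x")
except ValidationError:
    print("malformed response rejected")
```

That fail-fast behavior matters in lending, where a silently wrong `max_amount` is worse than a visible error.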
5. Chain multiple lending actions into one agent flow
Once the basic integration works, you can add more endpoints like /loans/status, /loans/documents, or /loans/submit. Then let the agent choose which tool to use.
```python
@tool
def get_loan_status(loan_id: str) -> dict:
    """Fetch current loan status from the lending API."""
    response = httpx.get(f"{LENDING_API_BASE_URL}/loans/{loan_id}/status", timeout=10.0)
    response.raise_for_status()
    return response.json()
```
You can register both tools in one agent:

```python
tools = [prequalify_loan, get_loan_status]
```
That gives you a single conversational interface over multiple lending operations.
Testing the Integration
Use this quick test to verify the full path works end to end:
```python
if __name__ == "__main__":
    print(prequalify_loan.invoke({
        "applicant_id": "A123",
        "income": 120000,
        "requested_amount": 40000,
        "term_months": 36,
    }))
```
Expected output (the tool returns a Python dict, so `print` shows its repr):

```python
{'eligible': True, 'max_amount': 60000.0, 'reason': 'Applicant passes basic prequalification.'}
```
If you want to test through the agent instead of calling the tool directly, run a prompt like:

```python
print(agent.run("Check whether applicant A123 qualifies for a $40k loan on $120k income over 36 months."))
```
Expected behavior:
- The agent calls `prequalify_loan`
- The FastAPI service returns JSON
- The agent summarizes the result in plain English
Real-World Use Cases
- Borrower assistant: Let customers ask “Can I qualify?” or “What’s my loan status?” and have the agent call FastAPI-backed servicing endpoints.
- Loan ops copilot: Help internal teams triage applications by pulling data from FastAPI services and summarizing missing documents or policy exceptions.
- Collections workflow: Use LangChain agents to classify delinquency cases and trigger FastAPI actions like payment plan setup or escalation routing.
The pattern scales well because each side keeps its job. FastAPI owns business operations and policy enforcement; LangChain owns orchestration and language understanding.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.