How to Integrate a FastAPI Lending Service with LangChain Multi-Agent Systems
Combining FastAPI for lending with LangChain gives you a clean way to expose lending workflows as HTTP services while letting multiple agents reason over loan data, policy rules, and next actions. In practice, that means one agent can fetch borrower context, another can assess eligibility, and a third can draft the decision summary without turning your API layer into a pile of prompt glue.
Prerequisites
- Python 3.10+
- A FastAPI lending service already running or available as a local module
- LangChain installed with your model provider package
- `uvicorn` for serving FastAPI apps
- `httpx` for calling the lending API from tools
- An LLM API key configured in environment variables
- Basic familiarity with REST endpoints and agent/tool patterns
Install the core packages:
```shell
pip install fastapi uvicorn httpx langchain langchain-openai pydantic tenacity
```
Integration Steps
1. Expose lending operations through FastAPI

Your lending service should expose stable endpoints for common workflows such as application lookup, eligibility checks, and decisioning. Keep the contract explicit so agents can call it deterministically.
```python
# lending_api.py
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Lending API")

class LoanApplication(BaseModel):
    applicant_id: str
    amount: float
    income: float
    debt: float
    credit_score: int

@app.post("/lending/eligibility")
def check_eligibility(application: LoanApplication):
    # Guard against zero income when computing debt-to-income.
    dti = application.debt / max(application.income, 1)
    eligible = (
        application.credit_score >= 680
        and dti < 0.4
        and application.amount <= application.income * 5
    )
    return {
        "applicant_id": application.applicant_id,
        "eligible": eligible,
        "reason": None if eligible else "Failed credit score, DTI, or loan-to-income rule",
        "dti": round(dti, 3),
    }
```
Run it with:

```shell
uvicorn lending_api:app --reload --port 8000
```
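Before wiring any agents, it can help to sanity-check the rule itself. Here is a standalone sketch of the same three thresholds (a hypothetical helper, not part of the service):

```python
def is_eligible(amount: float, income: float, debt: float, credit_score: int) -> bool:
    # Same three rules as the endpoint: credit score floor, DTI cap, loan-to-income cap.
    dti = debt / max(income, 1)
    return credit_score >= 680 and dti < 0.4 and amount <= income * 5

# A 710-score applicant asking for 50k on 12k income with 3k debt passes all three.
print(is_eligible(50000, 12000, 3000, 710))  # True
print(is_eligible(50000, 12000, 3000, 650))  # False: credit score below 680
```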
2. Wrap the FastAPI endpoint as a LangChain tool

LangChain agents should not know about your internal service implementation. Wrap the HTTP call in a tool so the agent can invoke it safely.
```python
# tools.py
import httpx
from pydantic import BaseModel, Field
from langchain_core.tools import tool

class EligibilityInput(BaseModel):
    applicant_id: str = Field(..., description="Unique borrower ID")
    amount: float = Field(..., description="Requested loan amount")
    income: float = Field(..., description="Monthly or annual income")
    debt: float = Field(..., description="Existing debt")
    credit_score: int = Field(..., description="Borrower's credit score")

@tool("check_lending_eligibility", args_schema=EligibilityInput)
def check_lending_eligibility(
    applicant_id: str, amount: float, income: float, debt: float, credit_score: int
) -> dict:
    """Check loan eligibility for an applicant via the lending API."""
    payload = {
        "applicant_id": applicant_id,
        "amount": amount,
        "income": income,
        "debt": debt,
        "credit_score": credit_score,
    }
    response = httpx.post(
        "http://localhost:8000/lending/eligibility", json=payload, timeout=10
    )
    response.raise_for_status()
    return response.json()
```
This is the boundary you want:

- FastAPI owns validation and business rules
- LangChain owns orchestration and reasoning
- The tool is just an adapter
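One way to keep that adapter honest is to make the HTTP call injectable, so the tool body can be unit-tested without a live server. This is a hypothetical sketch; `eligibility_adapter` and `post_fn` are illustrative names, not LangChain API:

```python
# Sketch: the tool body as a thin adapter over an injectable HTTP call.
# `post_fn` is a hypothetical seam; in production it would be httpx.post.
def eligibility_adapter(payload: dict, post_fn) -> dict:
    required = {"applicant_id", "amount", "income", "debt", "credit_score"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"payload missing fields: {sorted(missing)}")
    return post_fn("http://localhost:8000/lending/eligibility", json=payload)

# In a unit test, a fake post_fn stands in for the lending service:
fake = lambda url, json: {"applicant_id": json["applicant_id"], "eligible": True}
result = eligibility_adapter(
    {"applicant_id": "A123", "amount": 50000, "income": 12000,
     "debt": 3000, "credit_score": 710},
    post_fn=fake,
)
print(result["eligible"])  # True
```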
3. Create a multi-agent workflow around the lending tool

For multi-agent systems, split responsibilities by role. One agent gathers facts, another calls the lending tool, and a supervisor agent produces the final answer.
```python
# agent_system.py
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

from tools import check_lending_eligibility

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a lending operations assistant. Use tools when needed."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # required by create_tool_calling_agent
])

tools = [check_lending_eligibility]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({
    "input": (
        "Check whether applicant A123 qualifies for a $50k loan. "
        "Income is 12000, debt is 3000, credit score is 710."
    )
})
print(result["output"])
```
If you need actual multi-agent coordination instead of a single tool-calling agent:

- use one agent to retrieve borrower context
- use one agent for policy interpretation
- use one agent for final recommendation generation

The important part is that only one component should be allowed to hit the lending API directly.
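That role split can be pictured as a plain pipeline in which only the policy step touches the API boundary. Everything below is a hypothetical sketch with stub data; the three functions stand in for the three agents:

```python
# Hypothetical role split: only check_policy is allowed to reach the lending API.
def fetch_context(applicant_id: str) -> dict:
    # Retrieval agent: would gather borrower data from internal systems.
    return {"applicant_id": applicant_id, "amount": 50000,
            "income": 12000, "debt": 3000, "credit_score": 710}

def check_policy(context: dict, call_api) -> dict:
    # Policy agent: the single component permitted to hit the eligibility endpoint.
    return call_api(context)

def recommend(decision: dict) -> str:
    # Supervisor agent: turns the structured decision into a human-facing summary.
    verdict = "approve for review" if decision["eligible"] else "decline"
    return f"{decision['applicant_id']}: {verdict}"

# Stubbed API call so the pipeline runs without a live service.
stub_api = lambda ctx: {"applicant_id": ctx["applicant_id"], "eligible": True}
print(recommend(check_policy(fetch_context("A123"), stub_api)))
# A123: approve for review
```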
4. Add structured outputs for downstream systems

Loan workflows usually need machine-readable results for underwriting queues or case management systems. Use Pydantic models so your agent output stays consistent.
```python
# structured_result.py
from pydantic import BaseModel

class LendingDecision(BaseModel):
    applicant_id: str
    eligible: bool
    reason: str | None = None
    dti: float

# Example post-processing after a tool call:
tool_result = {
    "applicant_id": "A123",
    "eligible": True,
    "reason": None,
    "dti": 0.25,
}
decision = LendingDecision(**tool_result)
print(decision.model_dump())
```
This pattern matters when another service consumes the result:

- case management system
- underwriting queue
- audit log pipeline
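For a consumer that only needs a serialized record, such as an audit log, the same contract can be sketched with the standard library alone. The `AuditRecord` dataclass below is illustrative and simply mirrors `LendingDecision`; in the real pipeline the Pydantic model owns validation:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

# Stdlib mirror of the LendingDecision contract, for illustration only.
@dataclass
class AuditRecord:
    applicant_id: str
    eligible: bool
    dti: float
    reason: Optional[str] = None

record = AuditRecord(applicant_id="A123", eligible=True, dti=0.25)
line = json.dumps(asdict(record), sort_keys=True)  # one JSON line per decision
print(line)
```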
5. Add retries and timeout handling around the API call

Agents fail in production when backend calls hang or return transient errors. Keep retries outside the model loop so you do not burn tokens on infrastructure noise.
```python
# resilient_tools.py
import httpx
from tenacity import retry, stop_after_attempt, wait_fixed  # pip install tenacity

@retry(stop=stop_after_attempt(3), wait=wait_fixed(1))
def call_eligibility_service(payload: dict) -> dict:
    with httpx.Client(timeout=5) as client:
        resp = client.post("http://localhost:8000/lending/eligibility", json=payload)
        resp.raise_for_status()
        return resp.json()
```
Then wire this helper into your LangChain tool instead of calling `httpx.post()` directly.
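If you would rather not take tenacity as a dependency, the same behavior fits in a small stdlib loop. A minimal sketch, with `flaky_call` standing in for the HTTP request:

```python
import time

def with_retries(fn, attempts: int = 3, delay: float = 1.0):
    # Retry a callable up to `attempts` times, sleeping `delay` seconds between tries.
    last_error = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:  # in production, narrow this to transport errors
            last_error = exc
            if i < attempts - 1:
                time.sleep(delay)
    raise last_error

# Stand-in for a transiently failing HTTP call: fails twice, then succeeds.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return {"eligible": True}

print(with_retries(flaky_call, attempts=3, delay=0))  # {'eligible': True}
```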
Testing the Integration
Use a direct script to verify both sides work together before introducing more agents.
```python
# test_integration.py
from tools import check_lending_eligibility

result = check_lending_eligibility.invoke({
    "applicant_id": "A123",
    "amount": 50000,
    "income": 12000,
    "debt": 3000,
    "credit_score": 710,
})
print(result)
```
Expected output:

```python
{'applicant_id': 'A123', 'eligible': True, 'reason': None, 'dti': 0.25}
```
If that passes:

- FastAPI is accepting and validating requests correctly
- LangChain is invoking the tool correctly
- Your payload contract is stable enough for orchestration
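A cheap way to keep that payload contract stable is a shape check you can run in CI. A minimal sketch, with the expected field types mirroring the eligibility response above:

```python
# Assert the eligibility response shape before letting agents depend on it.
EXPECTED_TYPES = {"applicant_id": str, "eligible": bool, "dti": float}

def validate_response(resp: dict) -> None:
    for field, ftype in EXPECTED_TYPES.items():
        if not isinstance(resp.get(field), ftype):
            raise TypeError(f"{field!r} should be {ftype.__name__}, got {resp.get(field)!r}")
    # `reason` may be a string or None, so it is checked separately.
    if resp.get("reason") is not None and not isinstance(resp["reason"], str):
        raise TypeError("'reason' should be a string or None")

validate_response({"applicant_id": "A123", "eligible": True, "reason": None, "dti": 0.25})
print("contract OK")
```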
Real-World Use Cases
- Loan pre-screening assistant: an intake agent collects borrower details while a policy agent checks eligibility against your FastAPI lending rules.
- Underwriting copilot: one agent retrieves application data from your APIs while another summarizes risk signals and drafts an underwriting recommendation.
- Collections triage system: agents classify delinquent accounts, fetch repayment history through FastAPI endpoints, and generate next-best-action suggestions for human reviewers.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- The AI Agent Starter Kit (free): PDF checklist + starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.