# How to Build a Loan Approval Agent Using LangChain in Python for Retail Banking
A loan approval agent in retail banking takes a borrower’s application, checks it against policy, pulls the right data, and produces a decision recommendation with an audit trail. It matters because loan ops teams need faster turnaround without losing control over compliance, explainability, and consistent underwriting.
## Architecture

**Application intake layer**
- Accepts structured inputs such as income, employment status, debt obligations, loan amount, and jurisdiction.
- Normalizes fields before they reach the agent.

**Policy retrieval layer**
- Pulls underwriting rules, product constraints, and regulatory notes from a controlled knowledge base.
- Use this for things like DTI thresholds, minimum credit score bands, and exceptions policy.

**Decisioning tool layer**
- Exposes deterministic tools for affordability checks, risk scoring, and document validation.
- The LLM should recommend; tools should calculate.

**LangChain agent orchestrator**
- Uses `create_react_agent` or a tool-calling pattern to decide which checks to run.
- Keeps the reasoning trace available for audit.

**Audit and logging layer**
- Stores prompts, tool calls, retrieved policy snippets, outputs, and final decisions.
- Required for model governance and dispute handling.

**Human review handoff**
- Routes borderline or high-risk cases to an underwriter.
- Prevents the agent from making unsupported approvals or denials.
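To make the intake layer concrete, here is a minimal normalization sketch in plain Python. The field names and coercions are illustrative assumptions, not a fixed bank schema:

```python
from dataclasses import dataclass


@dataclass
class LoanApplication:
    """Normalized application record; field names are illustrative."""
    monthly_income: float
    monthly_debt: float
    loan_amount: float
    employment_status: str
    jurisdiction: str


def normalize_application(raw: dict) -> LoanApplication:
    """Coerce and clean raw intake fields before the agent ever sees them."""
    return LoanApplication(
        monthly_income=float(raw.get("monthly_income", 0)),
        monthly_debt=float(raw.get("monthly_debt", 0)),
        loan_amount=float(raw.get("loan_amount", 0)),
        employment_status=str(raw.get("employment_status", "")).strip().lower(),
        jurisdiction=str(raw.get("jurisdiction", "")).strip().upper(),
    )
```

Doing this normalization before the agent runs means every downstream tool and prompt sees the same types and casing, which keeps the audit trail comparable across applications.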
## Implementation

**Define deterministic banking tools first**

Don't ask the model to calculate debt-to-income or validate simple eligibility rules. Put those in Python functions and expose them as LangChain tools.
```python
from langchain_core.tools import tool


@tool
def calculate_dti(monthly_debt: float, monthly_income: float) -> float:
    """Calculate debt-to-income ratio as a percentage."""
    if monthly_income <= 0:
        raise ValueError("monthly_income must be greater than zero")
    return round((monthly_debt / monthly_income) * 100, 2)


@tool
def check_minimum_income(monthly_income: float, min_income: float = 2500.0) -> bool:
    """Check whether applicant meets minimum income threshold."""
    return monthly_income >= min_income


@tool
def verify_basic_eligibility(citizenship_status: str, residency_status: str) -> bool:
    """Basic residency/eligibility check for retail lending policy."""
    allowed_statuses = {"resident", "permanent_resident", "citizen"}
    return (citizenship_status.lower() in allowed_statuses
            or residency_status.lower() in allowed_statuses)
```
**Load policy context with retrieval**

For retail banking you need versioned policy documents. Use a vector store backed by approved internal policy text so the agent can cite the exact rule it used.
```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_core.documents import Document

policy_docs = [
    Document(page_content="Personal loan DTI must be below 40% unless exception approved."),
    Document(page_content="Applicants with active bankruptcy are not eligible."),
    Document(page_content="All decisions must retain an audit record including policy version."),
]

embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(policy_docs, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
```
**Build the agent with LangChain**

Use `ChatOpenAI`, bind your tools, and create a tool-calling agent. The model should use retrieved policy context plus tool outputs to produce a recommendation.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import create_tool_calling_agent, AgentExecutor

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a retail banking loan approval assistant. "
     "Use only approved policy context and tool outputs. "
     "Do not invent underwriting rules. "
     "Return one of: APPROVE_RECOMMENDATION, DECLINE_RECOMMENDATION, REVIEW_REQUIRED."),
    ("human",
     "Applicant data: {application}\n\n"
     "Policy context:\n{policy_context}\n\n"
     "Use the available tools to assess eligibility."),
    # Required by the tool-calling agent to track intermediate tool steps.
    MessagesPlaceholder("agent_scratchpad"),
])

tools = [calculate_dti, check_minimum_income, verify_basic_eligibility]
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=False)

application = {
    "monthly_debt": 1200,
    "monthly_income": 4000,
    "citizenship_status": "resident",
    "residency_status": "resident",
}
policy_context = "\n".join(
    doc.page_content for doc in retriever.invoke("personal loan eligibility")
)

result = executor.invoke({
    "application": application,
    "policy_context": policy_context,
})
print(result["output"])
```
**Wrap the decision in a controlled workflow**

In production you want a final rule layer that converts model output into bank-safe outcomes. If the result is ambiguous or violates policy constraints, route to manual review instead of auto-decisioning.
```python
def finalize_decision(agent_output: str, dti: float) -> str:
    if "DECLINE_RECOMMENDATION" in agent_output:
        return "DECLINE"
    if dti >= 40:
        return "REVIEW_REQUIRED"
    if "APPROVE_RECOMMENDATION" in agent_output:
        return "APPROVE"
    return "REVIEW_REQUIRED"


dti_value = calculate_dti.invoke({"monthly_debt": 1200, "monthly_income": 4000})
decision = finalize_decision(result["output"], dti_value)
print({"dti": dti_value, "decision": decision})
```
## Production Considerations

**Keep customer data inside your residency boundary**
- If your bank requires EU-only or country-specific processing, run embeddings, vector stores, and LLM endpoints in-region.
- Do not ship raw PII to third-party services without legal approval and data processing agreements.
**Log every decision path**
- Store prompt version, retrieved policy text IDs, tool inputs/outputs, model version, and final recommendation.
- This is what compliance will ask for when a customer disputes a decline.
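A sketch of what such an audit record might look like. The field names and the content-hash choice are assumptions for illustration, not a compliance standard:

```python
import datetime
import hashlib
import json


def build_audit_record(prompt_version: str, policy_ids: list,
                       tool_calls: list, model_version: str,
                       recommendation: str) -> dict:
    """Assemble the audit payload compliance will ask for on a dispute."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt_version": prompt_version,
        "policy_ids": policy_ids,            # IDs of retrieved policy snippets
        "tool_calls": tool_calls,            # [{"tool": ..., "input": ..., "output": ...}]
        "model_version": model_version,
        "recommendation": recommendation,
    }
    # Content hash gives a tamper-evidence anchor once the record is stored.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record
```

Writing one such record per decision, keyed by application ID, is usually enough to reconstruct exactly which policy version and tool outputs drove a given recommendation.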
**Add hard guardrails before auto-approval**
- Never let the LLM approve on its own when bankruptcy flags, fraud signals, or missing income verification exist.
- Put those checks in deterministic code before the agent returns a decision.
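These hard stops can live in a plain function that runs before any agent output is honored. The flag names below are illustrative assumptions:

```python
from typing import Optional


def pre_decision_guardrails(flags: dict) -> Optional[str]:
    """Deterministic hard stops evaluated before any agent recommendation."""
    if flags.get("active_bankruptcy"):
        return "DECLINE"            # policy hard stop, never overridable by the LLM
    if flags.get("fraud_signal"):
        return "REVIEW_REQUIRED"    # never auto-decide with open fraud signals
    if not flags.get("income_verified", False):
        return "REVIEW_REQUIRED"    # missing verification blocks auto-approval
    return None                     # no hard stop; agent recommendation may proceed
```

If this returns a value, that value wins and the agent output is recorded but not acted on; only a `None` result lets the recommendation flow into `finalize_decision`.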
**Monitor drift by segment**
- Track approval rates by branch region, income band, employment type, and channel.
- Retail lending models can degrade quietly if distribution shifts after product changes or macroeconomic events.
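A minimal sketch of per-segment approval-rate tracking, assuming decisions are logged as dicts carrying a `decision` field plus segment attributes:

```python
from collections import defaultdict


def approval_rates_by_segment(decisions: list, segment_key: str) -> dict:
    """Approval rate per segment value, e.g. segment_key='region'."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [approved, total]
    for d in decisions:
        seg = d[segment_key]
        counts[seg][1] += 1
        if d["decision"] == "APPROVE":
            counts[seg][0] += 1
    return {seg: approved / total for seg, (approved, total) in counts.items()}
```

Comparing these rates week over week against a baseline window is a cheap first drift signal before investing in a full monitoring stack.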
## Common Pitfalls

**Letting the model do arithmetic**
- Mistake: asking the LLM to compute DTI or affordability from raw numbers.
- Fix: use Python tools for all calculations and let the model interpret only after those results are available.
**Embedding unredacted customer data into prompts**
- Mistake: passing full application records with SSNs, account numbers, or free-text notes into LangChain messages.
- Fix: redact sensitive fields first and pass only what is needed for underwriting logic.
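A minimal redaction sketch, assuming sensitive fields are known by name and free-text notes may contain account-like digit runs. The field list and regex are illustrative, not an exhaustive PII policy:

```python
import re

SENSITIVE_FIELDS = {"ssn", "account_number", "date_of_birth"}  # illustrative list


def redact_application(record: dict) -> dict:
    """Drop sensitive fields and mask long digit runs in free text before prompting."""
    clean = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    if isinstance(clean.get("notes"), str):
        # Mask any run of 6+ digits that could be an account or ID number.
        clean["notes"] = re.sub(r"\d{6,}", "[REDACTED]", clean["notes"])
    return clean
```

Run this on the application dict before it is interpolated into the prompt, so redaction happens once at the boundary rather than inside each chain.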
**Using vague policy sources**
- Mistake: retrieving generic banking text or outdated PDFs without version control.
- Fix: index only approved policy documents with effective dates and maintain immutable versions for auditability.
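One lightweight way to sketch immutable policy versioning is an in-memory registry keyed by policy ID and version; when indexing with LangChain, the same fields would be carried in each `Document`'s `metadata` dict. The IDs, versions, and policy texts below are illustrative:

```python
# Illustrative registry: (policy_id, version) -> immutable record.
POLICY_INDEX = {
    ("PL-DTI-001", "2023-06"): {
        "text": "Personal loan DTI must be below 43% unless exception approved.",
        "effective_date": "2023-06-01",
        "approved": True,
    },
    ("PL-DTI-001", "2024-03"): {
        "text": "Personal loan DTI must be below 40% unless exception approved.",
        "effective_date": "2024-03-01",
        "approved": True,
    },
}


def latest_approved(policy_id: str) -> dict:
    """Return the newest approved version of a policy (versions sort lexically)."""
    versions = [(version, meta) for (pid, version), meta in POLICY_INDEX.items()
                if pid == policy_id and meta["approved"]]
    if not versions:
        raise KeyError(f"no approved versions for {policy_id}")
    return max(versions)[1]
```

Because old versions are never mutated, an audit record that stores `(policy_id, version)` can always recover the exact rule text the agent saw at decision time.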
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.