How to Build a Loan Approval Agent Using LangChain in Python for Lending
A loan approval agent automates the first pass of lending decisions: it ingests applicant data, checks policy rules, scores risk, and produces a decision with a reason code trail. For lenders, that matters because it reduces manual review load, keeps decisions consistent, and creates an auditable record for compliance teams.
Architecture
- Application intake layer
  - Normalizes borrower data from forms, APIs, or CRM systems.
  - Validates required fields like income, debt obligations, employment status, and consent.
- Policy engine
  - Encodes hard lending rules such as minimum credit score, debt-to-income thresholds, residency restrictions, and product eligibility.
  - Returns deterministic pass/fail outcomes before any model is consulted.
- Risk reasoning layer
  - Uses LangChain to call an LLM for structured assessment of borderline cases.
  - Produces a recommendation with explicit rationale and confidence signals.
- Audit and decision store
  - Persists every input, rule outcome, model output, and final decision.
  - Supports later review by compliance, operations, and regulators.
- Human review queue
  - Routes ambiguous or high-risk applications to underwriters.
  - Prevents the agent from making unsupported approvals on incomplete evidence.
Implementation
1) Define the application schema and policy checks
Keep the first pass deterministic. In lending, hard rules should not depend on model behavior.
```python
from pydantic import BaseModel, Field
from typing import Literal


class LoanApplication(BaseModel):
    applicant_id: str
    annual_income: float = Field(gt=0)
    monthly_debt: float = Field(ge=0)
    credit_score: int = Field(ge=300, le=850)
    employment_years: float = Field(ge=0)
    country: str
    loan_amount: float = Field(gt=0)


class Decision(BaseModel):
    status: Literal["approve", "deny", "review"]
    reason: str
    risk_band: Literal["low", "medium", "high"]


def apply_policy(app: LoanApplication) -> tuple[bool, list[str]]:
    reasons = []
    dti = app.monthly_debt * 12 / app.annual_income
    if app.credit_score < 620:
        reasons.append("credit_score_below_minimum")
    if dti > 0.45:
        reasons.append("debt_to_income_above_threshold")
    if app.employment_years < 1:
        reasons.append("insufficient_employment_history")
    if app.country not in {"US", "CA", "GB"}:
        reasons.append("unsupported_jurisdiction")
    return len(reasons) == 0, reasons
```
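Rules like these are worth unit-testing at their boundaries before anything touches a model. A minimal sketch using a plain dict in place of the pydantic model, with the same threshold values as the code above:

```python
# Boundary checks for the policy gate, using a plain-dict stand-in for
# LoanApplication so the rules can be exercised in isolation.
def check_policy(app: dict) -> tuple[bool, list[str]]:
    reasons = []
    dti = app["monthly_debt"] * 12 / app["annual_income"]
    if app["credit_score"] < 620:
        reasons.append("credit_score_below_minimum")
    if dti > 0.45:
        reasons.append("debt_to_income_above_threshold")
    if app["employment_years"] < 1:
        reasons.append("insufficient_employment_history")
    if app["country"] not in {"US", "CA", "GB"}:
        reasons.append("unsupported_jurisdiction")
    return len(reasons) == 0, reasons


clean = {"annual_income": 120_000, "monthly_debt": 1_800,
         "credit_score": 710, "employment_years": 4, "country": "US"}
ok, _ = check_policy(clean)
assert ok  # DTI is 1800 * 12 / 120000 = 0.18, well under the 0.45 cap

# One point below the credit floor should trip exactly one rule.
ok, reasons = check_policy(dict(clean, credit_score=619))
assert reasons == ["credit_score_below_minimum"]
```

Catching an off-by-one at the 620 boundary in a test is far cheaper than catching it in an adverse-action review.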
2) Add LangChain for structured underwriting analysis
Use ChatOpenAI plus PydanticOutputParser so the model returns machine-readable output. That keeps the agent usable in a workflow engine instead of trapping logic inside free text.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import PydanticOutputParser

parser = PydanticOutputParser(pydantic_object=Decision)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a lending underwriting assistant. "
     "Return only valid JSON matching the schema. "
     "Do not approve applications that violate policy constraints."),
    ("human",
     "Applicant data:\n{application}\n\n"
     "Policy check result:\n{policy_result}\n\n"
     "{format_instructions}"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | llm | parser
```
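The parser raises if the model emits malformed JSON, and in a lending workflow the safe failure mode is human review, not a guess. A stdlib sketch of a validate-and-retry wrapper around the chain (`call_model` is a hypothetical stand-in for `chain.invoke`; the field checks mirror the `Decision` schema):

```python
import json


def parse_decision(raw: str) -> dict:
    """Validate raw model output against the Decision schema's fields."""
    data = json.loads(raw)
    if data.get("status") not in {"approve", "deny", "review"}:
        raise ValueError("invalid status")
    if data.get("risk_band") not in {"low", "medium", "high"}:
        raise ValueError("invalid risk_band")
    if not isinstance(data.get("reason"), str):
        raise ValueError("missing reason")
    return data


def safe_invoke(call_model, payload: dict, retries: int = 2) -> dict:
    """Retry on malformed output; fall back to human review if it never parses."""
    for _ in range(retries + 1):
        try:
            return parse_decision(call_model(payload))
        except (json.JSONDecodeError, ValueError):
            continue
    return {"status": "review", "reason": "unparseable_model_output",
            "risk_band": "high"}
```

The fallback deliberately routes to `"review"` with a high risk band: a parsing failure should never silently become an approval.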
3) Orchestrate deterministic rules first, then model review
The pattern is simple: reject obvious failures immediately, approve only low-risk clean cases if your policy allows it, and send everything else to the LLM for structured review.
```python
def underwrite(app: LoanApplication) -> Decision:
    passed_policy, reasons = apply_policy(app)
    if not passed_policy:
        return Decision(
            status="deny",
            reason=";".join(reasons),
            risk_band="high",
        )
    policy_result = {
        "passed": True,
        "dti": round(app.monthly_debt * 12 / app.annual_income, 4),
        "credit_score": app.credit_score,
        "employment_years": app.employment_years,
        "country": app.country,
    }
    return chain.invoke({
        "application": app.model_dump_json(indent=2),
        "policy_result": str(policy_result),
        "format_instructions": parser.get_format_instructions(),
    })
```
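As written, every policy-passing file goes to the model. If your credit policy permits auto-approving unambiguously clean files, you can add a fast path before the model call; a sketch with illustrative thresholds (these numbers are placeholders, not real credit policy):

```python
from typing import Optional


def fast_path(credit_score: int, dti: float, loan_amount: float) -> Optional[str]:
    """Auto-approve only unambiguously clean files; None means defer to model review."""
    if credit_score >= 760 and dti <= 0.20 and loan_amount <= 50_000:
        return "approve"
    return None


assert fast_path(780, 0.15, 25_000) == "approve"
assert fast_path(705, 0.15, 25_000) is None  # mid-band score goes to review
```

This keeps the cheapest decisions off the model entirely, which helps both latency and token cost.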
4) Run it end-to-end
```python
if __name__ == "__main__":
    application = LoanApplication(
        applicant_id="A-10021",
        annual_income=120000,
        monthly_debt=1800,
        credit_score=710,
        employment_years=4,
        country="US",
        loan_amount=25000,
    )
    decision = underwrite(application)
    print(decision.model_dump())
```
This is the right shape for a production lending system because the deterministic layer enforces policy and the LLM layer handles nuanced review language. If you need tool use later—say pulling bureau data or bank statements—you can extend this with LangChain tools and a RunnableSequence, but do not let tool access bypass policy gates.
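The point about tool access bypassing policy gates deserves emphasis: wrap any data-fetch tool so it cannot execute before the gate passes. A LangChain-agnostic sketch of that idea (`fetch_bureau_report` is a hypothetical tool, not a real API):

```python
def gate_tool(tool_fn, policy_passed: bool):
    """Wrap a tool so it refuses to run unless the policy gate has passed."""
    def wrapper(*args, **kwargs):
        if not policy_passed:
            raise PermissionError(
                f"{tool_fn.__name__} blocked: policy gate not passed")
        return tool_fn(*args, **kwargs)
    return wrapper


def fetch_bureau_report(applicant_id: str) -> dict:
    # Hypothetical stand-in for a credit bureau call.
    return {"applicant_id": applicant_id, "tradelines": []}


safe_fetch = gate_tool(fetch_bureau_report, policy_passed=True)
report = safe_fetch("A-10021")
```

The same wrapper pattern applies when you register the function as a LangChain tool: the gate lives in your code, not in the prompt.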
Production Considerations
- Auditability
  - Store raw inputs, policy outcomes, prompt versions, model version, output JSON, and final disposition.
  - For regulated lending workflows, you need reproducible decisions months later.
- Data residency
  - Keep applicant PII inside approved regions and use vendor configurations that match your jurisdictional requirements.
  - If your lender operates across countries, split processing by region instead of sending all records to one shared inference endpoint.
- Guardrails
  - Block unsupported recommendations with schema validation and hard policy checks before model invocation.
  - Add refusal logic for missing consent flags or incomplete KYC data.
- Monitoring
  - Track approval rate drift, override rate by underwriters, false positives on denials, latency per decision path, and token usage.
  - Slice metrics by product type and geography; lending bias often shows up in specific segments first.
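The auditability requirement can start as simply as an append-only JSON Lines log with one immutable record per decision. A minimal sketch (the field names are illustrative, not a compliance standard):

```python
import json
import time
import uuid


def log_decision(app: dict, policy: dict, model_output: dict,
                 prompt_version: str, model_name: str,
                 path: str = "decisions.jsonl") -> dict:
    """Append one immutable audit record per decision (JSON Lines)."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "application": app,
        "policy_outcome": policy,
        "model_output": model_output,
        "prompt_version": prompt_version,
        "model_version": model_name,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

In production you would write to an append-only store rather than a local file, but the shape of the record, with prompt and model versions captured alongside inputs and outputs, is what makes a decision reproducible months later.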
Common Pitfalls
- Letting the model make direct approval decisions
  - Don't ask the LLM to "decide" from scratch.
  - Use it for structured reasoning after deterministic eligibility checks have already run.
- Skipping audit logs
  - A plain-text answer is not enough.
  - Persist structured outputs with timestamps and versioned prompts so compliance can reconstruct why a decision happened.
- Ignoring jurisdiction-specific rules
  - Lending policies vary by country and product line.
  - Encode region-specific thresholds in code or configuration rather than burying them in prompts.
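For the jurisdiction pitfall, the fix is mechanical: thresholds live in data keyed by region, not in prompt text. A sketch with made-up numbers (do not treat these as real credit policy):

```python
# Hypothetical per-region thresholds; in production these would come from
# versioned configuration, not hard-coded literals.
POLICY_BY_REGION = {
    "US": {"min_credit_score": 620, "max_dti": 0.45},
    "CA": {"min_credit_score": 640, "max_dti": 0.42},
    "GB": {"min_credit_score": 600, "max_dti": 0.40},
}


def thresholds_for(country: str) -> dict:
    """Fail loudly on unsupported jurisdictions instead of guessing defaults."""
    if country not in POLICY_BY_REGION:
        raise ValueError(f"unsupported jurisdiction: {country}")
    return POLICY_BY_REGION[country]
```

Because the thresholds are plain data, each region's values can be reviewed, versioned, and changed without touching prompts or redeploying model code.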
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.