How to Build a Loan Approval Agent Using LangChain in Python for Fintech
A loan approval agent automates the first pass of a lending decision: it collects applicant data, checks policy rules, scores risk, and produces a recommendation with an audit trail. For fintech, this matters because you need faster turnaround without losing control over compliance, explainability, and consistent underwriting decisions.
Architecture
- **Input layer**
  - Accepts structured applicant data: income, employment status, existing debt, credit score, loan amount, and jurisdiction.
  - Validates fields before the agent sees them.
- **Policy engine**
  - Encodes hard rules like minimum credit score, debt-to-income thresholds, and restricted geographies.
  - This layer should be deterministic, not LLM-driven.
- **LangChain decision agent**
  - Uses `ChatOpenAI` plus a `StructuredTool` or direct function-call pattern to interpret the application and generate a recommendation.
  - Produces structured output, not free-form prose.
- **Risk scoring layer**
  - Combines business rules with model-assisted reasoning.
  - Returns `approve`, `review`, or `reject` with reasons.
- **Audit and logging layer**
  - Stores prompts, tool calls, outputs, timestamps, and policy decisions.
  - Required for model governance and regulator review.
- **Human review fallback**
  - Routes borderline cases to an underwriter.
  - Prevents the agent from making final decisions where policy requires manual approval.
Implementation
1) Define the application schema and underwriting rules
Keep your loan inputs explicit. Fintech systems fail when they rely on messy free text instead of typed fields.
```python
from typing import Literal

from pydantic import BaseModel, Field


class LoanApplication(BaseModel):
    applicant_id: str
    country: str
    annual_income: float = Field(gt=0)
    monthly_debt: float = Field(ge=0)
    credit_score: int = Field(ge=300, le=850)
    requested_amount: float = Field(gt=0)
    employment_status: Literal["employed", "self_employed", "unemployed"]
    purpose: str


class LoanDecision(BaseModel):
    decision: Literal["approve", "review", "reject"]
    risk_band: Literal["low", "medium", "high"]
    reason: str
    dti_ratio: float
```
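Pydantic enforces these bounds before any model call. A quick sketch of what an out-of-range payload does (the `LoanApplication` model is repeated here so the snippet runs standalone):

```python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError


class LoanApplication(BaseModel):
    applicant_id: str
    country: str
    annual_income: float = Field(gt=0)
    monthly_debt: float = Field(ge=0)
    credit_score: int = Field(ge=300, le=850)
    requested_amount: float = Field(gt=0)
    employment_status: Literal["employed", "self_employed", "unemployed"]
    purpose: str


try:
    LoanApplication(
        applicant_id="A999",
        country="US",
        annual_income=50000,
        monthly_debt=800,
        credit_score=900,  # outside the 300-850 range
        requested_amount=10000,
        employment_status="employed",
        purpose="auto",
    )
    bad_fields = []
except ValidationError as exc:
    # Collect which fields failed validation
    bad_fields = [err["loc"][0] for err in exc.errors()]
```

The bad application is rejected at the schema boundary, before the policy engine or the LLM ever sees it.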
2) Build deterministic policy checks first
Do not ask the LLM to decide everything. Use code for hard constraints like jurisdiction blocks and minimum credit policy.
```python
def apply_policy_rules(app: LoanApplication) -> tuple[bool, list[str]]:
    reasons: list[str] = []
    if app.country not in {"US", "CA", "GB"}:
        reasons.append("Unsupported jurisdiction for current lending policy.")
    if app.credit_score < 620:
        reasons.append("Credit score below minimum threshold.")
    monthly_income = app.annual_income / 12
    dti = app.monthly_debt / monthly_income if monthly_income else 1.0
    if dti > 0.45:
        reasons.append("Debt-to-income ratio above allowed limit.")
    return len(reasons) == 0, reasons
```
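To sanity-check the gate, the rules can be exercised directly. The function body is repeated here so the snippet runs standalone, and `SimpleNamespace` stands in for the Pydantic model:

```python
from types import SimpleNamespace


def apply_policy_rules(app) -> tuple[bool, list[str]]:
    # Same logic as the policy engine above.
    reasons: list[str] = []
    if app.country not in {"US", "CA", "GB"}:
        reasons.append("Unsupported jurisdiction for current lending policy.")
    if app.credit_score < 620:
        reasons.append("Credit score below minimum threshold.")
    monthly_income = app.annual_income / 12
    dti = app.monthly_debt / monthly_income if monthly_income else 1.0
    if dti > 0.45:
        reasons.append("Debt-to-income ratio above allowed limit.")
    return len(reasons) == 0, reasons


# A borderline applicant: score below 620 and DTI of 2000 / 4000 = 0.5
borderline = SimpleNamespace(
    country="US", credit_score=600, annual_income=48000, monthly_debt=2000
)
ok, reasons = apply_policy_rules(borderline)
```

Both hard-rule violations are reported, so the applicant never reaches the model.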
3) Use LangChain for the judgment layer with structured output
This pattern uses `ChatOpenAI` and `with_structured_output()` so the model returns a validated object. That is the right shape for production systems that need predictable outputs.
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a loan underwriting assistant for a fintech lender. "
     "Follow policy rules strictly. Return only structured output."),
    ("human",
     "Application:\n{application}\n\nPolicy flags:\n{policy_flags}\n"
     "Decide approve/review/reject with concise reasoning."),
])

structured_llm = llm.with_structured_output(LoanDecision)


def evaluate_application(app: LoanApplication) -> LoanDecision:
    ok, reasons = apply_policy_rules(app)
    monthly_income = app.annual_income / 12
    dti_ratio = app.monthly_debt / monthly_income

    if not ok:
        # Hard policy failures short-circuit: the model is never called.
        return LoanDecision(
            decision="reject" if any("below minimum" in r.lower() for r in reasons) else "review",
            risk_band="high",
            reason="; ".join(reasons),
            dti_ratio=dti_ratio,
        )

    chain = prompt | structured_llm
    result = chain.invoke({
        "application": app.model_dump(),
        "policy_flags": [],  # empty at this point: all hard rules passed
    })

    # Ensure computed values are preserved in the final record.
    return LoanDecision(
        decision=result.decision,
        risk_band=result.risk_band,
        reason=result.reason,
        dti_ratio=dti_ratio,
    )
```
4) Wrap it in an API-friendly function with audit logging
The agent should emit an immutable decision record. In regulated lending, you need to reconstruct why a decision happened months later.
```python
import json
from datetime import datetime, timezone


def audit_log(app: LoanApplication, decision: LoanDecision) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": app.applicant_id,
        "input": app.model_dump(),
        "decision": decision.model_dump(),
        "model": "gpt-4o-mini",
        "version": "loan-agent-v1",
    }
    # In production, write to append-only storage instead of stdout.
    print(json.dumps(record))
    return record


if __name__ == "__main__":
    application = LoanApplication(
        applicant_id="A123",
        country="US",
        annual_income=120000,
        monthly_debt=1500,
        credit_score=710,
        requested_amount=25000,
        employment_status="employed",
        purpose="debt_consolidation",
    )
    decision = evaluate_application(application)
    audit_log(application, decision)
```
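One way to make each record tamper-evident is to chain it to the previous one with a hash. This is a sketch, not part of the code above: `seal_record` is a hypothetical helper that fingerprints the canonical JSON of a record together with the previous record's digest.

```python
import hashlib
import json


def seal_record(record: dict, prev_hash: str = "0" * 64) -> dict:
    # Canonical JSON (sorted keys, no whitespace) so the digest is stable.
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {**record, "prev_hash": prev_hash, "record_hash": digest}


r1 = seal_record({"applicant_id": "A123", "decision": "approve"})
r2 = seal_record({"applicant_id": "A124", "decision": "review"},
                 prev_hash=r1["record_hash"])
```

Altering any sealed record changes every digest after it, which makes after-the-fact edits detectable during a regulator review.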
Production Considerations
- **Deployment**
  - Keep policy checks inside your service boundary.
  - Do not outsource core underwriting logic to prompts alone.
  - Put the LLM behind a timeout and fallback path so applications do not stall.
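A minimal sketch of such a fallback path, assuming a hypothetical `evaluate` callable (for example `evaluate_application` from above):

```python
from concurrent.futures import ThreadPoolExecutor


def evaluate_with_fallback(evaluate, app, timeout_s: float = 10.0) -> dict:
    """Run the LLM-backed evaluation under a hard deadline.

    On timeout or any model error, route the application to human review
    instead of stalling the request.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(evaluate, app)
    try:
        return future.result(timeout=timeout_s)
    except Exception:
        return {
            "decision": "review",
            "risk_band": "high",
            "reason": "Automated evaluation unavailable; routed to underwriter.",
        }
    finally:
        # Do not block on a possibly hung worker thread.
        pool.shutdown(wait=False)
```

The failure mode is deliberately conservative: an unavailable model produces a review, never an automated approval.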
- **Monitoring**
  - Track approval rate by segment, override rate by underwriters, and rejection reasons.
  - Watch for drift in credit score bands, DTI distributions, and geography-specific outcomes.
  - Log every prompt/response pair with PII redaction where required.
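A minimal in-process sketch of segment-level tracking (`DecisionMonitor` is hypothetical; a production system would ship these counters to a metrics backend):

```python
from collections import Counter, defaultdict


class DecisionMonitor:
    """Counts decisions per segment so approval-rate drift is visible."""

    def __init__(self) -> None:
        self.by_segment: dict[str, Counter] = defaultdict(Counter)

    def record(self, segment: str, decision: str) -> None:
        self.by_segment[segment][decision] += 1

    def approval_rate(self, segment: str) -> float:
        counts = self.by_segment[segment]
        total = sum(counts.values())
        return counts["approve"] / total if total else 0.0


monitor = DecisionMonitor()
monitor.record("US-prime", "approve")
monitor.record("US-prime", "approve")
monitor.record("US-prime", "reject")
```

Comparing these rates across segments over time is how drift in geography-specific outcomes shows up early.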
- **Guardrails**
  - Block unsupported countries before model invocation to respect data residency and lending restrictions.
  - Use schema validation on all inputs and outputs.
  - Add human review thresholds for borderline cases like medium-risk applications.
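A sketch of such a threshold check (the cutoffs are illustrative, not policy):

```python
def needs_human_review(decision: str, risk_band: str, dti_ratio: float,
                       dti_review_floor: float = 0.35) -> bool:
    # Explicit "review" decisions always go to an underwriter.
    if decision == "review":
        return True
    # Escalate medium-risk approvals rather than letting them auto-complete.
    if decision == "approve" and risk_band == "medium":
        return True
    # Escalate approvals with a DTI near the policy limit.
    if decision == "approve" and dti_ratio >= dti_review_floor:
        return True
    return False
```

Routing through a check like this keeps the agent from finalizing any decision that policy reserves for manual approval.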
- **Compliance**
  - Store audit logs in your regulated region.
  - Version prompts and underwriting policies together.
  - Make adverse action reasons explainable in plain language for customer communications.
Common Pitfalls
- **Letting the LLM make hard policy decisions**
  - Avoid this by encoding eligibility rules in Python first.
  - The model should handle interpretation and summarization, not legal thresholds.
- **Returning free-form text instead of structured decisions**
  - Avoid this by using `with_structured_output()` and Pydantic models.
  - Free text makes downstream automation brittle and hard to audit.
- **Ignoring jurisdiction and residency constraints**
  - Avoid this by checking country/region before any external model call.
  - Fintech systems often need region-specific processing and storage controls.
If you build it this way, LangChain becomes the orchestration layer for underwriting judgment while Python owns the actual risk controls. That split is what keeps a loan approval agent usable in production fintech instead of becoming a demo that breaks under compliance review.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.