How to Build a Loan Approval Agent Using LangGraph in Python for Lending
A loan approval agent automates the first-pass decisioning workflow for lending: it ingests an application, validates required fields, checks policy rules, scores risk, and returns an approve/review/decline recommendation with an audit trail. For lenders, this matters because it reduces manual review load, standardizes decisions, and gives compliance teams a traceable path from input data to outcome.
Architecture
- Application intake
  - Accepts borrower profile, loan amount, income, employment status, requested term, and consent flags.
  - Normalizes raw payloads into a typed internal state.
- Policy validation node
  - Checks hard rules like missing consent, unsupported geography, age restrictions, or incomplete KYC.
  - Short-circuits the flow when mandatory conditions fail.
- Risk scoring node
  - Computes a simple decision score using debt-to-income ratio, credit band, and loan size.
  - Can be replaced with a model call or internal pricing engine later.
- Decision node
  - Converts policy and risk outputs into `approve`, `manual_review`, or `decline`.
  - Keeps the logic deterministic for auditability.
- Audit trail
  - Stores every intermediate state update and final decision.
  - Critical for model governance, adverse action review, and regulator questions.
- Human review handoff
  - Routes borderline cases to underwriting instead of forcing automation.
  - Keeps the system aligned with lending policy and fair lending controls.
Implementation
1. Define the graph state and decision functions
Use a typed state so every node reads and writes predictable fields. In lending systems, that predictability matters more than clever prompts.
```python
from typing import TypedDict, Literal

from langgraph.graph import StateGraph, START, END

Decision = Literal["approve", "manual_review", "decline"]


class LoanState(TypedDict):
    applicant_name: str
    income: float
    monthly_debt: float
    credit_score: int
    requested_amount: float
    consent: bool
    jurisdiction: str
    policy_ok: bool
    risk_score: float
    decision: Decision
    reason: str


def validate_policy(state: LoanState) -> dict:
    if not state["consent"]:
        return {"policy_ok": False, "decision": "decline", "reason": "Missing applicant consent"}
    if state["jurisdiction"] not in {"US", "CA"}:
        return {"policy_ok": False, "decision": "decline", "reason": "Unsupported jurisdiction"}
    return {"policy_ok": True}


def score_risk(state: LoanState) -> dict:
    # Debt-to-income ratio; max() guards against zero or missing income.
    dti = state["monthly_debt"] / max(state["income"], 1)
    base = 100.0
    base -= dti * 50                                       # debt burden penalty
    base -= max(0, (700 - state["credit_score"]) / 10)     # no penalty at or above 700
    base -= state["requested_amount"] / max(state["income"], 1) * 10  # loan-size penalty
    return {"risk_score": round(base, 2)}


def decide(state: LoanState) -> dict:
    if not state.get("policy_ok", False):
        return {}  # validate_policy already set a decline decision and reason
    score = state["risk_score"]
    if score >= 75:
        return {"decision": "approve", "reason": "Meets policy and risk threshold"}
    if score >= 55:
        return {"decision": "manual_review", "reason": "Borderline risk requires underwriting review"}
    return {"decision": "decline", "reason": "Risk score below threshold"}
```
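To see how the scoring formula behaves, here is a quick standalone check of the arithmetic for a sample applicant. It mirrors the `score_risk` math term by term outside the graph; the sample numbers are illustrative.

```python
# Mirror of the score_risk arithmetic for one sample applicant.
income = 8500.0
monthly_debt = 1200.0
credit_score = 721
requested_amount = 15000.0

dti = monthly_debt / max(income, 1)                 # ~0.141
base = 100.0
base -= dti * 50                                    # debt burden penalty (~7.06)
base -= max(0, (700 - credit_score) / 10)           # 0: score is above 700
base -= requested_amount / max(income, 1) * 10      # loan-size penalty (~17.65)

print(round(base, 2))  # 75.29, just above the approve threshold of 75
```

Walking through the terms like this also makes the thresholds tangible: a slightly larger loan or lower income would drop this applicant into the `manual_review` band.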
2. Build the LangGraph workflow with conditional routing
StateGraph is the right abstraction here because lending workflows are not linear. Some applications should stop immediately on policy failure; others should continue to scoring.
```python
from langgraph.graph import StateGraph, START, END


def route_after_policy(state: LoanState) -> str:
    # On policy failure, validate_policy has already set the decision;
    # skip scoring and end the run.
    if not state["policy_ok"]:
        return END
    return "score_risk"


workflow = StateGraph(LoanState)
workflow.add_node("validate_policy", validate_policy)
workflow.add_node("score_risk", score_risk)
workflow.add_node("decide", decide)

workflow.add_edge(START, "validate_policy")
workflow.add_conditional_edges("validate_policy", route_after_policy)
workflow.add_edge("score_risk", "decide")
workflow.add_edge("decide", END)

app = workflow.compile()
```
3. Run an application through the graph
This is the pattern you want in production services: receive request payloads at the API layer, pass them into app.invoke(), then persist both input and output for audit.
```python
application = {
    "applicant_name": "Jane Doe",
    "income": 8500.0,
    "monthly_debt": 1200.0,
    "credit_score": 721,
    "requested_amount": 15000.0,
    "consent": True,
    "jurisdiction": "US",
}

# Seed the computed fields with safe defaults; the nodes overwrite them.
initial_state: LoanState = {
    **application,
    "policy_ok": False,
    "risk_score": 0.0,
    "decision": "manual_review",
    "reason": "",
}

result = app.invoke(initial_state)
print(result["decision"])    # approve
print(result["reason"])      # Meets policy and risk threshold
print(result["risk_score"])  # 75.29
```
4. Add audit logging around execution
For lending you need an immutable record of what happened. At minimum capture request ID, inputs used for decisioning, node outputs, final outcome, and timestamp.
```python
import json
from datetime import datetime, timezone


def run_with_audit(payload: dict) -> dict:
    state = {
        **payload,
        "policy_ok": False,
        "risk_score": 0.0,
        "decision": "manual_review",
        "reason": "",
    }
    output = app.invoke(state)
    audit_event = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "request_id": payload.get("request_id"),
        "input": payload,
        "output": output,
        "system": "loan_approval_agent",
    }
    # Replace print with an append-only audit sink in production.
    print(json.dumps(audit_event))
    return output
```
Production Considerations
- Compliance controls
  - Keep hard decline rules deterministic and versioned.
  - Separate policy logic from any ML-based scoring so you can explain adverse actions cleanly.
  - Log rule versions used per decision.
- Data residency
  - Store borrower PII in-region according to your operating jurisdiction.
  - If you call external models or APIs, ensure they do not move data across borders without approval.
  - Redact sensitive fields before telemetry export.
- Monitoring
  - Track approval rate by segment, manual review rate, decline reasons, and policy-fail counts.
  - Watch for drift in credit-score bands or DTI distribution.
  - Alert on sudden changes in routing to human review; that often signals upstream data issues.
- Guardrails
  - Never let free-form model output directly approve or decline loans without policy checks.
  - Enforce minimum input completeness before scoring.
  - Keep protected-class attributes out of the decision path unless your compliance team has explicitly approved fairness testing use cases.
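Two of these controls are cheap to sketch in code. The snippet below is a minimal illustration, not part of the article's agent: `POLICY_VERSION` and `redact_for_telemetry` are hypothetical names, and the field list would come from your compliance team.

```python
import hashlib

# Hypothetical version string stamped on every decision so an adverse
# action can be traced back to the exact rule set that produced it.
POLICY_VERSION = "lending-policy-2024.06"

SENSITIVE_FIELDS = {"applicant_name"}  # extend with SSN, DOB, address, etc.


def redact_for_telemetry(event: dict) -> dict:
    """Hash sensitive fields so telemetry stays joinable but not readable."""
    redacted = dict(event)
    for field in SENSITIVE_FIELDS & redacted.keys():
        digest = hashlib.sha256(str(redacted[field]).encode()).hexdigest()
        redacted[field] = f"sha256:{digest[:12]}"
    return redacted


event = {"applicant_name": "Jane Doe", "decision": "approve",
         "policy_version": POLICY_VERSION}
print(redact_for_telemetry(event))
```

Hashing rather than dropping the field keeps events for the same applicant correlatable in monitoring without exposing PII outside the region.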
Common Pitfalls
- Using one big LLM prompt for the whole decision
  - This creates weak auditability and inconsistent outcomes.
  - Fix it by splitting validation, scoring, and final decision into separate nodes with explicit inputs and outputs.
- Skipping hard-stop compliance checks
  - If consent is missing or jurisdiction is unsupported, the workflow must stop immediately.
  - Fix it with conditional edges that route directly to `END` on policy failure.
- Not persisting intermediate state
  - Regulators will ask why a borrower was declined or reviewed manually.
  - Fix it by storing node outputs alongside final decisions so you can reconstruct the path through the graph later.
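Capturing node outputs is straightforward because each node already returns a partial state update. The sketch below uses a pure-Python stand-in for the graph runner so the idea is self-contained: `run_traced` and the stub nodes are illustrative, not LangGraph API.

```python
def run_traced(nodes, state):
    """Run nodes in order, recording each node's partial update."""
    trace = []
    for name, fn in nodes:
        update = fn(dict(state))          # node sees a copy of current state
        trace.append({"node": name, "update": update})
        state = {**state, **update}       # merge the update, LangGraph-style
    return state, trace


# Stub nodes standing in for validate_policy / score_risk / decide.
nodes = [
    ("validate_policy", lambda s: {"policy_ok": True}),
    ("score_risk", lambda s: {"risk_score": 75.29}),
    ("decide", lambda s: {"decision": "approve"}),
]
final_state, trace = run_traced(nodes, {"applicant_name": "Jane Doe"})
print([t["node"] for t in trace])  # ['validate_policy', 'score_risk', 'decide']
```

Persisting `trace` next to the final decision gives you exactly the reconstruction path regulators ask for: which node produced which value, in what order.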
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit