How to Build a Loan Approval Agent Using LangGraph in Python for Fintech
A loan approval agent automates the boring but high-stakes parts of underwriting: collecting applicant data, validating it, scoring risk, checking policy rules, and routing borderline cases to a human. For fintech, that matters because you need speed without losing control — every decision must be explainable, auditable, and consistent with compliance rules.
Architecture
A production loan approval agent built with LangGraph usually needs these components:
- **Input intake node**
  - Normalizes application payloads from API, web form, or CRM.
  - Validates required fields like income, employment status, requested amount, and jurisdiction.
- **Policy/rules node**
  - Applies hard constraints before any model call.
  - Example: minimum credit score threshold, debt-to-income caps, residency restrictions.
- **Risk scoring node**
  - Uses an LLM or a separate scoring service to summarize risk signals.
  - Should return structured output, not free text.
- **Decision node**
  - Converts policy + score into one of: `approve`, `reject`, or `manual_review`.
  - This is where you keep deterministic logic.
- **Audit trail node**
  - Captures every intermediate state for compliance review.
  - Needed for model governance and adverse action explanations.
- **Human escalation path**
  - Routes ambiguous or high-value loans to an underwriter.
  - Keeps the system safe when confidence is low or rules conflict.
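The "structured output, not free text" requirement for the risk scoring node can be enforced by parsing the model reply into a fixed schema and failing closed on anything else. A minimal stdlib sketch, where the `risk_score` and `reason` field names are illustrative assumptions rather than a LangGraph API:

```python
import json


def parse_risk_output(raw: str) -> dict:
    """Coerce an LLM reply into a fixed risk schema, failing closed on bad output."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"risk_score": None, "reason": "unparseable model output"}
    if not isinstance(data, dict):
        return {"risk_score": None, "reason": "unexpected output shape"}
    score = data.get("risk_score")
    # Only accept a numeric score in [0, 1]; anything else routes to review.
    if not isinstance(score, (int, float)) or not 0.0 <= score <= 1.0:
        return {"risk_score": None, "reason": "risk_score missing or out of range"}
    return {"risk_score": float(score), "reason": str(data.get("reason", ""))}
```

A `None` score can then be treated as "send to manual review" downstream, so a malformed model reply never silently approves a loan.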
Implementation
1) Define the state and decision schema
LangGraph works best when your state is explicit. For lending workflows, use a typed state object so every node knows what it can read and write.
```python
from typing import TypedDict, Literal, Optional

from langgraph.graph import StateGraph, START, END

Decision = Literal["approve", "reject", "manual_review"]


class LoanState(TypedDict):
    applicant_id: str
    income: float
    credit_score: int
    debt_to_income: float
    jurisdiction: str
    requested_amount: float
    risk_score: Optional[float]
    decision: Optional[Decision]
    reason: Optional[str]
```
This keeps the workflow deterministic at the edges. In fintech, that matters more than clever prompts.
2) Add rule checks and risk scoring nodes
Use plain Python for policy enforcement. Keep compliance rules outside the model so they are testable and easy to audit.
```python
def validate_application(state: LoanState) -> LoanState:
    required = [
        "applicant_id", "income", "credit_score",
        "debt_to_income", "jurisdiction", "requested_amount",
    ]
    for field in required:
        # Fail closed: treat missing, empty, or zero values as invalid input.
        if state.get(field) in (None, "", 0):
            return {**state, "decision": "reject", "reason": f"Missing required field: {field}"}
    return state


def apply_policy(state: LoanState) -> LoanState:
    # Don't overwrite (or crash on) an application validation already rejected.
    if state.get("decision") == "reject":
        return state
    # Hard compliance constraints run before any model call.
    if state["jurisdiction"] not in {"US", "UK"}:
        return {**state, "decision": "reject", "reason": "Unsupported jurisdiction"}
    if state["credit_score"] < 620:
        return {**state, "decision": "reject", "reason": "Credit score below minimum"}
    if state["debt_to_income"] > 0.45:
        return {**state, "decision": "manual_review", "reason": "DTI above threshold"}
    return state


def score_risk(state: LoanState) -> LoanState:
    # Placeholder heuristic; replace with a real model call or scoring service.
    score = (
        (850 - state["credit_score"]) / 850 * 0.5 +
        min(state["debt_to_income"], 1.0) * 0.5
    )
    return {**state, "risk_score": round(score, 3)}
```
The key pattern here is separation of concerns:
- validation is deterministic
- policy is deterministic
- risk scoring can be probabilistic
- the final decision remains deterministic
That structure makes audits much easier.
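Because validation and policy are plain Python, every rule can be covered by a direct unit test. A minimal sketch (the policy body from above is repeated with loose typing so the example runs on its own):

```python
# Policy logic repeated loosely typed so this test sketch stands alone.
def apply_policy(state: dict) -> dict:
    if state["jurisdiction"] not in {"US", "UK"}:
        return {**state, "decision": "reject", "reason": "Unsupported jurisdiction"}
    if state["credit_score"] < 620:
        return {**state, "decision": "reject", "reason": "Credit score below minimum"}
    if state["debt_to_income"] > 0.45:
        return {**state, "decision": "manual_review", "reason": "DTI above threshold"}
    return state


base = {"jurisdiction": "US", "credit_score": 700, "debt_to_income": 0.30}

# One assertion per rule, plus the pass-through case.
assert apply_policy({**base, "jurisdiction": "DE"})["decision"] == "reject"
assert apply_policy({**base, "credit_score": 600})["decision"] == "reject"
assert apply_policy({**base, "debt_to_income": 0.50})["decision"] == "manual_review"
assert "decision" not in apply_policy(base)
```

Tests like these double as executable documentation of the policy for auditors.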
3) Build the graph with conditional routing
Now wire the workflow using `StateGraph`. The router decides whether the application gets approved automatically or escalated.
```python
def decide(state: LoanState) -> LoanState:
    # Respect any decision already made by validation or policy.
    if state.get("decision") in {"reject", "manual_review"}:
        return state
    # Fail closed: a missing risk score is treated as maximum risk.
    risk = state.get("risk_score", 1.0)
    if risk < 0.25 and state["requested_amount"] <= state["income"] * 2:
        return {**state, "decision": "approve", "reason": "Low risk and within exposure limit"}
    if risk < 0.6:
        return {**state, "decision": "manual_review", "reason": "Moderate risk"}
    return {**state, "decision": "reject", "reason": "High risk"}


def route_after_policy(state: LoanState) -> str:
    # Reject and manual_review short-circuit scoring; everything else is scored.
    if state.get("decision") in {"reject", "manual_review"}:
        return END
    return "score_risk"


graph = StateGraph(LoanState)
graph.add_node("validate_application", validate_application)
graph.add_node("apply_policy", apply_policy)
graph.add_node("score_risk", score_risk)
graph.add_node("decide", decide)

graph.add_edge(START, "validate_application")
graph.add_edge("validate_application", "apply_policy")
graph.add_conditional_edges("apply_policy", route_after_policy)
graph.add_edge("score_risk", "decide")
graph.add_edge("decide", END)

app = graph.compile()
```
This is the core LangGraph pattern:
- `add_node()` for each step
- `add_edge()` for linear flow
- `add_conditional_edges()` for branching decisions
- `compile()` to produce an executable graph
4) Run the workflow and persist outputs for audit
In production you should store input payloads, final decisions, and intermediate reasoning artifacts in an immutable audit store.
```python
application = {
    "applicant_id": "A123",
    "income": 90000,
    "credit_score": 710,
    "debt_to_income": 0.32,
    "jurisdiction": "US",
    "requested_amount": 15000,
}

result = app.invoke(application)
print(result["decision"])
print(result["reason"])
print(result.get("risk_score"))
```
For regulated lending flows, also persist:
- request timestamp
- model version or rule version
- decision outcome
- reason code
- reviewer ID if manually approved later
That gives you traceability for internal audit and regulator requests.
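One way to capture all of that is an append-only record per decision. The sketch below writes JSON lines with a timestamp and rule version; the file path and `RULE_VERSION` string are assumptions standing in for your real audit store and release tags:

```python
import json
import time

# Assumed versioning scheme; tie this to your actual rules/prompt release tags.
RULE_VERSION = "policy-2024.06"


def write_audit_record(state: dict, path: str = "audit_log.jsonl") -> dict:
    """Append one decision as a JSON line; append-only storage approximates immutability."""
    record = {
        "timestamp": time.time(),
        "rule_version": RULE_VERSION,
        "applicant_id": state.get("applicant_id"),
        "decision": state.get("decision"),
        "reason": state.get("reason"),
        "risk_score": state.get("risk_score"),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

In production you would point this at a write-once store (object storage with versioning, or a ledger table) rather than a local file.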
Production Considerations
- **Keep hard rules outside the LLM**
  - Compliance logic like eligibility thresholds should be pure Python or policy engine code.
  - Don’t let a prompt decide whether a restricted jurisdiction is allowed.
- **Log every transition**
  - Capture node inputs/outputs and final decision codes.
  - You need this for adverse action notices, dispute handling, and model governance reviews.
- **Respect data residency**
  - If applicant data must stay in-region, ensure your LangGraph execution environment and any model endpoints comply.
  - Don’t send PII to external services unless legal review has cleared it.
- **Add human-in-the-loop thresholds**
  - Route borderline cases to underwriters instead of forcing auto-reject or auto-approve.
  - This reduces false positives while keeping portfolio risk under control.
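Transition logging does not require changing the nodes themselves: a small wrapper can log every node's input state and resulting decision before registration. A sketch, assuming nodes take and return plain dict state as in this article:

```python
import functools
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("loan_agent")


def logged_node(fn):
    """Wrap a graph node so its input state and resulting decision are logged."""
    @functools.wraps(fn)
    def wrapper(state: dict) -> dict:
        log.info("enter %s state=%s", fn.__name__, json.dumps(state, default=str))
        result = fn(state)
        log.info("exit %s decision=%s", fn.__name__, result.get("decision"))
        return result
    return wrapper

# Usage sketch: graph.add_node("apply_policy", logged_node(apply_policy))
```

Swap the logger for your structured logging or tracing backend; the wrapper shape stays the same.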
Common Pitfalls
- **Using the LLM as the final decision-maker**
  - Avoid this by keeping approval logic deterministic in a dedicated node.
  - The model can summarize risk; it should not own policy enforcement.
- **Not versioning rules and prompts**
  - A loan approved today must be explainable six months later.
  - Store rule versions alongside every decision so audits can reproduce outcomes.
- **Ignoring incomplete or dirty input data**
  - Missing income fields or malformed DTI values will break downstream logic.
  - Validate at the first node and fail closed with a clear reason code.
- **Skipping jurisdiction checks**
  - Lending rules vary by country and sometimes by region inside a country.
  - Put residency and regulatory constraints before any scoring step.
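Jurisdiction rules tend to grow, so keeping them in a small declarative table means adding a region never requires editing decision logic. A sketch with assumed, illustrative thresholds (real values come from compliance and legal):

```python
from typing import Optional

# Illustrative per-jurisdiction thresholds; real values come from compliance/legal.
JURISDICTION_RULES = {
    "US": {"min_credit_score": 620, "max_dti": 0.45},
    "UK": {"min_credit_score": 600, "max_dti": 0.40},
}


def check_jurisdiction(state: dict) -> Optional[str]:
    """Return a rejection reason, or None if the application passes."""
    rules = JURISDICTION_RULES.get(state["jurisdiction"])
    if rules is None:
        # Fail closed on any region we have no ruleset for.
        return "Unsupported jurisdiction"
    if state["credit_score"] < rules["min_credit_score"]:
        return "Credit score below jurisdiction minimum"
    if state["debt_to_income"] > rules["max_dti"]:
        return "DTI above jurisdiction cap"
    return None
```

Versioning this table alongside decisions (see the audit section) also answers the reproducibility pitfall above.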
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.