How to Build a Loan Approval Agent Using LangGraph in Python for Banking

By Cyprian Aarons · Updated 2026-04-21
loan-approval · langgraph · python · banking

A loan approval agent automates the first pass of a credit decision: it collects applicant data, checks policy rules, evaluates risk signals, and routes borderline cases to a human underwriter. For banking, that matters because you need speed without losing control — every decision must be explainable, auditable, and consistent with credit policy.

Architecture

  • Input normalization layer

    • Converts raw application payloads into a typed schema.
    • Validates required fields like income, employment status, requested amount, and jurisdiction.
  • Policy evaluation node

    • Applies hard rules such as minimum age, debt-to-income thresholds, and prohibited geographies.
    • Produces deterministic outcomes for compliance-heavy checks.
  • Risk scoring node

    • Uses an internal model or external scoring service to estimate default risk.
    • Keeps model output separate from policy output so you can audit both.
  • Decision router

    • Chooses between approve, reject, or manual_review.
    • Ensures borderline or incomplete cases are escalated to a human.
  • Audit trail store

    • Persists every state transition and final decision.
    • Captures inputs, intermediate outputs, timestamps, and policy reasons.
  • Human review handoff

    • Packages the case for underwriting when confidence is low or policy requires it.
    • Preserves traceability for compliance teams.

Implementation

1) Define the application state

Use a typed state object so every node in the graph shares the same contract. In banking workflows, this is where you keep both the raw application data and the derived decision fields.

from typing import TypedDict, Literal, Optional
from langgraph.graph import StateGraph, START, END

class LoanState(TypedDict):
    applicant_name: str
    annual_income: float
    requested_amount: float
    credit_score: int
    debt_to_income: float
    jurisdiction: str
    risk_score: Optional[float]
    policy_passed: Optional[bool]
    decision: Optional[Literal["approve", "reject", "manual_review"]]
    reason: Optional[str]

2) Build deterministic policy and scoring nodes

Keep policy checks explicit. Do not bury compliance logic inside an LLM prompt; that creates audit problems and inconsistent outcomes.

def policy_check(state: LoanState) -> dict:
    blocked_jurisdictions = {"XK", "IR", "KP"}
    if state["jurisdiction"] in blocked_jurisdictions:
        return {
            "policy_passed": False,
            "decision": "reject",
            "reason": f"Jurisdiction {state['jurisdiction']} is restricted by policy",
        }

    if state["debt_to_income"] > 0.43:
        return {
            "policy_passed": False,
            "decision": "manual_review",
            "reason": "Debt-to-income ratio exceeds threshold",
        }

    return {"policy_passed": True}

def risk_score(state: LoanState) -> dict:
    score = 0.0
    score += (850 - state["credit_score"]) / 850 * 0.5
    score += min(state["debt_to_income"], 1.0) * 0.3
    score += max(0.0, (100000 - state["annual_income"]) / 100000) * 0.2
    return {"risk_score": round(score, 4)}

3) Add routing logic and compile the graph

add_conditional_edges() is the key LangGraph pattern here. It routes each case based on the shared state rather than hard-wired edges, so compliance outcomes and risk outcomes can take different paths through the graph.

def route_after_policy(state: LoanState) -> str:
    # A policy rejection or escalation is final; never let the
    # risk model overwrite a compliance outcome downstream.
    if state.get("decision") in {"reject", "manual_review"}:
        return END
    return "risk_score"

def final_decision(state: LoanState) -> dict:
    risk = state.get("risk_score", 1.0)
    if risk < 0.25:
        return {"decision": "approve", "reason": "Risk within approved threshold"}
    if risk < 0.45:
        return {"decision": "manual_review", "reason": "Risk requires underwriter review"}
    return {"decision": "reject", "reason": "Risk exceeds acceptance threshold"}

graph = StateGraph(LoanState)
graph.add_node("policy_check", policy_check)
graph.add_node("risk_score", risk_score)
graph.add_node("final_decision", final_decision)

graph.add_edge(START, "policy_check")
graph.add_conditional_edges(
    "policy_check",
    route_after_policy,
    {"risk_score": "risk_score", END: END},
)
graph.add_edge("risk_score", "final_decision")
graph.add_edge("final_decision", END)

app = graph.compile()

4) Run the workflow with a real application payload

This is what your service layer calls after KYC and document extraction have already populated structured fields.

application = {
    "applicant_name": "Amina Patel",
    "annual_income": 92000.0,
    "requested_amount": 18000.0,
    "credit_score": 712,
    "debt_to_income": 0.31,
    "jurisdiction": "US",
}

result = app.invoke(application)
print(result)

For production banking systems, you would also persist result plus the full input payload to an immutable audit log before returning the decision downstream.
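As a minimal sketch of that persistence step, the helper below appends each decision to a JSON Lines file and chains records by hash so tampering with history is detectable. The file path, record shape, and hash chaining are illustrative assumptions, not part of the graph above; a real deployment would use an append-only datastore with proper access controls.

```python
import json
import hashlib
from datetime import datetime, timezone

def append_audit_record(path: str, payload: dict, result: dict, prev_hash: str = "") -> str:
    """Append one decision record to a JSON Lines audit log.

    Each record embeds the hash of the previous record, so any edit
    to an earlier line breaks the chain. Returns this record's hash.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": payload,
        "output": result,
        "prev_hash": prev_hash,
    }
    body = json.dumps(record, sort_keys=True)
    record_hash = hashlib.sha256(body.encode()).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps({"record": record, "hash": record_hash}) + "\n")
    return record_hash
```

Call it with the raw application dict and the graph result before returning the decision, passing the previous record's hash to extend the chain.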

Production Considerations

  • Compliance and explainability

    • Keep policy decisions deterministic and versioned.
    • Store rule versions alongside each decision so auditors can reconstruct why a loan was approved or rejected.
  • Data residency

    • Pin execution to the correct region for customer data.
    • If you use external models or APIs for enrichment, verify they do not move regulated data outside approved jurisdictions.
  • Monitoring

    • Track approval rate, manual review rate, false positives on rejects, and drift in credit score distributions.
    • Alert when a rule change causes a sudden shift in outcomes by product line or geography.
  • Human override controls

    • Require underwriter approval for edge cases like thin-file applicants or high-value loans.
    • Log overrides separately from automated decisions so model governance can review them later.
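The outcome-drift monitoring described above can be computed directly from persisted decisions. A rough sketch, where the 10% tolerance and the flat list-of-decisions input are illustrative assumptions rather than recommended production values:

```python
from collections import Counter

def decision_mix(decisions: list[str]) -> dict[str, float]:
    """Return the share of each outcome (approve / reject / manual_review)
    in a batch of decisions."""
    counts = Counter(decisions)
    total = sum(counts.values()) or 1
    return {outcome: count / total for outcome, count in counts.items()}

def shifted(current: dict[str, float], baseline: dict[str, float],
            tolerance: float = 0.10) -> bool:
    """True if any outcome's share moved more than `tolerance`
    versus the baseline period."""
    outcomes = set(current) | set(baseline)
    return any(abs(current.get(o, 0.0) - baseline.get(o, 0.0)) > tolerance
               for o in outcomes)
```

Feeding this daily batches per product line or geography, and alerting when shifted() returns True after a rule change, covers the sudden-shift case called out above.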

Common Pitfalls

  • Putting compliance logic in an LLM prompt

    • Avoid this by keeping hard rules in Python nodes like policy_check().
    • Use LLMs only for summarization or document extraction where variability is acceptable.
  • Not separating policy from risk

    • If you mix them together, you cannot explain whether a rejection came from regulation or model output.
    • Keep distinct fields like policy_passed, risk_score, and decision.
  • Skipping audit persistence

    • A graph result alone is not enough for banking.
    • Persist inputs, node outputs, timestamps, rule versions, and final decisions in an immutable store.
  • Ignoring regional constraints

    • A loan agent that processes EU customer data in the wrong region will fail compliance review fast.
    • Enforce deployment boundaries at the infrastructure layer before the graph runs.
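One lightweight way to keep rule versions reconstructable, as the audit pitfall above requires, is to stamp every policy node's output with the version of the rules that produced it. The POLICY_VERSION constant, the wrapper, and the simplified dti_rule below are illustrative assumptions, not part of the graph in this article:

```python
POLICY_VERSION = "2026-04-21.1"  # bump on every rule change (illustrative)

def stamp_policy_version(node):
    """Wrap a policy node so its state update always records the
    rule version that was in force when the decision was made."""
    def wrapped(state: dict) -> dict:
        update = node(state)
        update["policy_version"] = POLICY_VERSION
        return update
    return wrapped

def dti_rule(state: dict) -> dict:
    # Simplified stand-in for a policy node like policy_check().
    return {"policy_passed": state["debt_to_income"] <= 0.43}

versioned_dti_rule = stamp_policy_version(dti_rule)
```

Persisting policy_version alongside each decision lets auditors replay any historical case against the exact rules that evaluated it.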

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
