How to Build an Underwriting Agent Using LangGraph in Python for Fintech

By Cyprian Aarons · Updated 2026-04-21

Tags: underwriting, langgraph, python, fintech

An underwriting agent takes a loan or credit application, gathers the right signals, checks policy rules, scores risk, and produces a decision with an audit trail. For fintech, that matters because you need fast decisions without losing control over compliance, explainability, and deterministic policy enforcement.

Architecture

  • Application intake
    • Normalizes borrower data from API payloads, KYC results, bank statements, and bureau pulls.
  • Policy engine
    • Applies hard underwriting rules like minimum income, DTI thresholds, fraud flags, residency restrictions, and product eligibility.
  • Risk assessment node
    • Uses an LLM or scoring service to summarize risk factors and generate a structured recommendation.
  • Decision node
    • Converts policy + risk outputs into approve, review, or decline.
  • Audit logger
    • Persists every intermediate state for model governance, adverse action reasons, and regulator review.
  • Human review handoff
    • Routes borderline cases to an analyst when confidence is low or policy exceptions are detected.

Implementation

1) Define the state and the graph nodes

Use a typed state object so every step in the workflow has a clear contract. In fintech systems, this keeps your underwriting logic testable and easier to audit.

from typing import TypedDict, Literal, List, Dict, Any
from langgraph.graph import StateGraph, START, END

Decision = Literal["approve", "review", "decline"]

class UnderwritingState(TypedDict, total=False):
    application: Dict[str, Any]
    policy_checks: Dict[str, Any]
    risk_summary: Dict[str, Any]
    decision: Decision
    reasons: List[str]
    audit_log: List[Dict[str, Any]]

def intake_node(state: UnderwritingState) -> UnderwritingState:
    app = state["application"]
    return {
        "application": app,
        "audit_log": state.get("audit_log", []) + [{"step": "intake", "status": "ok"}],
    }

def policy_node(state: UnderwritingState) -> UnderwritingState:
    app = state["application"]
    reasons = []
    passed = True

    if app["monthly_income"] < 3000:
        passed = False
        reasons.append("income_below_threshold")
    if app["debt_to_income"] > 0.45:
        passed = False
        reasons.append("dti_above_threshold")
    if app.get("country") not in {"US", "CA"}:
        passed = False
        reasons.append("unsupported_residency")

    return {
        "policy_checks": {"passed": passed},
        "reasons": state.get("reasons", []) + reasons,
        "audit_log": state.get("audit_log", []) + [{"step": "policy", "passed": passed}],
    }
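Because each node is a pure function of state, the policy gate can be unit-tested without compiling a graph. A minimal sketch, inlining a trimmed copy of policy_node so it runs standalone:

```python
# Trimmed copy of policy_node for standalone testing; the real node
# also threads audit_log through the state.
def policy_node(state):
    app = state["application"]
    reasons, passed = [], True
    if app["monthly_income"] < 3000:
        passed = False
        reasons.append("income_below_threshold")
    if app["debt_to_income"] > 0.45:
        passed = False
        reasons.append("dti_above_threshold")
    if app.get("country") not in {"US", "CA"}:
        passed = False
        reasons.append("unsupported_residency")
    return {"policy_checks": {"passed": passed}, "reasons": reasons}

# An application that trips all three gates.
out = policy_node({"application": {
    "monthly_income": 2500, "debt_to_income": 0.50, "country": "DE",
}})
assert out["policy_checks"]["passed"] is False
assert out["reasons"] == [
    "income_below_threshold", "dti_above_threshold", "unsupported_residency",
]
```

Tests like this are cheap to run in CI and double as executable documentation of your eligibility rules.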

def risk_node(state: UnderwritingState) -> UnderwritingState:
    app = state["application"]
    score = 720 if app["credit_score"] >= 700 else 580
    return {
        "risk_summary": {
            "score": score,
            "risk_band": "low" if score >= 700 else "high",
        },
        "audit_log": state.get("audit_log", []) + [{"step": "risk", "score": score}],
    }

2) Add deterministic routing for approve/review/decline

Keep the final decision rule-based. You can still use an LLM to summarize risk factors, but do not let it override hard policy gates.

def decision_node(state: UnderwritingState) -> UnderwritingState:
    policy_passed = state["policy_checks"]["passed"]
    score = state["risk_summary"]["score"]

    if not policy_passed:
        decision = "decline"
    elif score >= 700:
        decision = "approve"
    else:
        decision = "review"

    return {
        "decision": decision,
        "audit_log": state.get("audit_log", []) + [{"step": "decision", "decision": decision}],
    }

graph = StateGraph(UnderwritingState)
graph.add_node("intake", intake_node)
graph.add_node("policy", policy_node)
graph.add_node("risk", risk_node)
graph.add_node("decision", decision_node)

graph.add_edge(START, "intake")
graph.add_edge("intake", "policy")
graph.add_edge("policy", "risk")
graph.add_edge("risk", "decision")
graph.add_edge("decision", END)

underwriting_app = graph.compile()

This is the core LangGraph pattern: build a StateGraph, register nodes with add_node, wire them with add_edge, then call compile().
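The decision gate deserves its own truth-table check, since it encodes the core credit policy. A quick sketch, again inlining a trimmed copy of decision_node so it runs without the graph:

```python
# Trimmed copy of decision_node; the real node also appends to audit_log.
def decision_node(state):
    policy_passed = state["policy_checks"]["passed"]
    score = state["risk_summary"]["score"]
    if not policy_passed:
        decision = "decline"
    elif score >= 700:
        decision = "approve"
    else:
        decision = "review"
    return {"decision": decision}

# Policy failure always declines, even with a strong score.
cases = [
    ({"policy_checks": {"passed": False}, "risk_summary": {"score": 750}}, "decline"),
    ({"policy_checks": {"passed": True}, "risk_summary": {"score": 720}}, "approve"),
    ({"policy_checks": {"passed": True}, "risk_summary": {"score": 640}}, "review"),
]
for state, expected in cases:
    assert decision_node(state)["decision"] == expected
```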

3) Run the agent on a real application payload

The compiled graph returns the final state after all nodes execute. In production you would load this input from your underwriting API or queue consumer.

application = {
    "applicant_id": "A-10091",
    "monthly_income": 6500,
    "debt_to_income": 0.31,
    "country": "US",
    "credit_score": 712,
}

result = underwriting_app.invoke({"application": application})

print(result["decision"])   # "approve" for this payload
print(result["reasons"])    # [] (no policy rules tripped)
print(result["audit_log"])  # one entry per node: intake, policy, risk, decision

If you want to branch into human review only when needed, add conditional edges with add_conditional_edges. That lets you keep the workflow explicit instead of hiding business logic inside one giant function.

4) Add a review branch for exceptions

This is where LangGraph becomes useful for operational underwriting. Borderline cases can be routed to an analyst queue while clean approvals continue automatically.

def route_after_risk(state: UnderwritingState) -> str:
    if not state["policy_checks"]["passed"]:
        return "decision"
    if state["risk_summary"]["score"] < 680:
        return "review"
    return "decision"

def review_node(state: UnderwritingState) -> UnderwritingState:
    return {
        "decision": "review",
        "reasons": state.get("reasons", []) + ["manual_review_required"],
        "audit_log": state.get("audit_log", []) + [{"step": "review", "status": "queued"}],
    }

graph2 = StateGraph(UnderwritingState)
graph2.add_node("intake", intake_node)
graph2.add_node("policy", policy_node)
graph2.add_node("risk", risk_node)
graph2.add_node("review", review_node)
graph2.add_node("decision", decision_node)

graph2.add_edge(START, "intake")
graph2.add_edge("intake", "policy")
graph2.add_edge("policy", "risk")
graph2.add_conditional_edges(
    "risk",
    route_after_risk,
    {"review": "review", "decision": "decision"},
)
graph2.add_edge("review", END)
graph2.add_edge("decision", END)

underwriting_app_v2 = graph2.compile()

Use conditional routing instead of forcing every path through the same sequence. The point is to keep exception handling visible in the graph so compliance teams can inspect it.
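Routing functions are plain functions too, so the exception branch can be verified directly. A minimal sketch inlining route_after_risk:

```python
# Copy of route_after_risk for standalone testing.
def route_after_risk(state):
    if not state["policy_checks"]["passed"]:
        return "decision"
    if state["risk_summary"]["score"] < 680:
        return "review"
    return "decision"

# Borderline score routes to manual review.
assert route_after_risk(
    {"policy_checks": {"passed": True}, "risk_summary": {"score": 650}}
) == "review"
# Clean score skips review.
assert route_after_risk(
    {"policy_checks": {"passed": True}, "risk_summary": {"score": 700}}
) == "decision"
# Policy failures go straight to the decision node to be declined.
assert route_after_risk(
    {"policy_checks": {"passed": False}, "risk_summary": {"score": 650}}
) == "decision"
```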

Production Considerations

  • Persist full audit traces
    • Store every node input/output with timestamps and model version.
    • You will need this for adverse action notices, internal QA, and regulator requests.
  • Enforce data residency
    • Keep PII in-region and avoid sending sensitive fields to external models unless your vendor contract supports it.
    • Tokenize or redact account numbers, SSNs/NINs, and bank statement details before any LLM call.
  • Add guardrails around decisions
    • Hard-code disqualifying rules outside the model.
    • The model can summarize; it should not invent underwriting policy.
  • Monitor drift and exception rates
    • Track approval rate by segment, manual review rate, false positives on fraud flags, and reason-code distribution.
    • Spikes usually mean either upstream data quality issues or a broken rule change.
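The monitoring metrics above can start as a few lines over your audit store before you reach for a dashboard. A rough sketch, assuming decisions are exported as dictionaries (the field names mirror this article's state, not a real schema):

```python
from collections import Counter

# Sample batch; in production these rows would come from your audit store.
decisions = [
    {"decision": "decline", "reasons": ["dti_above_threshold"]},
    {"decision": "approve", "reasons": []},
    {"decision": "decline", "reasons": ["income_below_threshold", "dti_above_threshold"]},
]

# Reason-code distribution: spikes in one code often point at upstream
# data-quality issues or an unintended rule change.
reason_counts = Counter(r for d in decisions for r in d["reasons"])

# Overall approval rate; in practice, slice this by segment as well.
approval_rate = sum(d["decision"] == "approve" for d in decisions) / len(decisions)

print(reason_counts.most_common())  # [('dti_above_threshold', 2), ('income_below_threshold', 1)]
```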

Common Pitfalls

  • Letting the LLM make the final credit decision
    • Avoid this by separating explanation generation from deterministic approval logic.
    • Use rules for eligibility and thresholds; use the model for summarization only.
  • Not versioning policy logic
    • If your DTI threshold changes from 0.45 to 0.40, that is a material underwriting change.
    • Version both code and configuration so you can reproduce historical decisions.
  • Skipping structured outputs
    • Free-form text is hard to validate and harder to audit.
    • Return dictionaries with explicit keys like score, risk_band, reasons, and decision.
  • Ignoring fallback paths
    • Every automated decline should have a reason code and every uncertain case should route to manual review.
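The versioning pitfall is easy to make concrete: pull thresholds out of the node code into versioned configuration, so any historical decision can be replayed against the policy that was live at the time. A minimal sketch with illustrative names:

```python
# Versioned policy snapshots; in production these would live in config
# storage keyed by effective date, not in code.
POLICY_V1 = {"version": "v1", "min_monthly_income": 3000, "max_dti": 0.45}
POLICY_V2 = {"version": "v2", "min_monthly_income": 3000, "max_dti": 0.40}

def dti_passes(app, policy):
    # Same check as policy_node, but parameterized by the policy snapshot.
    return app["debt_to_income"] <= policy["max_dti"]

app = {"debt_to_income": 0.42}
assert dti_passes(app, POLICY_V1) is True   # passed under the old threshold
assert dti_passes(app, POLICY_V2) is False  # fails after the change
```

Storing the policy version alongside each audit record is what makes historical decisions reproducible.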


By Cyprian Aarons, AI Consultant at Topiax.
