How to Build a Loan Approval Agent Using LangChain in Python for Banking

By Cyprian Aarons · Updated 2026-04-21
Tags: loan-approval, langchain, python, banking

A loan approval agent automates the first-pass decisioning flow for banking applications. It pulls applicant data, checks policy rules, scores risk, and returns a recommendation with an audit trail so humans can review edge cases instead of manually reading every file.

Architecture

  • Input ingestion layer

    • Accepts structured application data: income, debt, employment, credit score, collateral, jurisdiction.
    • Normalizes fields before they hit the model or rules engine.
  • Policy and compliance rules

    • Hard gates for KYC/AML flags, minimum income thresholds, DTI limits, residency constraints.
    • These should be deterministic, not left to the LLM.
  • LLM reasoning layer

    • Uses LangChain to summarize the case and explain why a recommendation was made.
    • Should not be the source of truth for approval logic.
  • Risk scoring tool

    • Encapsulates bank-specific scoring logic in a callable tool.
    • Lets the agent invoke structured calculations rather than hallucinating numbers.
  • Decision formatter

    • Produces a strict output schema: approve, reject, or manual_review.
    • Includes reason codes for audit and downstream case management.
  • Audit logging

    • Stores prompt inputs, tool calls, outputs, timestamps, model version, and reviewer notes.
    • Required for model governance and regulatory review.

Implementation

1) Define the application schema and deterministic policy checks

Keep policy checks outside the LLM. In banking, anything tied to compliance should be explicit Python logic.

from typing import Literal
from pydantic import BaseModel, Field

class LoanApplication(BaseModel):
    applicant_id: str
    country: str
    monthly_income: float = Field(gt=0)
    monthly_debt: float = Field(ge=0)
    credit_score: int = Field(ge=300, le=850)
    employment_years: float = Field(ge=0)
    requested_amount: float = Field(gt=0)

class Decision(BaseModel):
    decision: Literal["approve", "reject", "manual_review"]
    reason_code: str
    summary: str

def policy_gate(app: LoanApplication) -> tuple[bool, str]:
    if app.country not in {"US", "CA", "GB"}:
        return False, "DATA_RESIDENCY_OR_JURISDICTION_BLOCK"
    if app.credit_score < 580:
        return False, "LOW_CREDIT_SCORE"
    dti = app.monthly_debt / app.monthly_income
    if dti > 0.45:
        return False, "HIGH_DTI"
    return True, "PASS"
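
A quick sanity check of the gate in isolation helps before wiring anything else; the figures below are illustrative:

app = LoanApplication(
    applicant_id="A001",
    country="US",
    monthly_income=5000,
    monthly_debt=2600,
    credit_score=700,
    employment_years=3,
    requested_amount=10000,
)
print(policy_gate(app))  # (False, 'HIGH_DTI') since dti = 2600 / 5000 = 0.52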

2) Build a LangChain tool for risk scoring

Use a real tool so the model can call controlled business logic. The agent should never invent DTI or affordability numbers.

from langchain_core.tools import tool

@tool
def calculate_risk_score(monthly_income: float, monthly_debt: float,
                         credit_score: int, employment_years: float) -> dict:
    """Compute DTI, a heuristic risk score, and a risk band for an applicant."""
    dti = monthly_debt / monthly_income
    score = 100
    score -= int(dti * 100)
    score += (credit_score - 650) // 10
    score += int(min(employment_years * 2, 20))

    if score >= 80:
        band = "low"
    elif score >= 60:
        band = "medium"
    else:
        band = "high"

    return {
        "dti": round(dti, 4),
        "risk_score": score,
        "risk_band": band,
    }
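
Invoking the tool directly is a convenient unit test for the scoring logic, before any model is in the loop; the numbers are illustrative:

print(calculate_risk_score.invoke({
    "monthly_income": 8000.0,
    "monthly_debt": 2200.0,
    "credit_score": 720,
    "employment_years": 5.0,
}))
# {'dti': 0.275, 'risk_score': 90, 'risk_band': 'low'}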

3) Wire the agent with LangChain and force structured output

This pattern uses ChatOpenAI and with_structured_output so the response is machine-readable, and it calls the risk tool deterministically in Python rather than leaving the model to decide when to invoke it. That matters when you need to push decisions into a loan origination system.

import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,
    api_key=os.environ["OPENAI_API_KEY"],
)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a banking loan decision assistant. "
     "Use policy results and tool outputs only. "
     "Return one of approve, reject, or manual_review."),
    ("human",
     "Application:\n{application}\n\n"
     "Policy result:\n{policy_result}\n\n"
     "Risk result:\n{risk_result}\n")
])

structured_llm = llm.with_structured_output(Decision)

def evaluate_application(app: LoanApplication) -> Decision:
    passed, policy_reason = policy_gate(app)
    if not passed:
        return Decision(
            decision="reject",
            reason_code=policy_reason,
            summary=f"Rejected by policy gate: {policy_reason}"
        )

    risk_result = calculate_risk_score.invoke({
        "monthly_income": app.monthly_income,
        "monthly_debt": app.monthly_debt,
        "credit_score": app.credit_score,
        "employment_years": app.employment_years,
    })

    chain = prompt | structured_llm
    return chain.invoke({
        "application": app.model_dump(),
        "policy_result": {"passed": True, "reason": policy_reason},
        "risk_result": risk_result,
    })

app = LoanApplication(
    applicant_id="A123",
    country="US",
    monthly_income=8000,
    monthly_debt=2200,
    credit_score=720,
    employment_years=5,
    requested_amount=25000,
)

decision = evaluate_application(app)
print(decision.model_dump())

4) Add audit logging around every decision

For banking workloads you need traceability. Log input payloads, rule outcomes, model outputs, and versioned metadata.

import json
from datetime import datetime, timezone

def audit_log(app: LoanApplication, decision: Decision, meta: dict) -> None:
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "applicant_id": app.applicant_id,
        "application": app.model_dump(),
        "decision": decision.model_dump(),
        "meta": meta,
    }
    # Replace this write with your SIEM / immutable store / database.
    # Keep PII controls and retention policies aligned with bank policy.
    print(json.dumps(record))
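
Calling it after every evaluation closes the loop. The metadata keys below are assumptions; align them with whatever your model-governance process actually tracks:

decision = evaluate_application(app)
audit_log(app, decision, meta={
    "model": "gpt-4o-mini",
    "temperature": 0,
    "prompt_version": "loan-decision-v1",   # hypothetical prompt version tag
    "policy_gate_version": "2026-04",       # hypothetical rules revision
})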

Production Considerations

  • Separate policy from inference

    • Compliance rules belong in code or rules engines.
    • Use the LLM for explanation and triage only.
  • Log everything needed for audit

    • Store prompt inputs, tool invocations and their arguments, model name, temperature, and timestamps.
    • Keep records immutable where possible.
  • Control data residency

    • Route EU/UK customer data to approved regions only.
    • If your bank has strict residency requirements, avoid sending raw PII to external endpoints without legal approval.
  • Add human-in-the-loop thresholds

    • Auto-approve only low-risk cases with clean policy checks.
    • Send borderline cases to manual review instead of forcing a binary answer; a minimal sketch follows this list.
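
One way to express that triage deterministically is sketched below; the band-to-action mapping is an assumption to tune against your bank's risk appetite:

def triage(policy_passed: bool, risk_band: str) -> str:
    if not policy_passed:
        return "reject"         # deterministic gates have already decided
    if risk_band == "low":
        return "approve"        # only clean, low-risk cases auto-approve
    return "manual_review"      # medium and high risk go to a human reviewer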

Common Pitfalls

  • Letting the LLM decide approvals directly

    • Bad pattern: “Approve this applicant” from free-form text.
    • Fix it by using deterministic gates plus structured output like Decision.
  • Skipping schema validation

    • If you pass raw JSON without Pydantic validation, bad upstream data will leak into decisions.
    • Use LoanApplication and reject invalid records early; see the sketch after this list.
  • Ignoring jurisdiction and residency constraints

    • A loan agent that processes restricted customer data in the wrong region is a compliance problem.
    • Enforce country-based routing before any model call.
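
For the schema-validation pitfall, the fix is a single try/except at the ingestion boundary; parse_application is a hypothetical helper name:

from pydantic import ValidationError

def parse_application(raw: dict) -> LoanApplication | None:
    try:
        return LoanApplication(**raw)
    except ValidationError as exc:
        # Route the raw payload to a dead-letter queue or ops alert instead.
        print(f"rejected invalid application: {exc}")
        return None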

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
