How to Build a Loan Approval Agent for Payments Using LangChain in Python
A loan approval agent for payments decides whether a payment-linked credit request should move forward, get routed for manual review, or be rejected. In practice, it sits between your payment app and your risk systems, turning raw customer and transaction data into a structured decision with an audit trail. That matters because every approval or decline affects fraud exposure, compliance posture, and customer conversion.
Architecture
- Input layer
  - Collects applicant profile, transaction history, repayment signals, and KYC/KYB fields.
  - Normalizes data into a single payload before the model sees it.
- Policy engine
  - Applies hard rules before any LLM call.
  - Examples: sanctions hit, missing consent, jurisdiction restrictions, minimum age, blacklisted BINs.
- LangChain decision chain
  - Uses `ChatOpenAI` with structured output to classify the case.
  - Produces a deterministic schema with `approve`, `reject`, or `manual_review` as the decision.
- Risk enrichment tools
  - Pulls data from internal services: credit bureau, fraud score, ledger balance, chargeback rate.
  - Exposed to LangChain through `@tool` functions.
- Audit logger
  - Stores input snapshot, tool outputs, model decision, and rationale.
  - Required for disputes, model governance, and regulator review.
- Decision router
  - Converts the agent output into downstream actions: approve the payment credit line, request more docs, or escalate.
Implementation
1) Define the decision schema
Use Pydantic so the agent returns a strict structure. For payments workflows, you want the result to be machine-readable and easy to audit.
```python
from typing import Literal

from pydantic import BaseModel, Field


class LoanDecision(BaseModel):
    decision: Literal["approve", "reject", "manual_review"] = Field(
        description="Final underwriting decision"
    )
    confidence: float = Field(ge=0.0, le=1.0)
    reason: str = Field(description="Short explanation for audit logs")
    required_action: str = Field(
        description="Next step for operations or customer workflow"
    )
```
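The same schema also validates payloads coming back out of the system. The snippet below repeats the model so it runs standalone and shows both the happy path and a rejected payload; the example values are illustrative.

```python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError


class LoanDecision(BaseModel):
    # Same schema as in step 1, repeated so this snippet is self-contained.
    decision: Literal["approve", "reject", "manual_review"] = Field(
        description="Final underwriting decision"
    )
    confidence: float = Field(ge=0.0, le=1.0)
    reason: str = Field(description="Short explanation for audit logs")
    required_action: str = Field(
        description="Next step for operations or customer workflow"
    )


# A well-formed decision validates cleanly...
ok = LoanDecision.model_validate({
    "decision": "manual_review",
    "confidence": 0.62,
    "reason": "KYC pending",
    "required_action": "request_documents",
})

# ...while an unknown label or out-of-range confidence is rejected.
try:
    LoanDecision.model_validate({"decision": "maybe", "confidence": 2.0,
                                 "reason": "", "required_action": ""})
    invalid_accepted = True
except ValidationError:
    invalid_accepted = False
```

Because `Literal` and the `ge`/`le` bounds are enforced at parse time, downstream systems never see a malformed decision.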
2) Build tools for risk enrichment
LangChain tools let the agent fetch internal signals without stuffing everything into the prompt. Keep these calls narrow and deterministic.
```python
from langchain_core.tools import tool


@tool
def get_fraud_score(customer_id: str) -> int:
    """Return an internal fraud score from 0 to 100."""
    scores = {"cust_001": 12, "cust_002": 81}
    return scores.get(customer_id, 50)


@tool
def get_chargeback_rate(customer_id: str) -> float:
    """Return the chargeback rate over the last 90 days."""
    rates = {"cust_001": 0.01, "cust_002": 0.18}
    return rates.get(customer_id, 0.05)


@tool
def get_kYC_status(customer_id: str) -> str:
    """Return the KYC status."""
    statuses = {"cust_001": "verified", "cust_002": "pending"}
    return statuses.get(customer_id, "unknown")
```
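Real risk services time out and error, so each lookup should degrade to a conservative default rather than block the pipeline. The wrapper below is a plain-Python sketch of that pattern; `safe_signal` and the fallback values are my own names, not part of LangChain.

```python
from typing import Callable, TypeVar

T = TypeVar("T")


def safe_signal(fetch: Callable[[str], T], customer_id: str, default: T) -> T:
    """Call a risk lookup and fall back to a conservative default on failure."""
    try:
        return fetch(customer_id)
    except Exception:
        # In production you would also log the failure for the audit trail.
        return default


def flaky_fraud_lookup(customer_id: str) -> int:
    # Stand-in for an internal service that is currently unreachable.
    raise TimeoutError("fraud service unavailable")


# A failed lookup degrades to a mid-range score that forces closer review
# instead of silently approving or crashing the request.
score = safe_signal(flaky_fraud_lookup, "cust_001", default=50)
```

Picking a mid-range default (rather than 0) is deliberate: an outage should push cases toward review, never toward easier approval.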
3) Wire up the LangChain agent
For this use case I prefer a structured chain over a free-form agent loop. The pattern below uses `ChatOpenAI`, `with_structured_output`, and tool-enriched context to produce a clean underwriting decision.
```python
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,
    api_key=os.environ["OPENAI_API_KEY"],
)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a loan approval assistant for payments. "
     "Use only the provided data. "
     "Apply strict compliance rules: if KYC is not verified or fraud score is high, "
     "prefer manual_review or reject. Return a concise underwriting decision."),
    ("human",
     "Customer ID: {customer_id}\n"
     "Requested amount: {amount}\n"
     "Fraud score: {fraud_score}\n"
     "Chargeback rate: {chargeback_rate}\n"
     "KYC status: {kyc_status}\n"
     "Jurisdiction: {jurisdiction}\n"
     "Payment method risk: {payment_method_risk}"),
])

structured_llm = llm.with_structured_output(LoanDecision)
chain = prompt | structured_llm


def decide_loan(customer_id: str, amount: float, jurisdiction: str,
                payment_method_risk: str) -> LoanDecision:
    fraud_score = get_fraud_score.invoke({"customer_id": customer_id})
    chargeback_rate = get_chargeback_rate.invoke({"customer_id": customer_id})
    kyc_status = get_kYC_status.invoke({"customer_id": customer_id})
    return chain.invoke({
        "customer_id": customer_id,
        "amount": amount,
        "fraud_score": fraud_score,
        "chargeback_rate": chargeback_rate,
        "kyc_status": kyc_status,
        "jurisdiction": jurisdiction,
        "payment_method_risk": payment_method_risk,
    })
```
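If the model endpoint is down, the service still needs a safe answer. One option is a deterministic fallback that mirrors the system prompt's guidance; the thresholds below are illustrative assumptions I'm adding, not values from the prompt.

```python
def fallback_decision(fraud_score: int, chargeback_rate: float,
                      kyc_status: str) -> str:
    """Rule-only decision for when the LLM call fails.

    Mirrors the prompt's intent: unverified KYC or high fraud risk
    never auto-approves. Thresholds are illustrative.
    """
    if kyc_status != "verified":
        return "manual_review"
    if fraud_score >= 75 or chargeback_rate >= 0.10:
        return "reject"
    if fraud_score >= 40 or chargeback_rate >= 0.05:
        return "manual_review"
    return "approve"
```

Wrapping `chain.invoke` in a try/except that falls through to `fallback_decision` keeps the payment flow available during model outages, at the cost of a more conservative decision.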
4) Add policy gates before execution
Don’t let the LLM override hard compliance rules. In payments this means sanctions screening, consent checks, residency constraints, and product eligibility should happen before model inference.
```python
def policy_gate(payload: dict) -> None:
    if payload["kyc_status"] != "verified":
        raise ValueError("KYC not verified")
    if payload["jurisdiction"] in {"IR", "KP", "SY"}:
        raise ValueError("Restricted jurisdiction")
    if payload["fraud_score"] >= 75:
        raise ValueError("Fraud score too high")


payload = {
    "customer_id": "cust_001",
    "amount": 5000,
    "jurisdiction": "US",
    "payment_method_risk": "medium",
}
payload["fraud_score"] = get_fraud_score.invoke({"customer_id": payload["customer_id"]})
payload["chargeback_rate"] = get_chargeback_rate.invoke({"customer_id": payload["customer_id"]})
payload["kyc_status"] = get_kYC_status.invoke({"customer_id": payload["customer_id"]})

policy_gate(payload)
decision = chain.invoke(payload)
print(decision.model_dump())
```
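In production you rarely want a gate failure to surface as an unhandled exception; a thin wrapper can translate it into a routable outcome without ever calling the model. The sketch below repeats the gate rules so it runs standalone; `gated_decision` is a name I'm introducing for illustration.

```python
def policy_gate(payload: dict) -> None:
    # Same hard rules as the gate above, repeated for a self-contained snippet.
    if payload["kyc_status"] != "verified":
        raise ValueError("KYC not verified")
    if payload["jurisdiction"] in {"IR", "KP", "SY"}:
        raise ValueError("Restricted jurisdiction")
    if payload["fraud_score"] >= 75:
        raise ValueError("Fraud score too high")


def gated_decision(payload: dict, decide) -> dict:
    """Run hard compliance checks first; only call the model if they pass."""
    try:
        policy_gate(payload)
    except ValueError as exc:
        # Hard failures never reach the LLM; they route straight to ops.
        return {"decision": "manual_review", "reason": str(exc)}
    return decide(payload)


blocked = gated_decision(
    {"kyc_status": "pending", "jurisdiction": "US", "fraud_score": 12},
    decide=lambda p: {"decision": "approve", "reason": "clean signals"},
)
```

Note that the model callable is injected, so the gate logic can be unit-tested without any LangChain or network dependency.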
Production Considerations
- Deploy behind an internal API boundary
  - Keep the agent off public internet paths.
  - Put it behind authN/authZ and require signed requests from your payment orchestration service.
- Log every decision with immutable audit records
  - Store input features, tool responses, model version, prompt version, and final decision.
  - This is non-negotiable for dispute handling and regulator questions.
- Enforce data residency
  - If applicant data must stay in-region, host your model endpoint and vector stores accordingly.
  - Avoid sending raw PII across borders; redact before inference where possible.
- Add monitoring for drift and false approvals
  - Track approval rate by cohort, manual review rate, chargebacks after approval, and KYC failure rates.
  - Alert when distributions shift or when one jurisdiction starts producing abnormal outcomes.
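The immutable-audit point can be made concrete with hash-chained records: each entry commits to the previous one, so any after-the-fact edit is detectable. A minimal sketch under those assumptions (field names are illustrative, and a real system would persist to append-only storage rather than a list):

```python
import hashlib
import json


def append_audit_record(log: list, record: dict) -> list:
    """Append a decision record chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = json.dumps(record, sort_keys=True)
    log.append({
        "record": record,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
    })
    return log


def verify_chain(log: list) -> bool:
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "genesis"
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True


log = []
append_audit_record(log, {"customer_id": "cust_001", "decision": "approve",
                          "model_version": "gpt-4o-mini", "prompt_version": "v3"})
append_audit_record(log, {"customer_id": "cust_002", "decision": "reject",
                          "model_version": "gpt-4o-mini", "prompt_version": "v3"})
```

Storing model version and prompt version alongside the decision is what lets you answer "which configuration approved this loan?" months later.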
Common Pitfalls
- Letting the LLM make policy decisions
  - Mistake: asking the model to decide on sanctions or residency rules.
  - Fix: handle those with deterministic pre-checks before any LangChain call.
- Passing raw PII into prompts
  - Mistake: including full account numbers or identity documents in prompt text.
  - Fix: tokenize or redact sensitive fields; pass only what the underwriting logic needs.
- Using free-form text outputs
  - Mistake: parsing natural language decisions from the model response.
  - Fix: use `with_structured_output()` and a strict Pydantic schema so downstream systems can trust the shape.
- Skipping human review on edge cases
  - Mistake: auto-rejecting borderline applications that need context.
  - Fix: route low-confidence cases to `manual_review` and preserve reviewer notes in the audit log.
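For the PII pitfall, a redaction pass before prompt construction keeps raw identifiers out of model inputs. The patterns below are a crude illustrative sketch; real redaction should be driven by your data classification, not two regexes.

```python
import re


def redact_pii(text: str) -> str:
    """Mask long digit runs (account/card numbers), keeping the last 4 digits,
    and drop email addresses entirely."""
    def mask(match: re.Match) -> str:
        digits = match.group(0)
        return "*" * (len(digits) - 4) + digits[-4:]

    # 12+ consecutive digits is a crude proxy for account/card numbers.
    text = re.sub(r"\d{12,}", mask, text)
    # Emails are replaced wholesale rather than partially masked.
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[email_redacted]", text)


safe = redact_pii("Card 4111111111111111, contact jane.doe@example.com")
```

Run this on every free-text field before it reaches the prompt template, and log only the redacted form in the audit trail.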
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.