How to Build an Underwriting Agent Using LangChain in Python for Payments
An underwriting agent for payments takes in transaction, merchant, and risk signals, then returns a decision support package: approve, review, decline, or request more data. In payments, that matters because bad underwriting creates fraud losses, chargebacks, compliance exposure, and operational drag; good underwriting reduces manual review without letting risk leak into production.
Architecture
- **Input normalizer**: converts raw merchant application data, transaction history, MCC, geography, and KYC/KYB fields into a consistent schema.
- **Risk retrieval layer**: pulls policy snippets, risk rules, sanctions guidance, and prior underwriting notes from a controlled knowledge base.
- **LangChain reasoning chain**: uses an LLM with structured output to classify risk and explain the decision in a deterministic format.
- **Policy engine**: applies hard constraints outside the model (prohibited geographies, high-risk MCCs, missing KYB fields, velocity thresholds).
- **Audit logger**: stores inputs, model version, prompt hash, retrieved documents, and final decision for compliance review.
- **Decision orchestrator**: combines model output and policy results into a final underwriting action.
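Before wiring anything up, it helps to pin down what the input normalizer produces. Here is a minimal sketch; the field names (`legal_name`, `kyb_verified`, and so on) are illustrative assumptions, not a standard payments schema:

```python
from pydantic import BaseModel

class MerchantProfile(BaseModel):
    merchant_id: str
    legal_name: str
    country: str             # ISO 3166-1 alpha-2
    mcc: str                 # 4-digit merchant category code
    kyb_verified: bool
    monthly_volume_usd: float
    chargeback_rate: float   # fraction, e.g. 0.018 = 1.8%

def normalize_application(raw: dict) -> MerchantProfile:
    """Map a raw application payload onto the consistent schema.
    The raw keys here are hypothetical; adapt to your intake format."""
    return MerchantProfile(
        merchant_id=raw["id"],
        legal_name=raw.get("legal_name", raw.get("name", "")).strip(),
        country=raw.get("country", "").upper(),
        mcc=str(raw.get("mcc", "")).zfill(4),
        kyb_verified=bool(raw.get("kyb", {}).get("verified", False)),
        monthly_volume_usd=float(raw.get("monthly_volume_usd", 0)),
        chargeback_rate=float(raw.get("chargeback_rate", 0)),
    )
```

Everything downstream, the retrieval layer, the chain, and the policy engine, can then assume one stable shape instead of re-parsing raw application payloads.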
Implementation
1) Define the underwriting schema
Use Pydantic so the agent returns structured output instead of free-form text. For payments workflows, this is non-negotiable because downstream systems need machine-readable decisions.
```python
from typing import Literal

from pydantic import BaseModel, Field

class UnderwritingDecision(BaseModel):
    decision: Literal["approve", "manual_review", "decline"] = Field(...)
    risk_score: int = Field(ge=0, le=100)
    rationale: str
    flags: list[str]
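```

A quick check shows why the schema earns its keep (values here are illustrative): valid output parses cleanly, while anything outside the schema fails validation instead of leaking into downstream systems.

```python
# A valid instance parses cleanly.
ok = UnderwritingDecision(
    decision="approve",
    risk_score=12,
    rationale="Verified KYB; chargeback rate well under threshold.",
    flags=[],
)
print(ok.decision)  # -> approve

# Out-of-range or unknown values raise instead of passing through.
try:
    UnderwritingDecision(decision="maybe", risk_score=150, rationale="", flags=[])
except Exception as exc:  # pydantic.ValidationError
    print(type(exc).__name__)  # -> ValidationError
```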
2) Build the LangChain chain with structured output
This pattern uses ChatPromptTemplate, ChatOpenAI, and .with_structured_output() to keep the response bounded. The model should explain its reasoning in terms your ops team can audit later.
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a payments underwriting assistant. "
     "Use only the provided merchant data and policy context. "
     "Return a structured decision for approve, manual_review, or decline."),
    ("human",
     "Merchant data:\n{merchant_data}\n\n"
     "Policy context:\n{policy_context}\n\n"
     "Underwrite this merchant."),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
underwriter = llm.with_structured_output(UnderwritingDecision)
chain = prompt | underwriter
```
3) Add policy retrieval and hard rules
For production payments systems, don’t let the LLM be the source of truth on restricted categories. Use retrieval for policy context plus explicit rules for disallowed cases.
```python
def hard_policy_check(merchant_data: dict) -> tuple[bool, list[str]]:
    """Deterministic checks that run before any LLM call."""
    flags = []
    if merchant_data.get("country") in {"IR", "KP", "SY"}:
        flags.append("restricted_country")
    if merchant_data.get("mcc") in {"7995", "4829"}:
        flags.append("high_risk_mcc")
    if not merchant_data.get("kyb_verified"):
        flags.append("missing_kyb")
    return (len(flags) == 0), flags

merchant_data = {
    "name": "Example Merchant LLC",
    "country": "US",
    "mcc": "5812",
    "kyb_verified": True,
    "monthly_volume_usd": 85000,
    "chargeback_rate": 0.018,
}

policy_context = """
- Approve low-risk merchants with verified KYB and chargeback rate below 2%.
- Manual review if monthly volume exceeds $100k or if supporting docs are incomplete.
- Decline restricted countries and prohibited MCCs.
"""

ok_to_proceed, flags = hard_policy_check(merchant_data)

if not ok_to_proceed:
    # Hard violations never reach the model.
    result = UnderwritingDecision(
        decision="decline" if "restricted_country" in flags else "manual_review",
        risk_score=100,
        rationale="Hard policy violation detected before LLM evaluation.",
        flags=flags,
    )
else:
    result = chain.invoke({
        "merchant_data": merchant_data,
        "policy_context": policy_context,
    })

print(result.model_dump())
```
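The policy_context above is hard-coded for clarity. In a real system the risk retrieval layer would pull it from your controlled knowledge base. Here is a minimal sketch using LangChain's InMemoryVectorStore with OpenAI embeddings; the indexed snippets are placeholders for your actual policy documents:

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

# Index policy snippets from your knowledge base (placeholder texts here).
policy_store = InMemoryVectorStore.from_texts(
    [
        "Approve low-risk merchants with verified KYB and chargeback rate below 2%.",
        "Manual review if monthly volume exceeds $100k or supporting docs are incomplete.",
        "Decline restricted countries and prohibited MCCs.",
    ],
    embedding=OpenAIEmbeddings(),
)

def retrieve_policy_context(merchant_data: dict, k: int = 3) -> str:
    """Fetch the k most relevant policy snippets for this merchant."""
    query = f"MCC {merchant_data['mcc']} merchant in {merchant_data['country']}"
    docs = policy_store.similarity_search(query, k=k)
    return "\n".join(f"- {d.page_content}" for d in docs)
```

In production you would swap the in-memory store for a persistent vector database, but the shape of the call stays the same: retrieve, format, and pass the result in as policy_context.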
4) Wrap it with audit logging
Payments teams need traceability across decisions. Log enough to reconstruct why a merchant was approved or blocked without storing unnecessary sensitive data.
```python
import json
from datetime import datetime, timezone

def audit_event(merchant_id: str, decision: UnderwritingDecision, model_name: str):
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "merchant_id": merchant_id,
        "model_name": model_name,
        "decision": decision.decision,
        "risk_score": decision.risk_score,
        "flags": decision.flags,
        "rationale": decision.rationale,
    }
    # Replace print with your append-only audit sink in production.
    print(json.dumps(event))

audit_event("m_12345", result, "gpt-4o-mini")
```
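The architecture also calls for input hashes and a prompt hash so reviewers can match a decision to exactly what the model saw. A sketch of that fingerprinting follows; hashing a canonical JSON encoding is one reasonable approach, not the only one:

```python
import hashlib
import json

def fingerprint(payload: dict) -> str:
    """Stable SHA-256 over a canonical JSON encoding of the input."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Store alongside the audit event so a reviewer can tie a decision to its
# exact inputs without persisting raw merchant data.
print(fingerprint(merchant_data))
```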
Production Considerations
- **Keep hard controls outside the model**: sanctions checks, restricted MCCs, country blocks, and velocity limits should run before any LLM call.
- **Log for auditability**: persist prompt version, model version, retrieved policy docs, input hashes, and final actions. This is what compliance teams will ask for during reviews.
- **Respect data residency**: merchant PII and bank account details may need regional processing or redaction before sending to an external model endpoint.
- **Add human-in-the-loop thresholds**: route borderline cases to manual review when confidence is low or when volume/risk crosses defined thresholds, as in the sketch after this list.
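A sketch of that routing, reusing the result and merchant_data from step 3; the threshold values are illustrative, not policy:

```python
# Illustrative thresholds -- tune against your own loss and review-capacity data.
REVIEW_SCORE = 60         # risk_score at or above this goes to a human
REVIEW_VOLUME = 100_000   # monthly USD volume above this goes to a human

def route(decision: UnderwritingDecision, merchant_data: dict) -> str:
    """Escalate borderline approvals to manual review; never soften a decline."""
    if decision.decision == "decline":
        return "decline"
    if decision.risk_score >= REVIEW_SCORE:
        return "manual_review"
    if merchant_data.get("monthly_volume_usd", 0) > REVIEW_VOLUME:
        return "manual_review"
    return decision.decision

print(route(result, merchant_data))
```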
Common Pitfalls
- **Letting the LLM make final compliance calls**: fix it by using deterministic policy checks first. The model should recommend; your rules engine should decide on prohibited cases.
- **Returning unstructured text**: fix it by using with_structured_output() and Pydantic models so downstream services can consume stable fields.
- **Ignoring audit requirements**: fix it by storing every decision with timestamps, input fingerprints, retrieved policy context, and model identifiers.
- **Sending too much sensitive data to the model**: fix it by redacting PANs, account numbers, tax IDs, and other regulated fields before inference, as in the sketch after this list.
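A minimal redaction pass might look like the following; the regexes are illustrative, and a real deployment would use a vetted DLP or tokenization service rather than hand-rolled patterns:

```python
import re

PAN_RE = re.compile(r"\b\d{13,19}\b")        # candidate card numbers
TAX_ID_RE = re.compile(r"\b\d{2}-\d{7}\b")   # US EIN-style tax IDs

def redact(text: str) -> str:
    """Strip obvious regulated fields before any external model call."""
    text = PAN_RE.sub("[PAN_REDACTED]", text)
    text = TAX_ID_RE.sub("[TAX_ID_REDACTED]", text)
    return text

print(redact("Card 4242424242424242, EIN 12-3456789"))
# -> Card [PAN_REDACTED], EIN [TAX_ID_REDACTED]
```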
A good payments underwriting agent is not just an LLM wrapped around a prompt. It is a controlled decision system where LangChain handles orchestration and language reasoning while your code enforces compliance boundaries that actually matter in production.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- The AI Agent Starter Kit (free): PDF checklist and starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.