How to Build a Loan Approval Agent Using LangChain in Python for Insurance
A loan approval agent for insurance evaluates incoming loan applications against policy rules, underwriting constraints, and risk signals, then returns a decision with a traceable explanation. In insurance, this matters because lending decisions often touch regulated data, internal risk appetite, and audit requirements, so the agent has to be deterministic enough for compliance and flexible enough to handle messy real-world inputs.
Architecture
- **Input normalization layer**
  - Converts raw application payloads into a structured schema.
  - Validates fields like income, debt ratio, policy status, and jurisdiction.
- **Rules engine**
  - Applies hard constraints first.
  - Example: active policy required, minimum tenure met, no sanctions flags.
- **LLM reasoning layer**
  - Handles ambiguous cases and produces a human-readable rationale.
  - Should never override hard compliance rules.
- **Decision formatter**
  - Returns `approve`, `reject`, or `review`.
  - Includes reasons, confidence, and audit metadata.
- **Audit logger**
  - Stores inputs, outputs, model version, prompts, and rule hits.
  - Needed for regulatory review and internal dispute handling.
- **Data access boundary**
  - Keeps PII and residency-sensitive data inside approved systems.
  - Prevents accidental leakage into external tools or logs.
Implementation
1) Define the application schema and rule checks
Start with a strict schema. Insurance workflows fail when you let free-form JSON drift into the decision layer.
```python
from typing import Literal

from pydantic import BaseModel, Field


class LoanApplication(BaseModel):
    applicant_id: str
    country: str
    annual_income: float = Field(gt=0)
    existing_debt: float = Field(ge=0)
    policy_active: bool
    policy_tenure_months: int = Field(ge=0)
    claims_last_12m: int = Field(ge=0)


class Decision(BaseModel):
    decision: Literal["approve", "reject", "review"]
    reasons: list[str]
    risk_score: float = Field(ge=0, le=1)


def hard_rules(app: LoanApplication) -> list[str]:
    """Deterministic compliance checks; any finding is a hard rejection."""
    reasons = []
    if not app.policy_active:
        reasons.append("Policy is not active.")
    if app.policy_tenure_months < 6:
        reasons.append("Policy tenure below minimum threshold.")
    if app.claims_last_12m > 3:
        reasons.append("Too many claims in the last 12 months.")
    return reasons
```
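Soft signals can feed the LLM layer without becoming hard gates. A minimal sketch of one such signal, debt-to-income ratio; the helper and the 0.45 threshold are illustrative assumptions, not part of the rule set above:

```python
def debt_to_income(annual_income: float, existing_debt: float) -> float:
    """Illustrative soft signal: share of annual income already owed as debt."""
    return existing_debt / annual_income


def soft_signals(annual_income: float, existing_debt: float) -> list[str]:
    """Advisory findings the LLM can weigh; these never trigger auto-reject."""
    signals = []
    # Hypothetical threshold; tune to your underwriting appetite.
    if debt_to_income(annual_income, existing_debt) > 0.45:
        signals.append("High debt-to-income ratio.")
    return signals


print(soft_signals(420000, 90000))   # ratio ~0.21, below threshold: []
print(soft_signals(100000, 60000))   # ratio 0.6: flags high DTI
```

Unlike `hard_rules`, findings from a helper like this would go into the prompt as context rather than short-circuiting the decision.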
2) Build the LangChain prompt and structured output chain
Use `ChatPromptTemplate` plus `with_structured_output()` so the model returns a typed object instead of untrusted free text.
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# temperature=0 keeps the underwriting output as deterministic as possible
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a loan underwriting assistant for an insurance company. "
     "Follow compliance rules strictly. If hard rules fail, recommend reject. "
     "If information is incomplete or borderline, recommend review."),
    ("human",
     "Application:\n{application}\n\nHard rule findings:\n{rule_findings}\n\n"
     "Return a decision with concise reasons and a risk score from 0 to 1."),
])

# with_structured_output() makes the model return a validated Decision object
structured_llm = llm.with_structured_output(Decision)
chain = prompt | structured_llm
```
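The `{rule_findings}` slot reads better as plain text than as a raw Python list. A small helper along these lines (my own addition, not part of the chain above) keeps the prompt legible:

```python
def format_findings(findings: list[str]) -> str:
    """Render hard-rule findings as a bulleted block for the prompt, or 'none'."""
    if not findings:
        return "none"
    return "\n".join(f"- {finding}" for finding in findings)


print(format_findings([]))                          # none
print(format_findings(["Policy is not active."]))   # - Policy is not active.
```

You would then pass `format_findings(rule_findings)` into `chain.invoke(...)` instead of the list itself.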
3) Orchestrate deterministic rules before LLM reasoning
The pattern here is simple: rules first, LLM second. The model should explain or classify only after the policy gate has already done its job.
```python
def evaluate_application(app_data: dict) -> Decision:
    app = LoanApplication(**app_data)  # raises ValidationError on malformed input
    rule_findings = hard_rules(app)

    # Hard compliance failures are rejected before the model is ever called.
    if rule_findings:
        return Decision(
            decision="reject",
            reasons=rule_findings,
            risk_score=1.0,
        )

    return chain.invoke({
        "application": app.model_dump(),
        "rule_findings": "none",
    })


sample = {
    "applicant_id": "A123",
    "country": "ZA",
    "annual_income": 420000,
    "existing_debt": 90000,
    "policy_active": True,
    "policy_tenure_months": 18,
    "claims_last_12m": 1,
}

decision = evaluate_application(sample)
print(decision.model_dump())
```
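A common extension of this gate is to demote low-confidence approvals to human review before anything is acted on. The router below is a sketch; the 0.6 threshold is a hypothetical number, not from the workflow above:

```python
def route(decision: str, risk_score: float, review_threshold: float = 0.6) -> str:
    """Demote borderline approvals to human review; never soften a reject."""
    if decision == "approve" and risk_score >= review_threshold:
        return "review"
    return decision


print(route("approve", 0.2))  # low risk: stays approve
print(route("approve", 0.8))  # risky approve: demoted to review
print(route("reject", 0.1))   # rejects are final regardless of score
```

This keeps the asymmetry the article insists on: the model can only be overridden toward caution, never toward approval.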
4) Add audit logging around every decision
For insurance use cases, you need an immutable trail. Log the input hash, model name, prompt version, decision output, and which rules fired.
```python
import hashlib
import json
from datetime import datetime, timezone


def audit_record(app_data: dict, decision: Decision) -> dict:
    # Hash the canonicalized input so the record is verifiable without storing raw PII.
    payload = json.dumps(app_data, sort_keys=True).encode()
    return {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "model": llm.model_name,
        "decision": decision.decision,
        "risk_score": decision.risk_score,
        "reasons": decision.reasons,
        "prompt_version": "v1",
    }


record = audit_record(sample, decision)
print(record)
```
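To make the trail tamper-evident rather than merely append-only, each record can carry the hash of its predecessor. The chaining scheme below is my addition, not part of the snippet above:

```python
import hashlib
import json


def chain_records(records: list[dict]) -> list[dict]:
    """Link each audit record to its predecessor via a SHA-256 hash chain."""
    chained, prev_hash = [], "0" * 64  # fixed genesis value for the first record
    for rec in records:
        entry = dict(rec, prev_hash=prev_hash)
        # Hash the record including its prev_hash link, with stable key order.
        prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["record_hash"] = prev_hash
        chained.append(entry)
    return chained


log = chain_records([{"decision": "approve"}, {"decision": "reject"}])
# Editing the first record would break the second record's prev_hash link.
print(log[1]["prev_hash"] == log[0]["record_hash"])  # True
```

Verification is the reverse walk: recompute each record's hash (minus `record_hash`) and compare it to the next record's `prev_hash`.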
Production Considerations
- **Keep PII out of prompts where possible**
  - Tokenize names, IDs, phone numbers, and policy numbers before sending data to the LLM.
  - Insurance teams will care about data minimization as much as accuracy.
- **Pin data residency by deployment region**
  - If your insurer operates under regional processing rules, keep model calls inside approved cloud regions.
  - Don't route underwriting data through unapproved third-party endpoints.
- **Track every model version and prompt revision**
  - A rejected application must be reproducible later.
  - Store the exact `ChatOpenAI` model name plus prompt template version in your audit log.
- **Add human review for borderline cases**
  - Any case with missing documents, conflicting claims history, or low-confidence signals should go to `review`.
  - Never let the LLM auto-approve exceptions that violate underwriting policy.
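The tokenization in the first point can be as simple as swapping identifiers for opaque placeholders before the prompt is built, keeping the mapping inside your data boundary. A minimal sketch, in which the field list and token format are assumptions:

```python
import secrets

# Assumed PII field names; align these with your actual application schema.
PII_FIELDS = {"applicant_id", "name", "phone", "policy_number"}


def tokenize_pii(app_data: dict) -> tuple[dict, dict]:
    """Replace PII values with opaque tokens; the mapping never leaves your systems."""
    masked, mapping = {}, {}
    for key, value in app_data.items():
        if key in PII_FIELDS:
            token = f"tok_{secrets.token_hex(4)}"
            mapping[token] = value  # stays inside the data access boundary
            masked[key] = token
        else:
            masked[key] = value
    return masked, mapping


masked, mapping = tokenize_pii({"applicant_id": "A123", "annual_income": 420000})
print(masked["applicant_id"].startswith("tok_"))  # True
print(masked["annual_income"])                    # 420000 passes through unmasked
```

Only the masked dict is sent to the model; the mapping lets you re-identify the decision when it comes back.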
Common Pitfalls
- **Letting the LLM make final decisions without hard gates**
  - Fix it by evaluating deterministic compliance rules before calling the model.
  - The model can explain; it should not override policy violations.
- **Using free-form text outputs**
  - Fix it with `with_structured_output()` and Pydantic models.
  - This prevents parsing errors and makes downstream automation reliable.
- **Ignoring auditability**
  - Fix it by logging input hashes, prompt versions, model names, and rule outcomes.
  - In insurance disputes or regulator reviews, "the model said so" is useless without evidence.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.