How to Build a Claims Processing Agent for Lending Using LangGraph in Python
A claims processing agent for lending takes an incoming borrower claim, classifies it, checks required evidence, verifies policy and loan context, and routes the case to either straight-through processing or human review. It matters because lending claims are high-friction, regulated workflows: bad routing creates SLA breaches, compliance risk, and expensive manual handling.
Architecture
- **Ingress layer**
  - Accepts claim payloads from a portal, CRM, or case management system.
  - Normalizes fields like `loan_id`, `claim_type`, `jurisdiction`, and `documents`.
- **Policy retrieval component**
  - Pulls lending policy rules, product terms, and jurisdiction-specific requirements.
  - Keeps the agent grounded in approved source material.
- **Decision node**
  - Classifies the claim and decides whether the file is complete enough for automated handling.
  - Flags cases that need human review: missing docs, suspicious patterns, or restricted jurisdictions.
- **Verification tools**
  - Check loan status, borrower identity, payment history, collateral records, and document presence.
  - These should be deterministic functions, not LLM guesses.
- **Audit logger**
  - Writes every state transition, tool call, and decision reason.
  - Needed for compliance reviews and dispute resolution.
- **Human escalation path**
  - Routes exceptions to an underwriter or claims analyst.
  - Preserves context so the reviewer sees why the agent escalated.
Implementation
1. Define the graph state and tools
Use a typed state object so every node knows what data it can read and write. For lending workflows, keep the state explicit: claim metadata, verification results, decision outcome, and audit trail.
```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class ClaimState(TypedDict):
    claim_id: str
    loan_id: str
    jurisdiction: str
    claim_type: str
    documents: list[str]
    verified_docs: list[str]
    risk_flag: bool
    decision: str
    audit_log: list[str]


def append_audit(state: ClaimState) -> ClaimState:
    # Optional helper: record a generic state transition in the audit trail.
    state["audit_log"].append("state_updated")
    return state


def verify_documents(state: ClaimState) -> ClaimState:
    # Deterministic document-presence check: no LLM involved.
    required = {"id_proof", "claim_form"}
    provided = set(state["documents"])
    missing = required - provided
    state["verified_docs"] = list(provided & required)
    state["risk_flag"] = len(missing) > 0
    state["audit_log"].append(f"docs_checked_missing={list(missing)}")
    return state
```
2. Add a decision node with lending-specific routing
The core pattern is simple: verify first, then decide. In lending, any uncertainty around identity, jurisdictional restrictions, or incomplete evidence should bias toward escalation.
```python
def decide(state: ClaimState) -> ClaimState:
    # Restricted jurisdictions always go to a human.
    if state["jurisdiction"] in {"restricted_state_1", "restricted_state_2"}:
        state["decision"] = "human_review"
        state["audit_log"].append("jurisdiction_blocked")
        return state
    # Missing evidence biases toward escalation.
    if state["risk_flag"]:
        state["decision"] = "human_review"
        state["audit_log"].append("incomplete_claim")
        return state
    # Policy-sensitive claim types need a manual policy check.
    if state["claim_type"] in {"payment_dispute", "forgiveness_request"}:
        state["decision"] = "manual_policy_check"
        state["audit_log"].append("policy_sensitive_claim")
        return state
    state["decision"] = "auto_process"
    state["audit_log"].append("eligible_for_straight_through")
    return state
```
3. Build the LangGraph workflow
This is the core LangGraph pattern: `StateGraph`, `add_edge`, `add_conditional_edges`, `compile`, and `invoke`. The graph stays small on purpose; production systems usually wrap it with tool nodes and retrieval nodes.
```python
def route(state: ClaimState) -> str:
    return state["decision"]


builder = StateGraph(ClaimState)
builder.add_node("verify_documents", verify_documents)
builder.add_node("decide", decide)

builder.add_edge(START, "verify_documents")
builder.add_edge("verify_documents", "decide")
builder.add_conditional_edges(
    "decide",
    route,
    {
        "auto_process": END,
        "manual_policy_check": END,
        "human_review": END,
    },
)

graph = builder.compile()

result = graph.invoke({
    "claim_id": "CLM-10021",
    "loan_id": "LN-88441",
    "jurisdiction": "CA",
    "claim_type": "payment_dispute",
    "documents": ["id_proof", "claim_form"],
    "verified_docs": [],
    "risk_flag": False,
    "decision": "",
    "audit_log": [],
})

print(result["decision"])   # manual_policy_check
print(result["audit_log"])  # ['docs_checked_missing=[]', 'policy_sensitive_claim']
```
4. Add tool-backed verification for real systems
In production, document presence is not enough. Replace static checks with deterministic integrations to your LOS/LMS, KYC provider, document store, and policy service. Keep those calls behind functions so you can test them independently and log every response.
A practical pattern is:
- fetch loan status from the LMS
- fetch policy constraints by product + jurisdiction
- validate documents against a schema
- write a signed audit event before returning control to the graph
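The pattern above can be sketched as plain, testable functions. Everything here is illustrative: the client functions, field names, and the HMAC-style signing are stand-ins for your real LMS client, policy service, and key management, not a specific vendor API.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class LoanRecord:
    loan_id: str
    status: str  # e.g. "active", "closed", "default"


def fetch_loan_status(loan_id: str) -> LoanRecord:
    # Hypothetical LMS call; replace with your system-of-record client.
    return LoanRecord(loan_id=loan_id, status="active")


def fetch_policy_constraints(product: str, jurisdiction: str) -> dict:
    # Hypothetical policy-service lookup keyed by product + jurisdiction.
    return {"max_claim_age_days": 90, "requires_wet_signature": jurisdiction == "NY"}


def validate_documents(documents: list[str], required: set[str]) -> list[str]:
    # Schema-style presence check; returns the missing document types.
    return sorted(required - set(documents))


def signed_audit_event(payload: dict, secret: str = "rotate-me") -> dict:
    # Tamper-evident audit event: hash over the canonical payload plus a secret.
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((secret + body).encode()).hexdigest()
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
        "signature": digest,
    }
```

Because each integration sits behind its own function, you can unit-test the verification logic with stubbed responses and swap in real clients without touching the graph.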
Production Considerations
- **Compliance first**
  - Store the policy version used for each decision.
  - Keep immutable audit logs with timestamps, actor IDs, model version, and tool outputs.
  - Make sure adverse decisions are explainable to operations and compliance teams.
- **Data residency**
  - Claims data often includes PII and financial records.
  - Pin execution to approved regions and avoid sending raw documents to external services outside your residency boundary.
  - Redact sensitive fields before passing context into LLM prompts.
- **Monitoring**
  - Track escalation rate, auto-process rate, average handling time, and override rate by jurisdiction.
  - Alert on spikes in `human_review` for specific products; that usually means policy drift or an upstream data issue.
  - Log graph transitions so you can reconstruct why a claim moved through each node.
- **Guardrails**
  - Never let the model invent eligibility rules.
  - Use retrieval for policy text and deterministic code for eligibility checks.
  - Reject claims when required fields are missing instead of asking the model to infer them.
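The redaction point under data residency is worth making concrete. A minimal sketch, assuming you maintain your own list of sensitive field names (the ones below are illustrative):

```python
# Field names your compliance team flags as sensitive; illustrative only.
SENSITIVE_KEYS = {"ssn", "account_number", "date_of_birth"}


def redact_claim_context(claim: dict) -> dict:
    """Mask sensitive fields before the claim context reaches an LLM prompt."""
    redacted = {}
    for key, value in claim.items():
        if key in SENSITIVE_KEYS:
            redacted[key] = "[REDACTED]"
        else:
            redacted[key] = value
    return redacted
```

Running the redaction at the boundary where state is serialized into a prompt keeps the rule in one place, so adding a field to `SENSITIVE_KEYS` covers every node at once.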
Common Pitfalls
- **Using the LLM as the source of truth**
  - A bad move in lending: eligibility must come from policy docs plus system-of-record checks.
  - Fix it by keeping classification separate from verification.
- **Skipping jurisdiction logic**
  - A claim that is valid in one region may be restricted in another.
  - Fix it by making jurisdiction an explicit field in graph state and branching early on it.
- **Weak auditability**
  - If you cannot explain why a claim was auto-approved or escalated, you cannot defend the workflow during review.
  - Fix it by writing structured audit events at every node transition.
- **Overloading one graph with everything**
  - Claims intake, underwriting exceptions, fraud checks, and payment adjustments should not all live in one monolith.
  - Fix it by splitting into smaller graphs that hand off through well-defined states.
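The last fix, handing off between smaller graphs through well-defined states, can be sketched framework-free: each stage ends by emitting an explicit handoff record rather than calling the next stage directly, and a thin dispatcher routes it. The stage names and fields below are illustrative, not part of the LangGraph API.

```python
from typing import Callable, TypedDict


class Handoff(TypedDict):
    next_stage: str   # which graph should pick this claim up
    claim_id: str
    context: dict     # only the fields the next stage needs


def intake_stage(claim: dict) -> Handoff:
    # The intake graph ends by emitting a handoff, not by invoking
    # the downstream graph itself.
    complete = {"id_proof", "claim_form"} <= set(claim["documents"])
    return Handoff(
        next_stage="claims_decision" if complete else "evidence_chase",
        claim_id=claim["claim_id"],
        context={"documents": claim["documents"]},
    )


def dispatch(handoff: Handoff, registry: dict[str, Callable[[Handoff], str]]) -> str:
    # A thin router maps handoff targets to independently deployed graphs.
    return registry[handoff["next_stage"]](handoff)
```

Keeping the handoff schema explicit means each graph can be versioned, tested, and scaled on its own, and the dispatcher is the single place where routing between them is audited.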
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.