How to Build a Claims Processing Agent Using LangGraph in Python for Fintech
A claims processing agent in fintech takes an incoming claim, validates the request, gathers the required evidence, checks policy and transaction context, and routes the case to approval, rejection, or human review. It matters because claims are where money moves, risk is realized, and compliance gets audited; if the workflow is sloppy, you get fraud exposure, SLA breaches, and bad regulator conversations.
Architecture
A production claims agent for fintech usually needs these components:
- Ingress validator
  - Normalizes claim payloads from API, queue, or webhook sources.
  - Rejects malformed requests before they enter the graph.
- Policy/context fetcher
  - Pulls account status, transaction history, KYC flags, and product rules.
  - Keep this deterministic; don’t let the model invent policy.
- Decision engine
  - Uses rules plus LLM reasoning for edge cases.
  - Routes straightforward claims automatically and escalates ambiguous ones.
- Evidence extractor
  - Pulls structured facts from receipts, emails, chat logs, or PDFs.
  - Produces audit-friendly outputs with source references.
- Human review branch
  - Sends high-risk or low-confidence claims to an operations queue.
  - Required for compliance-heavy workflows and exception handling.
- Audit logger
  - Stores every state transition, tool call, and final decision.
  - Needed for explainability, dispute resolution, and regulator review.
Implementation
1) Define the state and routing logic
Use a typed state object so every node reads and writes predictable fields. In fintech workflows, that predictability matters more than clever prompts.
```python
from typing import TypedDict, Annotated, Literal

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class ClaimState(TypedDict):
    claim_id: str
    customer_id: str
    amount: float
    currency: str
    documents: list[str]
    risk_score: float
    decision: Literal["approve", "reject", "review"]
    reason: str
    messages: Annotated[list, add_messages]


def route_claim(state: ClaimState) -> str:
    if state["risk_score"] >= 0.8:
        return "review"
    if state["amount"] <= 250 and state["risk_score"] < 0.3:
        return "approve"
    return "reject"
```
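Before wiring the graph, it helps to sanity-check the routing thresholds in isolation. A plain-dict copy of the router (same thresholds as above) makes that trivial to assert against; note that anything not auto-approved or escalated falls through to rejection:

```python
def route_claim(state: dict) -> str:
    # Same thresholds as the graph router, on plain dicts for quick testing.
    if state["risk_score"] >= 0.8:
        return "review"
    if state["amount"] <= 250 and state["risk_score"] < 0.3:
        return "approve"
    return "reject"

assert route_claim({"amount": 180.0, "risk_score": 0.1}) == "approve"
assert route_claim({"amount": 5000.0, "risk_score": 0.95}) == "review"
assert route_claim({"amount": 300.0, "risk_score": 0.1}) == "reject"
```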
2) Add deterministic nodes for policy checks and enrichment
Keep policy lookups outside the model where possible. If your rules change weekly because compliance changes weekly, this should be code or a rules engine first.
```python
def enrich_claim(state: ClaimState) -> ClaimState:
    # Replace with real service calls: KYC status, transaction history, sanctions flags
    has_required_docs = len(state["documents"]) > 0
    base_risk = 0.2 if has_required_docs else 0.9
    return {
        **state,
        "risk_score": base_risk,
        "reason": "Missing supporting documents" if not has_required_docs else "Docs present",
    }


def approve_claim(state: ClaimState) -> ClaimState:
    return {**state, "decision": "approve", "reason": "Auto-approved by policy"}


def reject_claim(state: ClaimState) -> ClaimState:
    return {**state, "decision": "reject", "reason": f"Rejected: {state['reason']}"}


def review_claim(state: ClaimState) -> ClaimState:
    return {**state, "decision": "review", "reason": f"Manual review required: {state['reason']}"}
```
3) Build the LangGraph workflow
This is the actual LangGraph pattern: define nodes on a StateGraph, wire conditional edges with add_conditional_edges, then compile it into an executable app.
```python
workflow = StateGraph(ClaimState)
workflow.add_node("enrich", enrich_claim)
workflow.add_node("approve", approve_claim)
workflow.add_node("reject", reject_claim)
workflow.add_node("review", review_claim)

workflow.add_edge(START, "enrich")
workflow.add_conditional_edges(
    "enrich",
    route_claim,
    {
        "approve": "approve",
        "reject": "reject",
        "review": "review",
    },
)
workflow.add_edge("approve", END)
workflow.add_edge("reject", END)
workflow.add_edge("review", END)

app = workflow.compile()
```
4) Run the graph with a real claim payload
In production you’d wrap this in FastAPI or consume from Kafka/SQS. The important part is that the graph returns a final state you can persist for audit.
```python
initial_state = {
    "claim_id": "CLM-10021",
    "customer_id": "CUS-8891",
    "amount": 180.0,
    "currency": "USD",
    "documents": ["receipt.pdf"],
    "risk_score": 0.0,
    "decision": "review",
    "reason": "",
    "messages": [],
}

result = app.invoke(initial_state)
print(result["decision"])
print(result["reason"])
```
If you want traceability across multiple steps or retries, use `app.stream(...)` instead of only `app.invoke(...)`. Streaming gives you step-by-step visibility into how a claim moved through the graph.
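Whatever transport you use, the final state should land in an append-only audit store. A minimal serializer sketch; the field names (`logged_at`, `graph_version`) are my own, not a LangGraph convention:

```python
import json
from datetime import datetime, timezone

def audit_record(state: dict, graph_version: str) -> str:
    """Serialize a final claim state into an append-only audit log entry."""
    record = {
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "graph_version": graph_version,
        "state": state,
    }
    # sort_keys makes entries diff-able and stable for hashing
    return json.dumps(record, sort_keys=True)
```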
Production Considerations
- Auditability
  - Persist every input state, node output, and final decision with timestamps.
  - Store model prompts and tool responses if they affect disposition.
  - This is non-negotiable for disputes and internal controls.
- Data residency
  - Keep customer PII in-region and avoid sending sensitive fields to external model endpoints unless your legal posture allows it.
  - Redact account numbers, national IDs, card data, and free-text attachments before LLM calls.
  - If you operate across jurisdictions, split routing by region early in the workflow.
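A minimal redaction pass might look like the sketch below. The regex only catches card-number-shaped digit runs; real PII detection needs a dedicated library, so treat this as illustrative:

```python
import re

# Matches 13-16 digit runs with optional spaces or dashes (card-number-shaped).
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def redact_for_llm(text: str) -> str:
    """Mask card-like numbers before the text ever reaches a model endpoint."""
    return CARD_RE.sub("[REDACTED_CARD]", text)
```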
- Guardrails
  - Hard-code approval thresholds and sanction/fraud checks outside the model.
  - Use the LLM only for extraction or ambiguous classification.
  - Never let an unconstrained prompt decide payout eligibility on its own.
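One way to enforce this is a deterministic wrapper that always gets the last word over the model's suggestion. The `sanctions_hit` field and the 250 threshold here are placeholders echoing the earlier routing example:

```python
def apply_guardrails(llm_suggestion: str, state: dict) -> str:
    """Deterministic checks always override the model's suggested disposition."""
    if state.get("sanctions_hit"):
        return "review"  # never auto-decide a sanctions-flagged claim
    if llm_suggestion == "approve" and state["amount"] > 250:
        return "review"  # the model cannot approve above the hard-coded threshold
    return llm_suggestion
```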
- Monitoring
  - Track approval rate, manual review rate, false rejection rate, latency per node, and retry counts.
  - Alert on drift in risk scores or sudden spikes in review routing.
  - In fintech, ops teams care about bad automation more than slow automation.
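As a sketch of the review-rate alerting idea, a sliding window over recent decisions is enough to start; the window size and threshold below are illustrative, not recommendations:

```python
from collections import deque

class ReviewRateMonitor:
    """Alert when the share of claims routed to manual review spikes."""

    def __init__(self, window: int = 100, threshold: float = 0.4):
        self.decisions = deque(maxlen=window)
        self.threshold = threshold

    def record(self, decision: str) -> bool:
        """Record a decision; return True if the review rate breaches the threshold."""
        self.decisions.append(decision)
        rate = self.decisions.count("review") / len(self.decisions)
        return rate > self.threshold
```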
Common Pitfalls
- Putting business rules inside prompts
  - Don’t ask the model to “decide according to policy” if policy changes need control and auditability.
  - Keep thresholds and eligibility logic in code or a rules service.
- Skipping human review paths
  - A claims agent that only approves or rejects will fail on edge cases.
  - Add a manual review branch for low confidence, missing evidence, sanctions hits, or contradictory data.
- Not versioning graph behavior
  - If you change nodes or routing logic without versioning, you lose reproducibility.
  - Version your graph definition alongside policy versions so old claims can be replayed exactly.
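The cheapest way to start is stamping every persisted claim with the versions that produced it; the tag formats here are hypothetical:

```python
GRAPH_VERSION = "claims-graph/3"    # hypothetical graph definition tag
POLICY_VERSION = "policy/2024-06"   # hypothetical rules/config tag

def stamp_versions(state: dict) -> dict:
    """Attach version tags so a stored claim can be replayed against the exact logic."""
    return {**state, "graph_version": GRAPH_VERSION, "policy_version": POLICY_VERSION}
```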
- Ignoring structured outputs
  - Free-form text decisions are hard to audit downstream.
  - Always write back normalized fields like `decision`, `reason`, `risk_score`, and source references.
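A small validator run before persisting keeps malformed dispositions out of the audit store; the required-field set here is illustrative:

```python
REQUIRED_FIELDS = {"decision", "reason", "risk_score"}
VALID_DECISIONS = {"approve", "reject", "review"}

def validate_disposition(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is audit-ready."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if record.get("decision") not in VALID_DECISIONS:
        problems.append(f"invalid decision: {record.get('decision')!r}")
    return problems
```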
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.