How to Build a Claims Processing Agent for Payments Using LangChain in Python

By Cyprian Aarons · Updated 2026-04-21
claims-processing · langchain · python · payments

A claims processing agent for payments takes an incoming claim, extracts the key fields, checks policy and payment rules, decides whether it can be auto-approved, and routes exceptions to a human. In payments, that matters because bad automation creates chargebacks, compliance issues, duplicate payouts, and audit gaps.

Architecture

  • Claim intake layer

    • Accepts JSON from your API, webhook, or queue.
    • Normalizes fields like claim_id, amount, currency, merchant_id, and reason_code.
  • LangChain orchestration layer

    • Uses a ChatPromptTemplate plus a chat model to classify and extract structured outputs.
    • Keeps the agent deterministic enough for finance workflows.
  • Policy/rules engine

    • Applies hard checks outside the LLM:
      • amount thresholds
      • duplicate claim detection
      • allowed reason codes
      • KYC/AML flags
    • This is where payment compliance lives.
  • Payment system tools

    • Tools for looking up transaction history, settlement status, and prior disputes.
    • Expose only narrow functions through LangChain tools.
  • Decision and audit layer

    • Produces an approval/reject/escalate decision with a traceable rationale.
    • Writes every input/output pair to immutable logs for audit.
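The intake layer above can be sketched as a small normalization step that runs before anything else. This is a minimal, hypothetical example; the field aliases and defaults are assumptions, not a fixed contract with any real provider:

```python
# Minimal intake normalization sketch: map a raw webhook payload onto the
# canonical claim fields before validation runs. The alias map below is
# hypothetical; adjust it to your actual provider's payload shape.
RAW_ALIASES = {
    "claimId": "claim_id",
    "merchantId": "merchant_id",
    "txnId": "transaction_id",
    "reasonCode": "reason_code",
}

def normalize_claim(raw: dict) -> dict:
    claim = {}
    for key, value in raw.items():
        claim[RAW_ALIASES.get(key, key)] = value
    # Coerce amount to a number and default the currency, mirroring the schema.
    claim["amount"] = float(claim.get("amount", 0))
    claim.setdefault("currency", "USD")
    return claim

normalized = normalize_claim({
    "claimId": "clm_1001",
    "merchantId": "m_7788",
    "amount": "49.99",
    "reasonCode": "service_not_received",
    "txnId": "tx_9001",
})
print(normalized["claim_id"], normalized["amount"], normalized["currency"])
```

Doing this once at the edge means every downstream layer sees one canonical shape, regardless of which webhook or queue the claim arrived on.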

Implementation

1) Define the claim schema and the tool functions

Use Pydantic for structured input/output. Keep business logic in Python, not inside prompts.

from typing import Literal, Optional
from pydantic import BaseModel, Field

class ClaimInput(BaseModel):
    claim_id: str
    merchant_id: str
    amount: float
    currency: str = Field(default="USD")
    reason_code: str
    transaction_id: str

class ClaimDecision(BaseModel):
    decision: Literal["approve", "reject", "escalate"]
    confidence: float = Field(ge=0.0, le=1.0)
    rationale: str
    duplicate_suspected: bool = False
    compliance_flag: bool = False

def lookup_transaction(transaction_id: str) -> dict:
    # Replace with real payments DB/API call.
    return {
        "transaction_id": transaction_id,
        "status": "settled",
        "amount": 49.99,
        "currency": "USD",
        "chargeback_count_30d": 0,
    }

def is_duplicate_claim(claim_id: str) -> bool:
    # Replace with idempotency store / claims DB lookup.
    return False

2) Build a structured extraction chain with LangChain

For claims processing, you want structured output, not free-form text. PydanticOutputParser gives you a typed contract.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import PydanticOutputParser
from langchain_openai import ChatOpenAI

parser = PydanticOutputParser(pydantic_object=ClaimDecision)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a payments claims reviewer. "
     "Use only the provided data. "
     "Do not invent facts. "
     "Return a decision that follows the schema."),
    ("human",
     "Claim:\n{claim}\n\nTransaction:\n{transaction}\n\n"
     "{format_instructions}")
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

chain = prompt | llm | parser

claim = ClaimInput(
    claim_id="clm_1001",
    merchant_id="m_7788",
    amount=49.99,
    currency="USD",
    reason_code="service_not_received",
    transaction_id="tx_9001",
)

transaction = lookup_transaction(claim.transaction_id)

result = chain.invoke({
    "claim": claim.model_dump(),
    "transaction": transaction,
    "format_instructions": parser.get_format_instructions(),
})

print(result.model_dump())

3) Add hard rules before the model makes a recommendation

Do not let the LLM override payment policy. Use Python for deterministic gates, then let LangChain handle judgment on ambiguous cases.

def rule_based_gate(claim: ClaimInput, tx: dict) -> Optional[ClaimDecision]:
    if is_duplicate_claim(claim.claim_id):
        return ClaimDecision(
            decision="reject",
            confidence=1.0,
            rationale="Duplicate claim detected in idempotency store.",
            duplicate_suspected=True,
            compliance_flag=False,
        )

    if tx["status"] != "settled":
        return ClaimDecision(
            decision="escalate",
            confidence=0.95,
            rationale=f"Transaction status is {tx['status']}, not settled.",
            compliance_flag=True,
        )

    # Compare amounts with a tolerance; exact float equality is fragile.
    # In production, store amounts as integer minor units instead.
    if abs(claim.amount - tx["amount"]) > 1e-6 or claim.currency != tx["currency"]:
        return ClaimDecision(
            decision="escalate",
            confidence=0.9,
            rationale="Claim amount or currency does not match original transaction.",
            compliance_flag=True,
        )

    return None

gate_result = rule_based_gate(claim, transaction)
if gate_result:
    print(gate_result.model_dump())
else:
    print(result.model_dump())

4) Wrap it in an auditable service flow

The key pattern is: validate → gate → LLM review → persist decision. That gives you traceability for disputes and regulators.

from datetime import datetime, timezone
import hashlib
import json

def process_claim(claim_payload: dict) -> dict:
    claim = ClaimInput(**claim_payload)
    tx = lookup_transaction(claim.transaction_id)

    precheck = rule_based_gate(claim, tx)
    if precheck:
        decision = precheck
        source = "rules"
    else:
        decision = chain.invoke({
            "claim": claim.model_dump(),
            "transaction": tx,
            "format_instructions": parser.get_format_instructions(),
        })
        source = "llm"

    canonical_input = json.dumps(claim.model_dump(), sort_keys=True)
    audit_event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "claim_id": claim.claim_id,
        "merchant_id": claim.merchant_id,
        "source": source,
        "decision": decision.model_dump(),
        # Use a stable content hash; Python's built-in hash() is salted
        # per process and is useless for an audit trail.
        "input_hash": hashlib.sha256(canonical_input.encode()).hexdigest(),
    }

    # Write audit_event to your immutable log / SIEM / warehouse here.
    return audit_event

print(process_claim(claim.model_dump()))

Production Considerations

  • Keep payment decisions explainable

    • Store the exact prompt version, model version, tool outputs, and final decision.
    • If finance or compliance asks why a payout was blocked, you need a full trail.
  • Separate residency-sensitive data

    • Claims often contain PII and card-linked metadata.
    • Keep EU customer data in-region and avoid sending raw PANs or unnecessary identifiers to external models.
  • Add strict guardrails

    • Enforce allowlisted reason codes and bounded amounts before any model call.
    • Reject malformed payloads at the API edge with schema validation.
  • Monitor drift and exception rates

    • Track auto-approval rate, escalation rate, duplicate rejection rate, and false positives.
    • A sudden spike usually means upstream fraud patterns changed or your prompts drifted.
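The monitoring rates above can be computed directly from the audit events the service already persists. A minimal sketch, assuming each event carries the `decision` field produced by `process_claim`:

```python
from collections import Counter

def decision_rates(audit_events: list[dict]) -> dict:
    # Count final decisions across a batch of audit events and return
    # the rates worth alerting on: auto-approval, escalation, rejection.
    counts = Counter(e["decision"]["decision"] for e in audit_events)
    total = len(audit_events) or 1
    return {
        "auto_approval_rate": counts["approve"] / total,
        "escalation_rate": counts["escalate"] / total,
        "rejection_rate": counts["reject"] / total,
    }

events = [
    {"decision": {"decision": "approve"}},
    {"decision": {"decision": "approve"}},
    {"decision": {"decision": "escalate"}},
    {"decision": {"decision": "reject"}},
]
print(decision_rates(events))
```

Run this over a rolling window and alert on deviation from a baseline; a sudden shift in any rate is your earliest signal that fraud patterns or prompt behavior changed.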

Common Pitfalls

  • Letting the LLM decide on raw payment facts

    • Bad pattern: asking the model to infer settlement status or duplicate claims from vague text.
    • Fix: fetch authoritative data from your ledger or PSP first.
  • Skipping idempotency

    • Duplicate webhook delivery will happen.
    • Fix: store claim_id in an idempotency table and short-circuit repeats before calling the model.
  • Logging sensitive data without controls

    • Claims payloads can include names, emails, account references, and dispute notes.
    • Fix: redact PII before logs leave the service boundary and keep audit storage access-controlled.
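The idempotency fix can be as simple as a keyed store checked before any model call. An in-memory sketch of the short-circuit; in production, back this with a database table or Redis with a TTL, not process memory:

```python
# In-memory idempotency store sketch. The is_duplicate_claim stub earlier
# would be backed by something durable; this only shows the short-circuit.
_seen_claims: set[str] = set()

def seen_before(claim_id: str) -> bool:
    # Returns True on repeats; records the claim_id on first sight.
    if claim_id in _seen_claims:
        return True
    _seen_claims.add(claim_id)
    return False

print(seen_before("clm_1001"))  # first delivery
print(seen_before("clm_1001"))  # duplicate webhook delivery
```

The check must happen before the LLM call and before any payout side effect, so a redelivered webhook costs one dictionary lookup instead of a second decision.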

For payments teams, the right pattern is not “agent first.” It is rules first, agent second, audit always. That keeps automation useful without turning claims into an uncontrolled liability.


By Cyprian Aarons, AI Consultant at Topiax.