How to Build a Customer Support Agent for Payments Using LangGraph in Python

By Cyprian Aarons · Updated 2026-04-21
customer-support · langgraph · python · payments

A payments customer support agent handles the repetitive, high-volume cases that hit support teams every day: failed card charges, delayed bank transfers, refund status checks, and dispute updates. It matters because these requests are time-sensitive, regulated, and full of sensitive data, so the agent has to be accurate, auditable, and tightly controlled.

Architecture

A production payments support agent built with LangGraph usually needs these components:

  • Conversation state

    • Stores the user message, extracted payment intent, case metadata, and tool results.
    • Keep this minimal and structured. Do not put raw PII into state unless a downstream node genuinely needs it.
  • Intent router

    • Classifies the request into payment support categories like refund_status, charge_failed, payout_delay, or dispute.
    • This is where you decide whether to answer directly or call a backend system.
  • Policy and compliance gate

    • Checks whether the request can be handled automatically.
    • Blocks risky flows like card number handling, account takeover signals, or requests that require human review.
  • Payment tools

    • Calls internal APIs for transaction lookup, refund status, ledger reconciliation, or ticket creation.
    • These should be deterministic and idempotent.
  • Response generator

    • Turns structured tool output into a customer-facing reply.
    • Must avoid exposing internal identifiers or sensitive banking details.
  • Escalation path

    • Routes unresolved or high-risk cases to a human support queue.
    • For payments, escalation is not optional; it is part of the control plane.

Implementation

1) Define the graph state and tools

Use a typed state object so each node knows exactly what it can read and write. For payments support, keep the state narrow: user input, intent, tool output, risk flags, and final response.

from typing import TypedDict, Annotated, Optional
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool

class SupportState(TypedDict):
    messages: Annotated[list, add_messages]
    intent: Optional[str]
    risk_flag: bool
    tool_result: Optional[dict]
    response: Optional[str]

@tool
def lookup_refund_status(refund_id: str) -> dict:
    """Look up the current status of a refund by its refund ID."""
    # Replace with your real payments service call
    return {"refund_id": refund_id, "status": "processing", "eta_days": 3}

@tool
def lookup_charge_status(transaction_id: str) -> dict:
    """Look up the status of a card charge by its transaction ID."""
    # Replace with your real payments service call
    return {"transaction_id": transaction_id, "status": "declined", "reason": "insufficient_funds"}
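The lookup nodes later in this guide hard-code their IDs; in production the ID should come from structured parsing of the user message. A minimal sketch, assuming for illustration that refund IDs look like `rf_` followed by digits (a made-up format, not a real one):

```python
import re
from typing import Optional

# Assumed ID format for illustration only: "rf_" followed by digits.
REFUND_ID_RE = re.compile(r"\brf_\d+\b")

def extract_refund_id(text: str) -> Optional[str]:
    """Pull a refund ID out of free-form customer text, or None if absent."""
    match = REFUND_ID_RE.search(text)
    return match.group() if match else None

print(extract_refund_id("Where is refund rf_12345?"))  # rf_12345
print(extract_refund_id("Where is my refund?"))        # None
```

When no ID is found, the graph should ask a follow-up question or escalate rather than guess.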

2) Add routing and compliance nodes

The router should classify the case using simple rules first. In payments support systems, deterministic routing beats clever routing because you need predictable behavior for audits.

def route_intent(state: SupportState) -> dict:
    text = state["messages"][-1].content.lower()
    if "refund" in text:
        return {"intent": "refund_status", "risk_flag": False}
    if "charge" in text or "payment failed" in text:
        return {"intent": "charge_failed", "risk_flag": False}
    if any(x in text for x in ["card number", "cvv", "full account"]):
        return {"intent": "restricted", "risk_flag": True}
    return {"intent": "unknown", "risk_flag": True}

def compliance_gate(state: SupportState) -> str:
    if state["risk_flag"]:
        return "escalate"
    if state["intent"] == "refund_status":
        return "refund_lookup"
    if state["intent"] == "charge_failed":
        return "charge_lookup"
    return "escalate"
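Because `route_intent` and `compliance_gate` are plain functions, the routing table can be checked without compiling a graph. This sketch repeats the logic above and swaps `HumanMessage` for a minimal stub (the router only reads `.content`), so it runs without LangChain installed:

```python
from dataclasses import dataclass

@dataclass
class Msg:
    # Minimal stand-in for HumanMessage: route_intent only reads .content
    content: str

def route_intent(state: dict) -> dict:
    text = state["messages"][-1].content.lower()
    if "refund" in text:
        return {"intent": "refund_status", "risk_flag": False}
    if "charge" in text or "payment failed" in text:
        return {"intent": "charge_failed", "risk_flag": False}
    if any(x in text for x in ["card number", "cvv", "full account"]):
        return {"intent": "restricted", "risk_flag": True}
    return {"intent": "unknown", "risk_flag": True}

def compliance_gate(state: dict) -> str:
    if state["risk_flag"]:
        return "escalate"
    if state["intent"] == "refund_status":
        return "refund_lookup"
    if state["intent"] == "charge_failed":
        return "charge_lookup"
    return "escalate"

# Table-test the router and gate together on representative inputs
for text, expected in [
    ("Where is my refund?", "refund_lookup"),
    ("My payment failed this morning", "charge_lookup"),
    ("Here is my card number", "escalate"),
    ("Something unrelated", "escalate"),
]:
    update = route_intent({"messages": [Msg(text)]})
    assert compliance_gate(update) == expected
print("routing checks passed")
```

Deterministic rules like these are easy to table-test, which is exactly what an audit will ask for.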

3) Call tools and format a safe response

This is where LangGraph shines. Each branch does one thing well: fetch data or escalate. The response node should sanitize everything before returning it to the customer.

def refund_lookup(state: SupportState) -> dict:
    # In production, extract refund_id from the user message via structured parsing
    result = lookup_refund_status.invoke({"refund_id": "rf_12345"})
    return {"tool_result": result}

def charge_lookup(state: SupportState) -> dict:
    result = lookup_charge_status.invoke({"transaction_id": "tx_98765"})
    return {"tool_result": result}

def escalate(state: SupportState) -> dict:
    return {
        "response": (
            "I’ve routed this to a specialist for review. "
            "For payment security reasons I can’t complete this request here."
        )
    }

def generate_response(state: SupportState) -> dict:
    result = state["tool_result"]
    if not result:
        return {"response": "I couldn’t find a matching payment record."}

    if state["intent"] == "refund_status":
        return {
            "response": (
                f"Your refund {result['refund_id']} is currently {result['status']}. "
                f"Estimated completion time is {result['eta_days']} business days."
            )
        }

    if state["intent"] == "charge_failed":
        return {
            "response": (
                f"Your payment was declined due to {result['reason']}. "
                f"Please retry with another funding source."
            )
        }

    return {"response": "I couldn’t resolve this automatically."}
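The sanitization step can be made explicit with a per-intent allowlist applied to tool output before rendering. This is a sketch, not part of the code above; the field names follow the stub tools, and `ledger_ref` is a hypothetical internal key:

```python
# Hypothetical allowlist of fields that may appear in customer-facing text.
SAFE_FIELDS = {
    "refund_status": {"refund_id", "status", "eta_days"},
    "charge_failed": {"status", "reason"},  # internal transaction_id is dropped
}

def sanitize_result(intent: str, result: dict) -> dict:
    # Keep only allowlisted keys; internal IDs and ledger refs are stripped
    allowed = SAFE_FIELDS.get(intent, set())
    return {k: v for k, v in result.items() if k in allowed}

raw = {"transaction_id": "tx_98765", "status": "declined",
       "reason": "insufficient_funds", "ledger_ref": "lg_internal_001"}
print(sanitize_result("charge_failed", raw))
# {'status': 'declined', 'reason': 'insufficient_funds'}
```

An allowlist fails closed: any key a tool starts returning tomorrow stays out of customer-facing text until someone deliberately adds it.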

4) Wire the LangGraph workflow

This is the actual graph pattern you want in production: route first, gate second, then branch into lookup or escalation.

graph = StateGraph(SupportState)

graph.add_node("router", route_intent)
graph.add_node("refund_lookup", refund_lookup)
graph.add_node("charge_lookup", charge_lookup)
graph.add_node("escalate", escalate)
graph.add_node("generate_response", generate_response)

graph.add_edge(START, "router")
graph.add_conditional_edges(
    "router",
    compliance_gate,
    {
        "refund_lookup": "refund_lookup",
        "charge_lookup": "charge_lookup",
        "escalate": "escalate",
    },
)
graph.add_edge("refund_lookup", "generate_response")
graph.add_edge("charge_lookup", "generate_response")
graph.add_edge("generate_response", END)
graph.add_edge("escalate", END)

app = graph.compile()

result = app.invoke({
    "messages": [HumanMessage(content="What is the status of my refund?")],
    "intent": None,
    "risk_flag": False,
    "tool_result": None,
    "response": None,
})

print(result["response"])

Production Considerations

  • Auditability

    • Log every node transition with a correlation ID.
    • Store intent decisions, tool calls, and escalation reasons for compliance review.
    • For disputes and refunds, auditors will want to see why the agent chose a path.
  • Data residency

    • Keep processing inside the required region if you handle payment data tied to local regulations.
    • Do not ship raw PII or transaction details to external services unless your legal posture allows it.
    • If you use an LLM API, check where prompts and traces are stored.
  • Guardrails

    • Block card numbers, CVV values, PINs, and full bank account numbers from being echoed back.
    • Add explicit human escalation for chargebacks, fraud claims, KYC issues, and account access problems.
    • Use allowlisted tool actions only; no free-form API execution.
  • Monitoring

    • Track resolution rate by intent type.
    • Alert on high escalation rates for one payment rail or merchant category.
    • Monitor latency on backend lookups because payment support degrades fast when internal APIs slow down.
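The first two auditability points can be implemented with a thin wrapper applied when nodes are registered. A sketch, assuming each node is a plain `state -> dict` function and that the state carries a `correlation_id` field (an addition to the schema shown earlier):

```python
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("support-agent")

def audited(node_name: str, fn):
    # Wraps a LangGraph-style node (state -> dict) with structured audit logging.
    @wraps(fn)
    def wrapper(state: dict) -> dict:
        started = time.monotonic()
        update = fn(state)
        log.info(json.dumps({
            "correlation_id": state.get("correlation_id"),
            "node": node_name,
            "keys_written": sorted(update),
            "duration_ms": round((time.monotonic() - started) * 1000, 2),
        }))
        return update
    return wrapper

# Demo with a stub node standing in for route_intent
def route_stub(state: dict) -> dict:
    return {"intent": "refund_status", "risk_flag": False}

node = audited("router", route_stub)
update = node({"correlation_id": "corr-123"})
print(update["intent"])  # refund_status
```

Registering the wrapped node then looks like `graph.add_node("router", audited("router", route_intent))`.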

Common Pitfalls

  • Putting too much logic in one node

    • Avoid giant “agent” functions that classify intent, call tools, and write responses all at once.
    • Split routing, policy checks, tool execution, and rendering into separate nodes so failures are easier to isolate.
  • Leaking sensitive payment data into responses

    • Never echo full PANs, CVVs, bank account numbers, or internal ledger references.
    • Return masked identifiers and customer-safe summaries only.
  • Skipping escalation paths

    • Payments support cannot be fully automated.
    • If fraud signals appear or the user asks for anything regulated or ambiguous, route to a human immediately instead of guessing.
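For the data-leak pitfall, a last-line regex guard over the final response text catches full card numbers before they reach the customer. A minimal sketch; a production filter should also handle digit groups with separators, IBANs, and CVV patterns:

```python
import re

# Matches 13-19 consecutive digits, the typical PAN length range.
PAN_RE = re.compile(r"\b\d{13,19}\b")

def mask_pans(text: str) -> str:
    # Replace all but the last four digits with a fixed mask.
    return PAN_RE.sub(lambda m: "**** **** **** " + m.group()[-4:], text)

print(mask_pans("Charge on card 4242424242424242 was declined."))
# Charge on card **** **** **** 4242 was declined.
```

Run this on every outbound response, including escalation messages, so a tool or model can never echo a full PAN.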

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

