How to Build a Customer Support Agent Using LangGraph in Python for Fintech

By Cyprian Aarons · Updated 2026-04-21
Tags: customer-support, langgraph, python, fintech

A customer support agent for fintech handles account questions, transaction disputes, card status checks, fee explanations, and routing to human support when the request crosses policy or risk boundaries. It matters because support is not just a UX problem in fintech; it is a compliance, auditability, and customer trust problem.

Architecture

  • Ingress layer

    • Accepts chat messages from web, mobile, or internal tools.
    • Normalizes user input into a single request shape.
  • State model

    • Stores conversation context, customer metadata, and routing decisions.
    • Keeps the graph deterministic and inspectable.
  • Policy router

    • Decides whether the request can be answered by the agent.
    • Escalates sensitive cases like chargebacks, fraud claims, KYC changes, or complaints.
  • Tool layer

    • Connects to approved fintech systems: account lookup, transaction search, card status, ticket creation.
    • Must enforce least privilege and tenant isolation.
  • Response composer

    • Turns tool outputs into customer-facing answers.
    • Applies compliance language and avoids leaking internal data.
  • Audit logger

    • Persists state transitions, tool calls, and final responses.
    • Needed for investigations, model governance, and regulatory review.
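
As a rough sketch of the ingress layer's "single request shape": the field names below are illustrative assumptions, not a fixed schema, but they show the idea of collapsing channel-specific payloads into one normalized form before anything reaches the graph.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SupportRequest:
    # Illustrative normalized shape; field names are assumptions, not a spec.
    customer_id: str
    channel: str  # e.g. "web", "mobile", or "internal"
    text: str
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def normalize(raw: dict) -> SupportRequest:
    # Collapse channel-specific payloads into the single request shape.
    return SupportRequest(
        customer_id=str(raw.get("customer_id", "")),
        channel=raw.get("channel", "web"),
        text=(raw.get("message") or raw.get("text") or "").strip(),
    )
```

Everything downstream (state model, router, tools) then deals with one shape instead of per-channel payloads.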

Implementation

1. Define the graph state and helper functions

For fintech support, keep state explicit. You want every decision path to be explainable later.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage

class SupportState(TypedDict):
    messages: Annotated[list, add_messages]
    intent: str
    risk_level: str
    answer: str

def classify_intent(state: SupportState) -> dict:
    text = state["messages"][-1].content.lower()
    if "chargeback" in text or "fraud" in text:
        return {"intent": "high_risk", "risk_level": "high"}
    if "balance" in text or "fee" in text or "card" in text:
        return {"intent": "account_support", "risk_level": "low"}
    return {"intent": "general_support", "risk_level": "medium"}

def answer_support(state: SupportState) -> dict:
    msg = state["messages"][-1].content
    return {
        "answer": f"I can help with that request: {msg}. If you need account-specific details, I'll route to an approved backend tool."
    }

def escalate(state: SupportState) -> dict:
    return {
        "answer": (
            "This request needs human review due to policy constraints. "
            "I’ve created an escalation path for support."
        )
    }

2. Build routing with StateGraph and conditional edges

This is the core LangGraph pattern: classify first, then route based on risk. add_conditional_edges keeps the policy visible instead of burying it inside a prompt.

def route_by_risk(state: SupportState) -> str:
    if state["risk_level"] == "high":
        return "escalate"
    return "answer"

builder = StateGraph(SupportState)

builder.add_node("classify", classify_intent)
builder.add_node("answer", answer_support)
builder.add_node("escalate", escalate)

builder.add_edge(START, "classify")
builder.add_conditional_edges(
    "classify",
    route_by_risk,
    {
        "answer": "answer",
        "escalate": "escalate",
    },
)
builder.add_edge("answer", END)
builder.add_edge("escalate", END)

graph = builder.compile()

3. Invoke the graph with a real customer message

Use a structured state payload. In production you would inject customer tier, jurisdiction, and residency flags before invoking the graph.

initial_state = {
    "messages": [HumanMessage(content="Why was I charged a fee this month?")],
    "intent": "",
    "risk_level": "",
    "answer": "",
}

result = graph.invoke(initial_state)

print(result["intent"])
print(result["risk_level"])
print(result["answer"])

4. Add a tool-backed branch for approved account lookups

When you need real account data, keep it behind an explicit node. Do not let the model free-form query your core banking system.

def fetch_account_summary(state: SupportState) -> dict:
    # Replace with a real internal API call using service credentials.
    # Enforce tenant checks and jurisdiction filters here.
    summary = {
        "available_balance": "$1,240.55",
        "recent_fee": "$12 monthly maintenance fee",
        "status": "active",
    }
    return {
        "answer": (
            f"Your account is {summary['status']}. "
            f"Available balance is {summary['available_balance']}. "
            f"Recent charge: {summary['recent_fee']}."
        )
    }

If you want this branch only for low-risk intents, route on risk first so a high-risk request can never reach the lookup node:

def route_lookup(state: SupportState) -> str:
    if state["risk_level"] == "high":
        return "escalate"
    return "lookup"

builder = StateGraph(SupportState)
builder.add_node("classify", classify_intent)
builder.add_node("lookup", fetch_account_summary)
builder.add_node("escalate", escalate)

builder.add_edge(START, "classify")
builder.add_conditional_edges(
    "classify",
    route_lookup,
    {"lookup": "lookup", "escalate": "escalate"},
)
builder.add_edge("lookup", END)
builder.add_edge("escalate", END)

graph = builder.compile()
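
The audit logger from the architecture section never appears in the code above. One minimal sketch, assuming an append-only sink (a plain list here, standing in for whatever immutable store you actually use), is to wrap each node function before registering it:

```python
import time
import uuid

AUDIT_LOG = []  # stand-in for an append-only audit store

def audited(name, fn):
    # Wrap a graph node so every state update is recorded with an immutable ID.
    def wrapper(state):
        update = fn(state)
        AUDIT_LOG.append({
            "event_id": str(uuid.uuid4()),
            "node": name,
            "ts": time.time(),
            # Skip raw messages so PII does not land in the audit trail.
            "update": {k: v for k, v in update.items() if k != "messages"},
        })
        return update
    return wrapper

# Usage when building the graph:
#   builder.add_node("classify", audited("classify", classify_intent))
```

Because the wrapper sees only the returned state update, the log captures routing decisions (intent, risk_level) without duplicating conversation content.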

Production Considerations

  • Compliance controls

    • Block advice on regulated topics like fraud claims resolution or legal complaints unless routed through approved workflows.
    • Log every escalation decision with timestamps and immutable IDs for audit review.
  • Data residency

    • Keep customer PII in-region and avoid sending raw account data to external model providers unless your legal team has approved that flow.
    • Redact identifiers before they enter prompts or tracing systems.
  • Monitoring

    • Track escalation rate, hallucination rate on account-specific answers, tool failure rate, and median time-to-resolution.
    • Alert when high-risk intents are answered without human handoff.
  • Guardrails

    • Add policy checks before any node that touches customer data.
    • Use allowlisted tools only; no arbitrary function calling into internal systems.
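
Redaction before prompts or tracing can start as a simple pattern pass. The regexes below are illustrative assumptions, not a complete PII catalogue; a real deployment would tune them per jurisdiction and card scheme.

```python
import re

# Illustrative patterns only; extend for your jurisdictions and identifiers.
PATTERNS = [
    (re.compile(r"\b\d{13,19}\b"), "[CARD]"),                 # candidate PANs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN format
]

def redact(text: str) -> str:
    # Replace obvious identifiers before text enters prompts or tracing systems.
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text
```

Run this on every message before it reaches a model provider or an observability backend, not after.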

Common Pitfalls

  • Putting compliance logic only in prompts

    • Prompts are not controls. Put policy in graph nodes and routing functions so it is testable and auditable.
  • Letting the agent access raw banking APIs directly

    • Wrap every backend call in a dedicated node with validation, authorization checks, and field-level filtering.
  • Ignoring conversation state shape

    • If your TypedDict is sloppy, your graph becomes hard to debug. Keep messages, intent labels, risk flags, and final response fields explicit from day one.
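
Because routing lives in a plain function rather than a prompt, the escalation policy is directly unit-testable. A minimal sketch, with route_by_risk repeated from step 2 so the example is self-contained:

```python
# route_by_risk repeated from step 2 so this example runs on its own.
def route_by_risk(state: dict) -> str:
    if state["risk_level"] == "high":
        return "escalate"
    return "answer"

def test_high_risk_always_escalates():
    assert route_by_risk({"risk_level": "high"}) == "escalate"

def test_low_and_medium_risk_are_answered():
    assert route_by_risk({"risk_level": "low"}) == "answer"
    assert route_by_risk({"risk_level": "medium"}) == "answer"

test_high_risk_always_escalates()
test_low_and_medium_risk_are_answered()
```

A prompt-only policy cannot be asserted on like this; a routing function can sit in CI next to the rest of your compliance checks.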

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit: architecture templates, compliance checklists, and a 7-email deep-dive course.
