How to Build a Customer Support Agent Using LangChain in Python for Payments

By Cyprian Aarons · Updated 2026-04-21
customer-support · langchain · python · payments

A payments customer support agent answers account and transaction questions, checks payment status, explains failures, and routes risky cases to humans. That matters because payment support is not generic support: you need strict auditability, redaction of sensitive data, and controlled responses for disputes, refunds, chargebacks, and compliance-heavy workflows.

Architecture

  • Chat model

    • Use a hosted LLM with deterministic settings for support flows.
    • Keep temperature low so the agent does not invent policy or payment status.
  • Prompt layer

    • A system prompt that constrains behavior to payments support.
    • Include escalation rules for fraud, chargebacks, failed KYC, and refund disputes.
  • Tools

    • Transaction lookup tool
    • Refund status tool
    • Policy/FAQ retrieval tool
    • Human handoff tool
  • Retriever

    • Backed by approved internal docs only.
    • Used for fee policies, settlement timelines, dispute windows, and regional rules.
  • Memory / conversation state

    • Store only what is needed for the session.
    • Avoid persisting PANs, CVVs, or full bank details.
  • Guardrails and logging

    • Redact sensitive fields before they hit logs.
    • Keep an immutable audit trail of tool calls and final responses.
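The audit-trail bullet above can be sketched as an append-only log in which each entry is hash-chained to the previous one, so tampering with any record invalidates everything after it. This is a minimal pure-Python sketch; the names (`AuditRecord`, `append_record`) are illustrative, not from LangChain or any library.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """One immutable entry per tool call or final response."""
    request_id: str
    event: str    # e.g. "tool_call" or "final_response"
    payload: str  # already-redacted content only
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(log: list, record: AuditRecord, prev_hash: str) -> str:
    """Append a record chained to the previous entry's hash; return the new hash."""
    entry = {**asdict(record), "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append({**entry, "hash": entry_hash})
    return entry_hash
```

In production the log would go to append-only storage (e.g. a WORM bucket or a dedicated audit table) rather than an in-memory list; the hash chain is what makes after-the-fact edits detectable.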

Implementation

1) Install the right packages

Use the current LangChain split packages. For a simple production-style setup you need the core chain primitives plus a chat model and a vector store.

pip install langchain langchain-core langchain-openai langchain-community faiss-cpu tiktoken

Set your model key in the environment:

export OPENAI_API_KEY="your-key"
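A small fail-fast check at startup avoids a confusing stack trace on the first live request if the key is missing. This helper (`require_api_key`) is a sketch, not part of LangChain:

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Fail fast with a clear error if the model key is not set."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"{name} is not set; export it before starting the agent.")
    return value
```

Call it once at process startup, before constructing the chat model.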

2) Build a payments FAQ retriever

For support agents, retrieval is better than letting the model guess policy. Put approved payment docs into a vector store and retrieve only from that corpus.

from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_core.documents import Document

docs = [
    Document(page_content="Refunds take 5-10 business days after approval."),
    Document(page_content="Chargebacks must be disputed within 45 days."),
    Document(page_content="We never ask for full card numbers or CVV in chat."),
]

embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

3) Define tools and assemble the agent

This pattern uses @tool, create_retriever_tool, ChatPromptTemplate, and create_tool_calling_agent. The agent can answer from policy docs or call operational tools when it needs live data.

from typing import Optional
from langchain_core.tools import tool, create_retriever_tool
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import AgentExecutor, create_tool_calling_agent

@tool
def lookup_payment_status(payment_id: str) -> str:
    """Look up a payment's current status."""
    # Replace with your real payments API call.
    if payment_id == "pay_123":
        return "Payment pay_123 is settled."
    return f"Payment {payment_id} is pending review."

@tool
def create_human_handoff(reason: str) -> str:
    """Escalate the conversation to a human agent."""
    return f"Handoff created for reason: {reason}"

faq_tool = create_retriever_tool(
    retriever,
    name="payments_policy_search",
    description="Search approved internal documentation for payment policies."
)

tools = [lookup_payment_status, create_human_handoff, faq_tool]

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a payments support agent. "
     "Do not request or repeat card numbers, CVV, or full bank details. "
     "Use tools for live status checks. "
     "Escalate fraud, chargebacks, disputes, sanctions concerns, or KYC issues."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({
    "input": "What is the status of payment pay_123 and how long do refunds take?"
})

print(result["output"])

4) Add redaction before logging or persistence

Payments support systems should never store raw card data in traces. Redact known sensitive patterns before writing application logs or sending messages to observability systems.

import re

SENSITIVE_PATTERNS = [
    # 13-19 digits with optional spaces/dashes: candidate card numbers (PANs).
    (re.compile(r"\b(?:\d[ -]*?){13,19}\b"), "[REDACTED_CARD]"),
    # Any bare 3-4 digit number. Deliberately aggressive: it also catches
    # amounts and years, which is acceptable for logs but too lossy for
    # user-facing text.
    (re.compile(r"\b\d{3,4}\b"), "[REDACTED_CVV_OR_CODE]"),
]

def redact(text: str) -> str:
    output = text
    for pattern, replacement in SENSITIVE_PATTERNS:
        output = pattern.sub(replacement, output)
    return output

user_message = "My card is 4242 4242 4242 4242 and my CVV is 123"
safe_message = redact(user_message)
print(safe_message)
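The card pattern above will also match long non-card numbers such as order IDs. A common mitigation (a sketch under that assumption, not part of the article's pipeline) is to confirm a candidate match with a Luhn checksum before treating it as a card number:

```python
def luhn_valid(candidate: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by card numbers."""
    digits = [int(c) for c in candidate if c.isdigit()]
    if not 13 <= len(digits) <= 19:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

You would run this on each card-pattern match and redact only when it returns True, while still redacting unconditionally in high-sensitivity sinks like third-party observability tools.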

Production Considerations

  • Compliance controls

    • Block requests that try to collect PANs, CVVs, OTPs, or full account credentials.
    • Keep response templates aligned with PCI DSS scope reduction and internal compliance policy.
  • Auditability

    • Log tool invocations separately from user-visible chat content.
    • Store timestamps, request IDs, retrieved document IDs, and escalation decisions.
  • Data residency

    • Keep embeddings stores and conversation state in-region if your payment program has residency requirements.
    • Do not send regulated customer data to external services unless your legal team has cleared it.
  • Operational safety

    • Put hard timeouts on live payment API calls.
    • If a transaction lookup fails twice or returns ambiguous status codes, route to human handoff instead of guessing.
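The two operational-safety bullets above can be sketched as a small wrapper around the live lookup. `lookup_fn`, the attempt limit, and the `ESCALATE_TO_HUMAN` sentinel are illustrative names; real timeout enforcement belongs on the HTTP client call itself (e.g. the `timeout` parameter of your payments SDK or `requests`).

```python
from typing import Callable

MAX_ATTEMPTS = 2
AMBIGUOUS = {"", "unknown", "error"}

def safe_status(lookup_fn: Callable[[str], str], payment_id: str) -> str:
    """Try the live lookup at most twice; route anything unclear to a human."""
    for _ in range(MAX_ATTEMPTS):
        try:
            status = lookup_fn(payment_id)
        except TimeoutError:
            continue  # transient failure: retry once, then give up
        if status.lower() not in AMBIGUOUS:
            return status
    return "ESCALATE_TO_HUMAN"  # never guess on payments
```

The key design choice is that the fallback is a handoff, not a best-effort answer: an ambiguous status surfaced to a customer as fact is worse than a short wait for a human.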

Common Pitfalls

  1. Letting the model answer from memory on payment status

    • Bad pattern: “It looks settled” without checking the ledger.
    • Fix: force live status checks through lookup_payment_status or equivalent backend APIs.
  2. Logging sensitive payment data

    • Bad pattern: dumping raw prompts into application logs or tracing tools.
    • Fix: redact before logging and keep secrets out of conversation memory entirely.
  3. Using one generic prompt for all support cases

    • Bad pattern: treating refunds, disputes, fraud alerts, and billing questions the same way.
    • Fix: add explicit escalation logic in the system prompt and separate flows for policy Q&A versus account actions.
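One way to implement the fix in pitfall 3 is a deterministic pre-router that runs before the LLM ever sees the message, so fraud and dispute traffic never depends on the model following its prompt. The keyword list and queue names below are assumptions for illustration, not a complete classifier:

```python
# Maps trigger keywords to the human queue that must handle them.
ESCALATION_KEYWORDS = {
    "fraud": "fraud_team",
    "chargeback": "disputes_team",
    "dispute": "disputes_team",
    "kyc": "compliance_team",
    "sanction": "compliance_team",
}

def route(message: str) -> str:
    """Return a human queue for messages that must bypass the agent, else 'agent'."""
    lowered = message.lower()
    for keyword, queue in ESCALATION_KEYWORDS.items():
        if keyword in lowered:
            return queue
    return "agent"
```

A real deployment would pair this with the model-side escalation tool rather than replace it: the keyword router catches the obvious cases cheaply and deterministically, and the agent's `create_human_handoff` tool covers cases the keywords miss.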

If you want this agent to survive real payments traffic, keep it narrow. The best version of a support agent in this domain is not the most talkative one; it is the one that retrieves policy accurately, calls trusted systems for live facts, refuses unsafe requests, and leaves an auditable trail behind every answer.



By Cyprian Aarons, AI Consultant at Topiax.
