How to Build a Customer Support Agent for Payments Using LangChain in TypeScript
A customer support agent for payments answers account, refund, chargeback, failed transfer, and settlement questions without exposing sensitive data or inventing policy. It matters because payment support is high-volume, time-sensitive, and regulated; the agent has to be accurate, auditable, and strict about what it can and cannot do.
Architecture
- **Chat model**
  - Use a provider-backed model through LangChain’s `ChatOpenAI` or equivalent.
  - Keep temperature low so answers stay deterministic for policy-heavy flows.
- **Payment support tools**
  - Add tools for transaction lookup, refund eligibility checks, dispute status, and merchant policy retrieval.
  - Tools should call internal APIs, never raw databases from the agent layer.
- **Conversation memory**
  - Persist only safe context: case ID, masked transaction references, user intent, and resolution state.
  - Avoid storing PANs, CVVs, bank account numbers, or full cardholder data.
- **Policy and compliance layer**
  - Enforce PCI DSS boundaries, region restrictions, and audit logging before any tool call.
  - The agent should refuse requests that require prohibited access or unsupported actions.
- **RAG knowledge source**
  - Index payment support docs: refund SLAs, chargeback timelines, settlement windows, KYC rules.
  - Use retrieval for policy answers instead of prompting the model to guess.
- **Human handoff**
  - Escalate disputes, fraud claims, legal complaints, and identity verification failures to a human queue.
  - The agent should create a structured summary for the support rep.
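The structured handoff summary can be sketched as a typed object built from safe context only; the field names and masking rule here are illustrative assumptions, not a fixed schema:

```typescript
// Illustrative shape of the handoff summary the agent produces for a rep.
// Field names and categories are assumptions, not a standard schema.
interface HandoffSummary {
  caseId: string;
  issueCategory: "dispute" | "fraud_claim" | "legal" | "id_verification" | "other";
  maskedTransactionRef: string; // e.g. "TXN_***3456", never a raw identifier
  userIntent: string;           // one-sentence statement of what the customer wants
  resolutionState: "unresolved" | "partially_resolved";
  confidence: number;           // agent's self-reported confidence in [0, 1]
}

// Build the summary from safe conversation context only; no PANs, no CVVs.
function buildHandoff(caseId: string, txRef: string, intent: string): HandoffSummary {
  return {
    caseId,
    issueCategory: "dispute",
    // Keep a recognizable prefix and suffix, mask the middle.
    maskedTransactionRef: txRef.slice(0, 4) + "***" + txRef.slice(-4),
    userIntent: intent,
    resolutionState: "unresolved",
    confidence: 0.4,
  };
}
```

A support rep can act on this object without ever seeing raw identifiers, which keeps the handoff inside the same compliance boundary as the agent itself.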
Implementation
**Install the LangChain packages you actually need**

For a TypeScript service, keep the surface area small. You need the core package plus your model provider and a validator if you want stricter tool inputs.

```bash
npm install langchain @langchain/core @langchain/openai zod
```
**Define payment-safe tools**

The key pattern is: validate input, call an internal API, return only masked output. Do not let the model free-form call your backend.

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const lookupTransaction = tool(
  async ({ transactionId }: { transactionId: string }) => {
    const res = await fetch(
      `https://payments.internal/api/transactions/${transactionId}`,
      {
        headers: {
          Authorization: `Bearer ${process.env.PAYMENTS_API_TOKEN}`,
        },
      }
    );
    if (!res.ok) {
      throw new Error(`Transaction lookup failed: ${res.status}`);
    }
    const tx = await res.json();
    // Return only masked, non-sensitive fields to the model.
    return JSON.stringify({
      transactionId: tx.id,
      status: tx.status,
      amount: tx.amount,
      currency: tx.currency,
      maskedCard: tx.maskedCard,
      createdAt: tx.createdAt,
    });
  },
  {
    name: "lookup_transaction",
    description: "Look up a payment transaction by transaction ID.",
    schema: z.object({
      transactionId: z.string().min(6),
    }),
  }
);
```
**Build the agent with LangChain’s actual executor pattern**

For tool-using support flows in TypeScript, `createToolCallingAgent` plus `AgentExecutor` is the cleanest path. Keep the system prompt narrow and explicit about refusal behavior.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const refundPolicyTool = tool(
  async ({ country }: { country: string }) => {
    return JSON.stringify({
      country,
      refundWindowDays: country === "US" ? 30 : 14,
      notes: "Refunds require settled funds and no active dispute.",
    });
  },
  {
    name: "get_refund_policy",
    description: "Fetch refund policy rules by country.",
    schema: z.object({
      country: z.string().min(2).max(2),
    }),
  }
);

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
  apiKey: process.env.OPENAI_API_KEY,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    [
      "You are a payments customer support agent.",
      "Never request or reveal full card numbers, CVV, PINs, or bank credentials.",
      "Use tools for transaction status and policy checks.",
      "If a request involves fraud investigation, chargeback filing, legal escalation, or identity verification failure, hand off to a human.",
      "Answer concisely and include next steps.",
    ].join(" "),
  ],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const tools = [lookupTransaction, refundPolicyTool];

const agent = await createToolCallingAgent({
  llm: model,
  tools,
  prompt,
});

export const executor = new AgentExecutor({
  agent,
  tools,
  verbose: false,
});

async function main() {
  const result = await executor.invoke({
    input:
      "My transfer failed yesterday. Check transaction TXN_123456 and tell me whether I can get a refund in the US.",
  });
  console.log(result.output);
}

main().catch(console.error);
```
**Add a retrieval layer for policy answers**

Payment support changes often. Put policies in a retriever-backed store so the agent does not depend on stale prompt text. Use this for FAQs like settlement timing or payout holds.

A practical pattern is:

- Ingest PDFs or markdown into embeddings.
- Retrieve the top-k chunks.
- Answer with citations or source IDs.
- Block responses when no source supports the claim.
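The last two steps of that pattern can be sketched as a gating function; the `RetrievedChunk` shape and the score threshold are assumptions about what your vector store returns, not a LangChain API:

```typescript
// Hypothetical shape of a chunk returned by a vector-store retriever.
interface RetrievedChunk {
  sourceId: string; // e.g. "refund-policy-eu.md#section-2"
  text: string;
  score: number;    // similarity score in [0, 1]
}

const MIN_SUPPORT_SCORE = 0.75; // assumed threshold; tune against your corpus

// Only answer when at least one retrieved chunk clears the threshold;
// otherwise refuse instead of letting the model guess policy.
function buildPolicyAnswer(
  question: string,
  chunks: RetrievedChunk[]
): { answer: string; sources: string[] } | { refusal: string } {
  const supported = chunks.filter((c) => c.score >= MIN_SUPPORT_SCORE);
  if (supported.length === 0) {
    return {
      refusal:
        "I can't find a documented policy for that. Escalating to a support rep.",
    };
  }
  const context = supported.map((c) => c.text).join("\n");
  return {
    // In the real flow this context is passed to the model as grounding text.
    answer: `Based on policy: ${context}`,
    sources: supported.map((c) => c.sourceId),
  };
}
```

The refusal branch is the important part: in a regulated workflow, "no supported answer" must be a first-class outcome, not a fallback to free generation.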
Production Considerations
- **Audit every decision path**
  - Log tool calls with request IDs, case IDs, timestamps, masked identifiers, and the final action taken.
  - Store enough detail to reconstruct why the agent answered a certain way during compliance review.
- **Enforce data residency**
  - Keep customer data processing inside approved regions.
  - If your payment stack is EU-only or APAC-only for certain merchants, route both retrieval and model calls accordingly.
- **Add hard guardrails**
  - Reject prompts asking for card numbers, CVV recovery, account takeover help, or bypassing verification.
  - Apply deterministic rules before the LLM sees the request whenever possible.
- **Monitor business metrics**
  - Track containment rate, escalation rate, incorrect refund guidance rate, and average time to resolution.
  - For payments specifically, monitor false positives on fraud-related handoffs, because they create operational noise fast.
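The deterministic-rules guardrail can be sketched as a pre-filter that runs before any model call; the patterns and category names below are illustrative assumptions, not a complete compliance rule set:

```typescript
// Runs before the LLM sees the request; returns a refusal category or null.
// Patterns are illustrative examples, not an exhaustive rule set.
const BLOCKED_PATTERNS: Array<{ category: string; pattern: RegExp }> = [
  { category: "card_number_request", pattern: /\b(?:full|entire)\s+card\s+number\b/i },
  { category: "cvv_recovery", pattern: /\b(?:recover|retrieve|tell me).{0,20}\bcvv\b/i },
  { category: "pan_in_message", pattern: /\b\d{13,19}\b/ }, // looks like a raw PAN
  { category: "bypass_verification", pattern: /\bbypass\b.{0,30}\bverification\b/i },
];

function preFilter(userMessage: string): string | null {
  for (const rule of BLOCKED_PATTERNS) {
    if (rule.pattern.test(userMessage)) {
      return rule.category; // caller logs this and returns a canned refusal
    }
  }
  return null; // safe to forward to the agent
}
```

Because this runs before the model, a blocked request costs nothing, produces a deterministic audit entry, and never depends on the LLM refusing correctly.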
Common Pitfalls
- **Letting the model talk directly to payment systems.** This is how you end up with unsafe side effects. Always wrap internal APIs in narrow tools with schema validation and authorization checks.
- **Storing sensitive payment data in memory or logs.** Do not persist full PANs, CVVs, card-security auth codes, or raw bank credentials. Mask identifiers at ingestion and redact logs before they leave the service boundary.
- **Answering policy questions without sources.** Refund windows and dispute timelines vary by region and merchant contract. Back those answers with retrieval or explicit rules; otherwise you will ship confident nonsense into a regulated workflow.
- **Skipping human escalation design.** A payments support agent is not supposed to resolve everything autonomously. Build a clean handoff object with summary text, masked transaction references, issue category, and confidence so an operator can take over fast.
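The log-redaction rule from the second pitfall can be sketched as a coarse PAN filter; real systems rely on tokenization upstream, so treat this regex as a last-line illustration, not a compliance control:

```typescript
// Redact anything that looks like a PAN (13-19 digits) before logging.
// Coarse illustration only; production stacks tokenize card data upstream.
function redactPan(text: string): string {
  return text.replace(/\b(\d{2})\d{7,13}(\d{4})\b/g, (_m, head, tail) => {
    // Keep the first two and last four digits, mask the middle.
    return `${head}${"*".repeat(8)}${tail}`;
  });
}
```

Run this at the service boundary so nothing downstream (log aggregators, traces, analytics) ever receives a raw card number, even when a customer pastes one into chat.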
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.