How to Build a Customer Support Agent Using LangGraph in TypeScript for Investment Banking
A customer support agent for investment banking handles client requests like account access issues, trade status questions, fee explanations, and document retrieval without exposing sensitive data or breaking compliance rules. This matters because in this environment every response has to be auditable, policy-aware, and fast enough to relieve pressure on human support teams.
Architecture
- Ingress layer
  - Receives requests from chat, email, or internal client portals.
  - Normalizes the input into a typed state object before it enters the graph.
- Intent router
  - Classifies the request into support buckets like trade_status, statement_request, fees, onboarding, or escalation.
  - Sends regulated or ambiguous cases to a human reviewer.
- Policy and compliance guard
  - Checks for restricted content: MNPI, account numbers, trade instructions, KYC data, and jurisdiction-specific restrictions.
  - Blocks unsafe responses and forces safe templates.
- Knowledge retrieval tool
  - Pulls approved answers from a controlled knowledge base.
  - Only retrieves content from approved sources with audit metadata.
- Response composer
  - Generates a concise answer using the retrieved context.
  - Applies tone controls and disclosure language required by the bank.
- Audit and observability layer
  - Logs every decision: classification, tool calls, retrieval sources, final response, and escalation reason.
  - Stores logs in a region that matches data residency requirements.
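The ingress layer's normalization step can be sketched as follows. The `NormalizedRequest` shape and `normalize` helper are illustrative assumptions, not part of LangGraph; the point is that every channel is collapsed into one typed object before routing.

```typescript
// Hypothetical ingress normalization: collapse chat, email, and portal
// payloads into one typed shape before the graph sees them.
type Channel = "chat" | "email" | "portal";

type NormalizedRequest = {
  message: string;    // free-text client request
  channel: Channel;   // originating channel, kept for the audit trail
  receivedAt: string; // ISO timestamp
};

function normalize(
  raw: { body?: string; text?: string },
  channel: Channel,
): NormalizedRequest {
  // Channels name the text field differently; take whichever is present.
  const message = (raw.body ?? raw.text ?? "").trim();
  return { message, channel, receivedAt: new Date().toISOString() };
}
```

A portal payload carrying `body` and a chat payload carrying `text` both come out as the same `NormalizedRequest`, which is what lets the downstream graph stay typed.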
Implementation
1) Define the agent state and dependencies
Use a typed LangGraph state so every node knows what it can read and write. For investment banking support, keep the state small and explicit so you can audit every transition later.
```typescript
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

// Closed set of support buckets the router can emit.
type SupportIntent =
  | "trade_status"
  | "statement_request"
  | "fees"
  | "onboarding"
  | "escalation";

// Typed graph state: every node reads from and writes to these channels.
const SupportStateAnnotation = Annotation.Root({
  message: Annotation<string>(),
  intent: Annotation<SupportIntent | undefined>(),
  riskFlag: Annotation<boolean | undefined>(),
  retrievedContext: Annotation<string | undefined>(),
  response: Annotation<string | undefined>(),
});
```
2) Add router, guardrail, retrieval, and response nodes
This is the core pattern: classify first, block risky content early, retrieve only from approved sources, then compose the final answer. In production you would replace the mock logic with your classifier and internal knowledge service.
```typescript
// Deterministic keyword router. In production, replace with your classifier.
const classifyIntent = async (state: typeof SupportStateAnnotation.State) => {
  const text = state.message.toLowerCase();
  let intent: SupportIntent = "escalation";
  if (text.includes("fee") || text.includes("charge")) intent = "fees";
  else if (text.includes("statement")) intent = "statement_request";
  else if (text.includes("trade") || text.includes("fill")) intent = "trade_status";
  else if (text.includes("onboard") || text.includes("kyc")) intent = "onboarding";
  return { intent };
};

// Pattern-based guard: flags messages that touch restricted content.
const complianceGuard = async (state: typeof SupportStateAnnotation.State) => {
  const blockedPatterns = [
    /\b\d{4}-\d{4}-\d{4}-\d{4}\b/, // card-like numbers
    /\baccount number\b/i,
    /\bmnpi\b/i,
    /\binside information\b/i,
    /\bbuy\s+\d+\s+shares\b/i,
    /\bsell\s+\d+\s+shares\b/i,
    /\bwire funds\b/i,
  ];
  const riskFlag = blockedPatterns.some((re) => re.test(state.message));
  return { riskFlag };
};

// Mock knowledge base keyed by intent; swap in your internal service.
const retrieveApprovedAnswer = async (state: typeof SupportStateAnnotation.State) => {
  const kbByIntent: Record<SupportIntent, string> = {
    trade_status: "Trade status is available only through authorized channels after client authentication.",
    statement_request: "Statements can be downloaded from the secure portal after MFA verification.",
    fees: "Fee schedules are disclosed in the client agreement and product terms.",
    onboarding: "KYC onboarding requires identity verification and source-of-funds documentation.",
    escalation: "This request requires human review due to policy or ambiguity.",
  };
  return { retrievedContext: kbByIntent[state.intent ?? "escalation"] };
};

// Composes the final reply; risky requests get a fixed escalation template.
const composeResponse = async (state: typeof SupportStateAnnotation.State) => {
  const response = state.riskFlag
    ? "I can’t help with that request here. I’m escalating this to a licensed support specialist."
    : `${state.retrievedContext} If you want, I can route this to the right support queue.`;
  return { response };
};
```
3) Wire the graph with conditional routing
LangGraph’s StateGraph gives you deterministic control over branching. That is what you want in regulated support flows because you need predictable behavior when compliance flags fire.
```typescript
const graph = new StateGraph(SupportStateAnnotation)
  .addNode("classifyIntent", classifyIntent)
  .addNode("complianceGuard", complianceGuard)
  .addNode("retrieveApprovedAnswer", retrieveApprovedAnswer)
  .addNode("composeResponse", composeResponse)
  .addEdge(START, "classifyIntent")
  .addEdge("classifyIntent", "complianceGuard")
  .addConditionalEdges("complianceGuard", (state) =>
    // Skip retrieval entirely when the guard fires; composeResponse then
    // returns the fixed escalation template instead of an answer. (Routing
    // straight to END here would leave the client with no response at all.)
    state.riskFlag ? "composeResponse" : "retrieveApprovedAnswer"
  )
  .addEdge("retrieveApprovedAnswer", "composeResponse")
  .addEdge("composeResponse", END);

const app = graph.compile();
```
4) Invoke the agent and persist audit logs
For investment banking support, never treat invocation as just an LLM call. Capture input, classification result, retrieval source, final output, timestamp, user region, and escalation reason in your audit store.
```typescript
async function run() {
  const result = await app.invoke({
    message: "What is my trade status for yesterday's equity order?",
  });
  // In production, also persist these fields to your audit store.
  console.log({
    intent: result.intent,
    riskFlag: result.riskFlag,
    response: result.response,
  });
}

run().catch(console.error);
```
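One way to capture those fields is to wrap invocation in a small audit helper. `AuditRecord`, `writeAuditLog`, and `runWithAudit` are placeholder names, and the console sink is a stand-in for your bank's append-only audit store.

```typescript
import { randomUUID } from "node:crypto";

// Shape of the agent output we audit; mirrors the graph state above.
type AgentResult = { intent?: string; riskFlag?: boolean; response?: string };

// Hypothetical audit record covering the fields named in this section.
type AuditRecord = {
  correlationId: string;
  input: string;
  intent?: string;
  riskFlag?: boolean;
  response?: string;
  userRegion: string;
  escalationReason?: string;
  timestamp: string;
};

// Placeholder sink; in production, write to a region-pinned audit store.
async function writeAuditLog(record: AuditRecord): Promise<void> {
  console.log(JSON.stringify(record));
}

async function runWithAudit(
  invoke: (input: { message: string }) => Promise<AgentResult>,
  message: string,
  userRegion: string,
): Promise<AuditRecord> {
  const result = await invoke({ message });
  const record: AuditRecord = {
    correlationId: randomUUID(),
    input: message,
    intent: result.intent,
    riskFlag: result.riskFlag,
    response: result.response,
    userRegion,
    escalationReason: result.riskFlag ? "compliance_guard" : undefined,
    timestamp: new Date().toISOString(),
  };
  await writeAuditLog(record);
  return record;
}
```

Passing the compiled graph's invoke function as the first argument then yields one persisted record per request, with the escalation reason filled in whenever the compliance guard fired.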
Production Considerations
- Data residency: Weigh where LangGraph runs against where client data lives. If your bank has EU clients or local regulatory constraints, keep execution and logs inside approved regions only.
- Auditability: Store every node transition with correlation IDs. Regulators will care less about model cleverness and more about whether you can reconstruct why a statement was returned or an escalation was triggered.
- Human handoff: Build an explicit escalation path for anything involving MNPI, trade instructions, complaints with legal exposure, or identity verification failures. The graph should end quickly on those paths instead of trying to be helpful.
- Guardrails: Use allowlisted knowledge sources only. Never let the agent answer from raw emails or unvetted documents without a policy layer that strips sensitive fields first.
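The correlation-ID idea can be sketched as a generic node wrapper. `withTransitionLog` and its console sink are assumptions for illustration; any LangGraph node function can be wrapped this way before it is registered with `addNode`.

```typescript
// A node takes the current state and returns a partial state update.
type SupportNode<S> = (state: S) => Promise<Partial<S>>;

// Hypothetical wrapper: records every node transition under one
// correlation ID so a full request can be reconstructed later.
function withTransitionLog<S>(
  name: string,
  node: SupportNode<S>,
  correlationId: string,
): SupportNode<S> {
  return async (state: S) => {
    const update = await node(state);
    // In production, ship this to a region-pinned log store instead.
    console.log(JSON.stringify({ correlationId, node: name, update }));
    return update;
  };
}
```

Wrapping each node at registration time, e.g. `.addNode("classifyIntent", withTransitionLog("classifyIntent", classifyIntent, id))`, gives you one log line per transition without touching node logic.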
Common Pitfalls
- Letting the model classify before policy checks: If you route first on free-form generation, you will eventually leak restricted content into downstream nodes. Always run deterministic compliance checks before any answer composition.
- Using one generic knowledge base for everything: Investment banking support needs source separation by product line and jurisdiction. A retail-style KB will produce wrong answers for prime brokerage, custody, derivatives ops, or capital markets workflows.
- Skipping audit metadata on retrieval: If you cannot show which document produced an answer and when it was last approved, your support bot becomes hard to defend internally. Attach document IDs, approval timestamps, and region tags to every retrieved snippet.
- Treating escalation as failure instead of design: In this domain, escalation is part of correctness. A good LangGraph agent knows when not to answer and routes cleanly to a licensed human with full context attached.
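The retrieval-metadata pitfall suggests a concrete shape for retrieved snippets. `ApprovedSnippet` and `isApprovalFresh` are illustrative names; the idea is that a snippet without a document ID, approval timestamp, and region tag never reaches the composer.

```typescript
// Hypothetical audited snippet: every retrieved answer carries its source.
type ApprovedSnippet = {
  text: string;
  documentId: string; // source document in the controlled KB
  approvedAt: string; // last compliance approval, ISO timestamp
  regionTag: string;  // jurisdiction the content is approved for
};

// Reject snippets whose approval is stale, forcing re-review of old content.
function isApprovalFresh(snippet: ApprovedSnippet, maxAgeDays: number): boolean {
  const ageMs = Date.now() - new Date(snippet.approvedAt).getTime();
  return ageMs <= maxAgeDays * 24 * 60 * 60 * 1000;
}
```

A retrieval node can then filter with `isApprovalFresh(snippet, 90)` and escalate instead of answering when nothing fresh survives, which keeps stale guidance out of client responses.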
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.