How to Build a Policy Q&A Agent for Wealth Management Using LangGraph in TypeScript
A policy Q&A agent for wealth management answers client and advisor questions against internal policy, product, and compliance documents. Getting this right matters because most failures are not “wrong answers” in the abstract; they are suitability issues, inconsistent disclosures, or responses that create regulatory risk.
Architecture
Build this agent with a narrow, auditable pipeline:
- User input layer
  - Accepts questions from advisors or client-service staff.
  - Captures metadata like jurisdiction, client segment, and channel.
- Policy retrieval layer
  - Pulls from approved sources only: product sheets, fee schedules, suitability rules, escalation playbooks.
  - Uses vector search or keyword search with document-level citations.
- LangGraph orchestration
  - Controls the flow: classify question, retrieve evidence, draft answer, validate guardrails, return final response.
  - Keeps state explicit so every step is inspectable.
- Compliance validator
  - Checks for restricted language, missing disclosures, jurisdiction mismatches, and advice-like phrasing.
  - Can force escalation instead of answering.
- Audit logger
  - Stores question, retrieved sources, model output, validation result, and final answer (see the sketch after this list).
  - Needed for supervision review and incident analysis.
- Response formatter
  - Produces concise answers with citations and next-step instructions.
  - Keeps the agent from sounding like it is giving personalized investment advice.
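The audit logger is the one component that does not appear in the implementation walkthrough below. As a rough sketch, assuming a relational or log store of your choice (the type and function names here are illustrative, not part of LangGraph), the record it persists per run could look like this:

```typescript
// Illustrative audit record shape; adapt field names and storage to your firm's systems of record.
export type AuditRecord = {
  runId: string;
  timestamp: string;
  question: string;
  jurisdiction: string;
  retrievedDocIds: string[];
  modelVersion: string;
  complianceFlagged: boolean;
  finalAnswer: string;
};

// Placeholder writer: swap the console call for your database or log pipeline.
export async function logRun(record: AuditRecord): Promise<void> {
  console.log(JSON.stringify(record));
}
```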
Implementation
1) Define the graph state and node contracts
Use a typed state object so every node knows what it can read and write. In wealth management systems, that matters because you need to preserve evidence and decision traces end to end.
```typescript
import { Annotation } from "@langchain/langgraph";

export type PolicyDoc = {
  id: string;
  title: string;
  content: string;
  sourceUrl?: string;
};

export const QAStateAnnotation = Annotation.Root({
  question: Annotation<string>(),
  jurisdiction: Annotation<string>(),
  clientSegment: Annotation<string>(),
  // Accumulate evidence across retrieval steps instead of overwriting it.
  retrievedDocs: Annotation<PolicyDoc[]>({
    default: () => [],
    reducer: (left, right) => [...left, ...right],
  }),
  draftAnswer: Annotation<string | null>({
    default: () => null,
    reducer: (_, right) => right,
  }),
  complianceFlagged: Annotation<boolean>({
    default: () => false,
    reducer: (_, right) => right,
  }),
});

export type QAState = typeof QAStateAnnotation.State;
```
2) Add retrieval and compliance nodes
Keep retrieval deterministic enough to audit. Then add a validator that blocks answers when the question crosses into advice or conflicts with policy.
```typescript
import { StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function retrievePolicies(state: QAState): Promise<Partial<QAState>> {
  // fakePolicySearch stands in for your approved policy index
  // (a minimal deterministic sketch follows this block).
  const docs = await fakePolicySearch(state.question, state.jurisdiction);
  return { retrievedDocs: docs };
}

async function draftAnswer(state: QAState): Promise<Partial<QAState>> {
  const context = state.retrievedDocs
    .map((d) => `TITLE: ${d.title}\nCONTENT: ${d.content}`)
    .join("\n\n");

  const prompt = `
You answer wealth management policy questions using only the provided policy context.
If the answer is not fully supported, say you need escalation.

Question: ${state.question}
Jurisdiction: ${state.jurisdiction}
Client segment: ${state.clientSegment}

Policy context:
${context}
`;

  const response = await llm.invoke(prompt);
  return { draftAnswer: String(response.content) };
}

async function validateCompliance(state: QAState): Promise<Partial<QAState>> {
  const text = state.draftAnswer ?? "";
  // Block advice-like phrasing and any answer that has no supporting evidence.
  const blocked =
    /guaranteed returns|best investment|should buy|should sell/i.test(text) ||
    state.retrievedDocs.length === 0;
  return { complianceFlagged: blocked };
}
```
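The code above calls fakePolicySearch without defining it. As a minimal, deterministic stand-in (the in-memory document list and matching logic are illustrative assumptions, not a real retrieval layer), it could be a keyword filter over an approved-document allowlist:

```typescript
// Illustrative stand-in for an approved policy index; replace with your firm's
// vector or keyword search over vetted documents only.
const APPROVED_DOCS: PolicyDoc[] = [
  {
    id: "fees-001",
    title: "Advisory Fee Schedule (CA-ON)",
    content: "Fee waivers for legacy clients require branch manager approval...",
  },
];

async function fakePolicySearch(
  question: string,
  jurisdiction: string
): Promise<PolicyDoc[]> {
  const terms = question.toLowerCase().split(/\s+/);
  return APPROVED_DOCS.filter(
    (doc) =>
      doc.title.includes(jurisdiction) &&
      terms.some((t) => t.length > 3 && doc.content.toLowerCase().includes(t))
  );
}
```

Keeping this layer simple and source-restricted is what makes retrieval auditable: given the same question and jurisdiction, you get the same evidence.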
3) Wire the graph with conditional routing
This is where LangGraph helps more than a single prompt chain. The graph can stop on validation failure and route to an escalation path instead of forcing an answer.
```typescript
const graph = new StateGraph(QAStateAnnotation)
  .addNode("retrievePolicies", retrievePolicies)
  .addNode("draftAnswer", draftAnswer)
  .addNode("validateCompliance", validateCompliance)
  .addNode("escalate", async () => ({
    draftAnswer:
      "I can’t provide a policy-grounded answer for this request. Please route it to Compliance or a licensed advisor.",
    complianceFlagged: true,
  }));

graph.addEdge(START, "retrievePolicies");
graph.addEdge("retrievePolicies", "draftAnswer");
graph.addEdge("draftAnswer", "validateCompliance");

// Stop and hand off instead of forcing an answer when validation fails.
graph.addConditionalEdges("validateCompliance", (state) =>
  state.complianceFlagged ? "escalate" : END
);
graph.addEdge("escalate", END);

export const app = graph.compile();
```
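If you also want LangGraph itself to snapshot state between nodes so a run can be replayed later, you can optionally compile with a checkpointer. A sketch using the in-memory MemorySaver (you would swap in a durable checkpoint store in production, and the thread id shown is illustrative):

```typescript
import { MemorySaver } from "@langchain/langgraph";

// Optional: persist per-step state snapshots so each run can be inspected after the fact.
const checkpointer = new MemorySaver();
export const auditableApp = graph.compile({ checkpointer });

// Checkpointed runs need a thread_id so snapshots are grouped per request:
// await auditableApp.invoke(input, { configurable: { thread_id: "run-123" } });
```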
4) Invoke with audit-friendly inputs
Pass in the minimum necessary personal data. For wealth management workflows, keep client PII out of prompts unless you have a documented reason and residency controls in place.
```typescript
const result = await app.invoke({
  question: "Can we waive advisory fees for legacy clients in Ontario?",
  jurisdiction: "CA-ON",
  clientSegment: "legacy", // segment label only; no client identifiers
});

console.log(result.draftAnswer);
console.log(result.retrievedDocs.map((d) => d.id));
```
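The response formatter from the architecture section is not shown above. A minimal sketch (the function name and citation layout are illustrative) that appends document citations and a next-step line before the answer goes back to the advisor:

```typescript
// Illustrative formatter: always cite sources and avoid advice-like framing.
function formatResponse(answer: string, docs: PolicyDoc[]): string {
  const citations = docs.map((d) => `- ${d.title} (${d.id})`).join("\n");
  return [
    answer,
    citations ? `Sources:\n${citations}` : "Sources: none (escalate to Compliance).",
    "Next step: confirm with your supervisor before acting on this policy.",
  ].join("\n\n");
}

console.log(formatResponse(result.draftAnswer ?? "", result.retrievedDocs));
```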
Production Considerations
- Data residency
  - Keep policy indexes and logs in-region if your firm has Canadian, UK, EU, or US segmentation requirements.
  - Do not send client identifiers to external model providers unless your legal and security teams have approved that path.
- Auditability
  - Store every run with graph state snapshots:
    - question
    - retrieved document IDs
    - model version
    - validation result
    - final response
  - This is what lets you reconstruct why the agent answered or escalated.
- Guardrails
  - Block language that looks like personalized advice (see the configurable sketch after the monitoring table):
    - “you should buy”
    - “this is suitable”
    - “guaranteed”
    - “safe returns”
  - Require citations for any policy claim. If no citation exists, escalate.
- Monitoring
| Signal | Why it matters | Action |
|---|---|---|
| Escalation rate | High rates may mean weak retrieval or outdated policies | Review source coverage |
| Citation coverage | Low coverage means unsupported answers | Tighten retrieval thresholds |
| Jurisdiction mismatches | Dangerous in regulated advice workflows | Block cross-region responses |
| Human override rate | Shows where policy logic fails | Update guardrails and prompts |
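As noted under Guardrails above, the blocked-phrase list is easier for compliance teams to maintain as configuration than as a hard-coded regex. A minimal sketch (phrase list and function name are illustrative) that also enforces the citation requirement:

```typescript
// Illustrative guardrail check: configurable phrase list plus a citation requirement.
const BLOCKED_PHRASES = [
  "you should buy",
  "this is suitable",
  "guaranteed",
  "safe returns",
];

function violatesGuardrails(answer: string, docs: PolicyDoc[]): boolean {
  const text = answer.toLowerCase();
  const hasBlockedPhrase = BLOCKED_PHRASES.some((p) => text.includes(p));
  const lacksCitations = docs.length === 0;
  return hasBlockedPhrase || lacksCitations;
}
```

A check like this could replace the inline regex in validateCompliance, so the phrase list can be updated without touching the graph code.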
Common Pitfalls
- Using a generic chatbot prompt instead of explicit graph state
  That makes audits painful. Use LangGraph state so you can see exactly what was retrieved, drafted, and blocked.
- Letting retrieval pull from unapproved content
  Wealth management policies change often. Restrict sources to approved documents only; otherwise the agent will confidently quote stale or non-compliant material.
- Treating compliance as a post-processing regex filter only
  Regex catches obvious bad phrases but misses context. Combine rules with a validation node that can force escalation when evidence is weak or jurisdictional rules are unclear.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.