How to Build a Policy Q&A Agent Using LangGraph in TypeScript for Banking
A policy Q&A agent answers questions like “Can I waive this fee?”, “What’s the chargeback window?”, or “Is this customer eligible for a fee reversal?” by retrieving the right policy, checking the request against internal rules, and returning a grounded answer. In banking, that matters because policy drift, inconsistent responses, and weak audit trails create compliance risk fast.
Architecture
- Ingress API
  - Receives user questions from an internal portal, CRM, or ops tool.
  - Passes along metadata such as user role, region, product line, and case ID.
- Policy Retriever
  - Pulls relevant policy snippets from a controlled knowledge base.
  - Uses vector search plus metadata filters for jurisdiction, product, and effective date.
- Decision Node
  - Evaluates whether the question is answerable from policy alone.
  - Routes to either a grounded answer path or a fallback/escalation path.
- Response Generator
  - Produces concise answers with citations to policy sections.
  - Must avoid inventing exceptions or legal interpretations.
- Audit Logger
  - Stores the question, retrieved sources, decision path, model output, and timestamps.
  - Required for compliance review and post-incident analysis.
- Guardrail Layer
  - Enforces redaction, disallowed topics, and confidence thresholds.
  - Blocks answers that could expose regulated data or unsupported advice.
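The Audit Logger's record can stay flat and append-only. A minimal sketch of one possible shape, covering the fields listed above (the type and field names are illustrative, not a fixed standard):

```typescript
// One append-only record per graph run; field names are illustrative.
type AuditRecord = {
  caseId: string;
  question: string;
  userRole: string;
  region: string;
  retrievedDocIds: string[]; // which policy snippets were shown to the model
  decisionPath: "answer" | "escalate";
  answer: string;
  modelVersion: string;
  timestamp: string; // ISO 8601
};

// Serialize one record per line so it can go straight to an append-only log sink.
function toAuditLine(rec: AuditRecord): string {
  return JSON.stringify(rec);
}
```

Keeping the record flat makes it trivial to replay a run during a compliance review: every field maps back to one node in the graph.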
Implementation
1) Define the graph state and basic nodes
Use LangGraph’s StateGraph with a typed state object. Keep the state small and explicit; banking systems need traceability more than clever abstractions.
```typescript
import { StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

type PolicyDoc = {
  id: string;
  title: string;
  section: string;
  text: string;
};

type AgentState = {
  question: string;
  userRole: "agent" | "supervisor" | "compliance";
  region: string;
  retrievedDocs: PolicyDoc[];
  answer?: string;
  needsEscalation?: boolean;
};

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function retrievePolicies(state: AgentState): Promise<Partial<AgentState>> {
  // Replace with vector DB + metadata filters in production
  const docs: PolicyDoc[] = [
    {
      id: "card-chargeback-01",
      title: "Chargeback Policy",
      section: "2.3",
      text: "Chargebacks must be filed within 120 days of transaction posting date.",
    },
    {
      id: "fee-waiver-04",
      title: "Fee Waiver Policy",
      section: "4.1",
      text: "Fee waivers require supervisor approval unless customer is premium tier.",
    },
  ];
  return { retrievedDocs: docs };
}
```
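When you swap the stub for a real vector store, the metadata filter itself can be a pure predicate that is easy to unit-test in isolation. A sketch, assuming policy chunks are tagged with region, product, and effective/retired dates (these field names are our assumption, not a standard schema):

```typescript
// Metadata we assume is attached to each indexed policy chunk.
type PolicyMeta = {
  region: string;        // e.g. "US", "EU"
  product: string;       // e.g. "card", "deposit"
  effectiveDate: string; // ISO date the policy took effect
  retiredDate?: string;  // ISO date it stopped applying, if any
};

// Keep only policies in force for this region/product on the given date.
// ISO date strings compare correctly as plain strings.
function policyInScope(
  meta: PolicyMeta,
  region: string,
  product: string,
  asOf: string
): boolean {
  return (
    meta.region === region &&
    meta.product === product &&
    meta.effectiveDate <= asOf &&
    (meta.retiredDate === undefined || meta.retiredDate > asOf)
  );
}
```

A predicate like this can back the store's filter option (for example, LangChain's `MemoryVectorStore.similaritySearch` accepts a per-document filter callback), so the retrieval node stays thin.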
2) Add a routing node that decides whether to answer or escalate
For banking workflows, don’t let every query hit the model. A simple decision gate catches unsupported requests before they become bad advice.
```typescript
const routeSchema = z.object({
  needsEscalation: z.boolean(),
  reason: z.string(),
});

async function routeQuestion(
  state: AgentState
): Promise<Partial<AgentState>> {
  const prompt = `
You are routing a banking policy question.
Decide whether it needs escalation and explain why.

Question: ${state.question}

Policies:
${state.retrievedDocs.map((d) => `- ${d.title} ${d.section}: ${d.text}`).join("\n")}
`;
  // Structured output avoids brittle JSON.parse on raw model text
  const parsed = await llm.withStructuredOutput(routeSchema).invoke(prompt);
  return { needsEscalation: parsed.needsEscalation };
}
```
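One more guard worth adding around the router: if the call throws or returns something unusable, fail closed to escalation rather than answering anyway. A small wrapper sketch (the `safeRoute` name is ours, not a LangGraph API):

```typescript
type RouteResult = { needsEscalation?: boolean };

// Wrap any routing function so that failures become escalations,
// never silent answers.
async function safeRoute<S>(
  route: (s: S) => Promise<RouteResult>,
  state: S
): Promise<RouteResult> {
  try {
    return await route(state);
  } catch {
    // Fail closed: a broken router must not let a question through.
    return { needsEscalation: true };
  }
}
```

In a regulated workflow, "fail closed" is the only defensible default: a model outage should produce more manual reviews, not unreviewed answers.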
3) Generate grounded answers with citations
The answer node should only use retrieved policy text. If the model cannot ground the answer in those snippets, escalate instead of guessing.
```typescript
async function generateAnswer(
  state: AgentState
): Promise<Partial<AgentState>> {
  const context = state.retrievedDocs
    .map((d) => `[${d.id}] ${d.title} ${d.section}: ${d.text}`)
    .join("\n");

  const prompt = `
Answer the banking policy question using only the provided policy text.
If the policy does not support an answer, say "I need escalation."

Question: ${state.question}

Policy context:
${context}

Return a short answer with citations in brackets like [card-chargeback-01].
`;
  const result = await llm.invoke(prompt);
  return { answer: result.content as string };
}

async function escalate(_state: AgentState): Promise<Partial<AgentState>> {
  return {
    answer:
      "I need escalation. This question requires manual review against current banking policy.",
    needsEscalation: true,
  };
}
```
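Since the prompt asks for bracketed citations, you can verify them before returning the answer: a response that cites nothing, or cites a document that was never retrieved, should go down the escalation path instead. A simple post-check (`citationsAreGrounded` is our helper, not part of LangGraph):

```typescript
// Extract [doc-id] citations and confirm every one refers to a retrieved document.
function citationsAreGrounded(answer: string, retrievedIds: string[]): boolean {
  const cited = [...answer.matchAll(/\[([^\]]+)\]/g)].map((m) => m[1]);
  return cited.length > 0 && cited.every((id) => retrievedIds.includes(id));
}
```

Wired in as a check after `generateAnswer`, this turns "the model hallucinated a policy section" from a silent failure into a routed escalation.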
4) Build and run the graph
This is the actual LangGraph pattern you want in production. Use conditional edges so routing stays deterministic and auditable.
```typescript
// Declaring channels tells LangGraph how to merge each state key between
// nodes; null means "last value wins", which is what we want here.
const graph = new StateGraph<AgentState>({
  channels: {
    question: null,
    userRole: null,
    region: null,
    retrievedDocs: null,
    answer: null,
    needsEscalation: null,
  },
})
  .addNode("retrievePolicies", retrievePolicies)
  .addNode("routeQuestion", routeQuestion)
  .addNode("generateAnswer", generateAnswer)
  .addNode("escalate", escalate)
  .addEdge(START, "retrievePolicies")
  .addEdge("retrievePolicies", "routeQuestion")
  .addConditionalEdges("routeQuestion", (state) =>
    state.needsEscalation ? "escalate" : "generateAnswer"
  )
  .addEdge("generateAnswer", END)
  .addEdge("escalate", END)
  .compile();

const result = await graph.invoke({
  question: "Can I waive this overdraft fee for a premium customer?",
  userRole: "agent",
  region: "US",
  retrievedDocs: [],
});

console.log(result.answer);
```
Production Considerations
- Auditability
  - Log every graph run with the input question, retrieved document IDs, routing outcome, final answer, and model version.
  - In banking audits, you need to show why a response was produced, not just what it said.
- Data residency
  - Keep retrieval indexes and logs in-region if policies are jurisdiction-specific.
  - If you serve EU or UK operations, don't ship raw customer data across borders just to call a hosted model.
- Guardrails
  - Redact account numbers, PANs, SSNs, and other regulated identifiers before LLM calls.
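As a starting point for that redaction step, naive regexes catch the obvious formats. These patterns are illustrative only; a real deployment should use a proper DLP or tokenization service, since patterns like these miss formatted PANs and can flag innocent digit runs:

```typescript
// Order matters: redact the specific SSN pattern before the broad digit runs.
function redactIdentifiers(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED-SSN]") // SSN like 123-45-6789
    .replace(/\b\d{13,19}\b/g, "[REDACTED-PAN]")         // bare card numbers
    .replace(/\b\d{8,12}\b/g, "[REDACTED-ACCT]");        // account-length digit runs
}
```

Run this over the question and any retrieved free text before the model call, and again over the final answer before it leaves the Guardrail Layer.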
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.