How to Build a Policy Q&A Agent Using LangChain in TypeScript for Retail Banking
A policy Q&A agent for retail banking answers customer-facing and internal policy questions from approved documents, without making staff or customers hunt through PDFs, SharePoint folders, or intranet pages. It matters because policy drift, inconsistent answers, and slow escalations create compliance risk, poor customer experience, and operational noise.
Architecture
- Document ingestion layer
  - Pull policy PDFs, FAQs, product terms, and procedure docs from approved sources.
  - Normalize them into text chunks with metadata like `policy_id`, `version`, `jurisdiction`, and `effective_date`.
- Embedding + vector store
  - Convert chunks into embeddings and store them in a retrieval index.
  - Use metadata filters so the agent only searches the right region, product line, or document version.
- Retriever
  - Expose a `VectorStoreRetriever` that returns only the most relevant policy passages.
  - Keep retrieval narrow; banking policy Q&A should favor precision over broad recall.
- LLM answer chain
  - Use a chat model with a prompt that forces grounded answers and citations.
  - The model should answer only from retrieved policy context and refuse unsupported claims.
- Guardrails layer
  - Add rules for PII handling, prohibited advice, escalation triggers, and confidence thresholds.
  - Route ambiguous cases to a human reviewer or case management queue.
- Audit logging
  - Log the user question, retrieved document IDs, model version, response text, and decision path.
  - This is mandatory for traceability in regulated environments.
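Before moving to implementation, the audit log entry described above can be sketched as a typed record. The field names here are illustrative assumptions, not a standard schema; adapt them to your bank's logging requirements.

```typescript
// Illustrative audit record for one Q&A interaction.
// Field names are assumptions; adapt to your own logging schema.
interface PolicyQaAuditRecord {
  timestamp: string;
  question: string;
  retrievedDocIds: string[];
  modelVersion: string;
  answer: string;
  decisionPath: "answered" | "escalated" | "refused";
}

function buildAuditRecord(
  question: string,
  retrievedDocIds: string[],
  modelVersion: string,
  answer: string,
  decisionPath: PolicyQaAuditRecord["decisionPath"],
): PolicyQaAuditRecord {
  return {
    timestamp: new Date().toISOString(),
    question,
    retrievedDocIds,
    modelVersion,
    answer,
    decisionPath,
  };
}
```

Emit one record per question, including escalations and refusals, so reviewers can reconstruct the full decision path later.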
Implementation
1. Load policy documents and split them into chunks

Use `DirectoryLoader` for local files during development. In production, replace this with your controlled document source and preserve metadata for audit and filtering.
```ts
import { DirectoryLoader } from "@langchain/community/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

async function loadPolicyDocs() {
  const loader = new DirectoryLoader("./policies", {
    ".txt": (path) => new TextLoader(path),
  });
  const docs = await loader.load();

  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 800,
    chunkOverlap: 120,
  });
  const splitDocs = await splitter.splitDocuments(docs);

  // Attach source-system and jurisdiction metadata for filtering and audit.
  return splitDocs.map((doc) => ({
    ...doc,
    metadata: {
      ...doc.metadata,
      source_system: "policy_repo",
      jurisdiction: doc.metadata?.jurisdiction ?? "unknown",
    },
  }));
}
```
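Before indexing, it is worth checking that each chunk carries the metadata fields the retriever will filter on. A minimal sketch follows; the helper name and the required-key list are my own, derived from the architecture section above.

```typescript
// Required metadata keys are assumptions based on the architecture above.
const REQUIRED_KEYS = ["policy_id", "version", "jurisdiction", "effective_date"];

// Minimal chunk shape; mirrors a LangChain Document's metadata field.
type ChunkLike = { metadata?: Record<string, unknown> };

// Returns the names of required metadata fields missing from a chunk,
// so incomplete chunks can be rejected or repaired before indexing.
function missingMetadata(chunk: ChunkLike): string[] {
  return REQUIRED_KEYS.filter(
    (key) => chunk.metadata?.[key] === undefined || chunk.metadata?.[key] === "",
  );
}
```

Running this check in the ingestion pipeline catches documents that would otherwise be invisible to jurisdiction or version filters.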
2. Build the vector store and retriever

For a bank, use a vector store that fits your deployment constraints. If data residency matters, keep embeddings and indexes in-region.
```ts
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

async function buildRetriever() {
  const docs = await loadPolicyDocs();
  const embeddings = new OpenAIEmbeddings({
    model: "text-embedding-3-large",
  });
  const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);

  // Keep k small: policy Q&A favors precision over broad recall.
  return vectorStore.asRetriever({
    k: 4,
    filter: (doc) => doc.metadata?.jurisdiction === "UK",
  });
}
```
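The hard-coded `"UK"` filter above can be parameterized so one retriever factory serves multiple regions. This small helper is my own illustration, not a LangChain API; it produces the function-style predicate that `MemoryVectorStore` accepts as a filter.

```typescript
// Minimal document shape for filtering; mirrors a LangChain Document's metadata.
type DocLike = { metadata?: Record<string, unknown> };

// Returns a predicate usable as MemoryVectorStore's function-style filter.
// Helper name is illustrative, not part of LangChain.
function makeJurisdictionFilter(jurisdiction: string) {
  return (doc: DocLike) => doc.metadata?.jurisdiction === jurisdiction;
}
```

Building the filter from the caller's session (e.g. the customer's region) keeps one code path while guaranteeing answers never cross jurisdictions.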
3. Create a grounded Q&A chain with citations

This is the core pattern. Use `ChatOpenAI`, `ChatPromptTemplate`, `createStuffDocumentsChain`, and `createRetrievalChain`. The prompt must instruct the model to answer only from context and cite sources.
```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a retail banking policy assistant.
Answer only using the provided context.
If the context does not contain the answer, say you cannot find it in current policy.
Always include cited source names from the context.
Do not provide legal advice or override official policy.`,
  ],
  ["human", "Question: {input}\n\nContext:\n{context}"],
]);

async function buildPolicyQaChain() {
  const retriever = await buildRetriever();
  const llm = new ChatOpenAI({
    model: "gpt-4o-mini",
    temperature: 0,
  });
  const combineDocsChain = await createStuffDocumentsChain({
    llm,
    prompt,
  });
  return createRetrievalChain({
    retriever,
    combineDocsChain,
  });
}

async function askPolicyQuestion(question: string) {
  const chain = await buildPolicyQaChain();
  const result = await chain.invoke({ input: question });
  return result.answer;
}
```
4. Add an escalation gate for risky questions

Retail banking needs explicit handling for complaints, vulnerability indicators, fraud scenarios, account closures, sanctions-related queries, and anything involving personal financial advice. Use a lightweight classifier before answering.
```ts
const RISKY_PATTERNS = [
  /complaint/i,
  /fraud/i,
  /sanction/i,
];

function shouldEscalate(question: string): boolean {
  return RISKY_PATTERNS.some((pattern) => pattern.test(question));
}

async function safeAsk(question: string) {
  if (shouldEscalate(question)) {
    return {
      answer:
        "I can’t handle this directly. Please route this to a trained banker or compliance queue.",
      escalated: true,
    };
  }
  const answer = await askPolicyQuestion(question);
  return { answer, escalated: false };
}
```
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.