# How to Build a Policy Q&A Agent for Pension Funds Using LangGraph in TypeScript
A policy Q&A agent for pension funds answers member, trustee, and operations questions against approved policy documents, scheme rules, and internal procedures. It matters because pension teams need fast responses without drifting into legal advice, exposing sensitive member data, or giving inconsistent guidance across channels.
## Architecture

Build this agent with a small number of controlled components:

- **Policy document store**
  - Approved PDFs, HTML pages, circulars, and trustee resolutions.
  - Keep versioning so every answer can be traced to the exact policy snapshot.
- **Retriever**
  - Fetches the top policy passages for a question.
  - Use metadata filters for scheme, jurisdiction, effective date, and document status.
- **LangGraph workflow**
  - Orchestrates retrieval, answer generation, and guardrails.
  - Gives you deterministic control over when the model can answer and when it must refuse.
- **Policy answer generator**
  - Produces a concise response grounded in retrieved passages.
  - Must cite sources and avoid unsupported claims.
- **Compliance validator**
  - Checks for prohibited outputs like financial advice, benefit promises, or unsupported legal interpretation.
  - Routes risky queries to human review.
- **Audit logger**
  - Stores the question, retrieved docs, answer, model version, timestamps, and decision path.
  - Required for traceability in regulated environments.
## Implementation

### 1) Define the graph state and dependencies

Use a typed state object so every node receives the same contract. For pension funds, include the metadata needed for audit and escalation decisions.
```typescript
import { StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

type PolicyQAState = {
  question: string;
  schemeId: string;
  jurisdiction: "UK" | "IE" | "EU";
  retrievedDocs: Document[];
  draftAnswer?: string;
  finalAnswer?: string;
  needsHumanReview?: boolean;
};

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function retrievePolicies(
  state: PolicyQAState
): Promise<Partial<PolicyQAState>> {
  // Replace with your vector store / search layer.
  const docs: Document[] = [
    new Document({
      pageContent:
        "Normal retirement age is 65 unless the scheme rules state otherwise.",
      metadata: { source: "scheme-rules-v12.pdf", page: 14 },
    }),
  ];
  return { retrievedDocs: docs };
}
```
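The retriever above is a stub. In production you would apply the metadata filters described in the architecture section before ranking passages. Here is a minimal sketch of that filtering logic; it uses a local `PolicyDoc` shape rather than a real vector store, and the `PolicyMeta` field names are assumptions you would map to your own index schema:

```typescript
// Minimal local document shape; a real system would use the vector
// store's own Document type and pass these fields as search filters.
type PolicyMeta = {
  source: string;
  schemeId: string;
  jurisdiction: "UK" | "IE" | "EU";
  status: "approved" | "draft" | "superseded";
};

type PolicyDoc = { pageContent: string; metadata: PolicyMeta };

// Keep only approved documents for the right scheme and jurisdiction.
function filterPolicyDocs(
  docs: PolicyDoc[],
  schemeId: string,
  jurisdiction: PolicyMeta["jurisdiction"]
): PolicyDoc[] {
  return docs.filter(
    (d) =>
      d.metadata.schemeId === schemeId &&
      d.metadata.jurisdiction === jurisdiction &&
      d.metadata.status === "approved"
  );
}
```

Most vector stores let you push these conditions down as search filters instead of post-filtering, which is cheaper and avoids leaking superseded documents into the candidate set.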
### 2) Add answer generation with grounded context

Keep the prompt strict. The model should answer only from the retrieved text and should cite sources in-line.
```typescript
async function draftAnswer(
  state: PolicyQAState
): Promise<Partial<PolicyQAState>> {
  const context = state.retrievedDocs
    .map((doc) => `Source: ${doc.metadata.source}\nText: ${doc.pageContent}`)
    .join("\n\n");

  const messages = [
    {
      role: "system",
      content:
        "You answer pension policy questions using only the provided sources. " +
        "If the sources do not support an answer, say you cannot confirm it. " +
        "Do not provide legal or financial advice.",
    },
    {
      role: "user",
      content: `Question: ${state.question}\n\nSources:\n${context}`,
    },
  ];

  const response = await llm.invoke(messages);
  return { draftAnswer: response.content as string };
}
```
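In-line citation is easier to enforce when each passage carries a stable number the model can refer to as [1], [2], and so on. A small helper sketch; the numbering convention is an assumption of this article, not part of the LangGraph API:

```typescript
type SourceDoc = { pageContent: string; metadata: { source: string } };

// Number each passage so the system prompt can instruct the model
// to cite claims as "[n]" and you can resolve [n] back to a file.
function formatNumberedContext(docs: SourceDoc[]): string {
  return docs
    .map(
      (doc, i) =>
        `[${i + 1}] Source: ${doc.metadata.source}\nText: ${doc.pageContent}`
    )
    .join("\n\n");
}
```

You would swap this in for the plain `context` string above and add one line to the system prompt requiring `[n]` citations.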
### 3) Add a compliance gate before returning an answer

For pension funds, this is where you block advice-like outputs and ambiguous cases. If the query touches transfers, tax treatment, divorce orders, complaints handling, or protected characteristics, route it to review.
```typescript
async function complianceGate(
  state: PolicyQAState
): Promise<Partial<PolicyQAState>> {
  const riskyPatterns = [
    /should i/i,
    /best option/i,
    /transfer/i,
    /tax/i,
    /legal/i,
    /appeal/i,
    /complaint/i,
  ];

  const needsHumanReview =
    riskyPatterns.some((pattern) => pattern.test(state.question)) ||
    !state.draftAnswer ||
    state.draftAnswer.toLowerCase().includes("cannot confirm");

  return { needsHumanReview };
}
```
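It is worth exercising this risk check in isolation before wiring it into the graph, so pattern changes can be unit-tested without an LLM call. A standalone sketch of the same predicate:

```typescript
const riskyQuestionPatterns = [
  /should i/i,
  /best option/i,
  /transfer/i,
  /tax/i,
  /legal/i,
  /appeal/i,
  /complaint/i,
];

// True when the question matches a risky pattern, or the draft is
// missing, or the draft declares itself unconfirmed.
function requiresHumanReview(question: string, draftAnswer?: string): boolean {
  return (
    riskyQuestionPatterns.some((p) => p.test(question)) ||
    !draftAnswer ||
    draftAnswer.toLowerCase().includes("cannot confirm")
  );
}
```

Keeping the predicate pure makes it trivial to maintain a regression suite of real member questions that must, or must not, escalate.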
```typescript
async function finalizeAnswer(
  state: PolicyQAState
): Promise<Partial<PolicyQAState>> {
  if (state.needsHumanReview) {
    return {
      finalAnswer:
        "This question needs review by the pension administration team. " +
        "The system cannot provide a confirmed policy answer from the available sources.",
    };
  }

  return {
    finalAnswer:
      `${state.draftAnswer}\n\nSources:\n` +
      state.retrievedDocs
        .map((doc) => `- ${doc.metadata.source} (page ${doc.metadata.page})`)
        .join("\n"),
  };
}
```
### 4) Wire the graph together and run it

This is the LangGraph pattern you want in production. The flow is explicit: retrieve first, generate second, validate third, then finalize.
```typescript
const graph = new StateGraph<PolicyQAState>({
  // Each channel defaults to "last value wins".
  channels: {
    question: null,
    schemeId: null,
    jurisdiction: null,
    retrievedDocs: null,
    draftAnswer: null,
    finalAnswer: null,
    needsHumanReview: null,
  },
})
  .addNode("retrievePolicies", retrievePolicies)
  .addNode("draftAnswer", draftAnswer)
  .addNode("complianceGate", complianceGate)
  .addNode("finalizeAnswer", finalizeAnswer)
  .addEdge(START, "retrievePolicies")
  .addEdge("retrievePolicies", "draftAnswer")
  .addEdge("draftAnswer", "complianceGate")
  // finalizeAnswer handles both the approved path and the human-review
  // path, so every request flows through it before the graph ends.
  .addEdge("complianceGate", "finalizeAnswer")
  .addEdge("finalizeAnswer", END);

const app = graph.compile();

const result = await app.invoke({
  question: "What is the normal retirement age under this scheme?",
  schemeId: "scheme-a",
  jurisdiction: "UK",
  retrievedDocs: [],
});

console.log(result.finalAnswer);
```
## Production Considerations

- **Deploy inside your data boundary.** Store member data and policy indices in-region. Pension funds often have residency constraints tied to trustee policy or local regulation.
- **Log every decision path.** Persist the question text, document IDs returned by retrieval, model version, node outputs, and whether compliance routed to human review. This is what you need when an auditor asks why a response was produced.
- **Use strict access controls.** Separate member-facing queries from internal trustee/admin workflows. A benefits administrator may see more than a member-facing bot should ever expose.
- **Add deterministic refusal rules.** If retrieval returns no supporting source, or if the query asks for advice rather than a policy explanation, return a refusal plus an escalation path. Do not let the model improvise.
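The audit fields listed above can be captured in a single typed record per request. A minimal sketch; the field names here are assumptions to adapt to your own audit schema:

```typescript
type AuditRecord = {
  timestamp: string;
  question: string;
  retrievedSources: string[];
  modelVersion: string;
  promptVersion: string;
  needsHumanReview: boolean;
  finalAnswer: string;
};

// Build one immutable record per answered question; persist it to
// append-only storage so the trail cannot be edited after the fact.
function buildAuditRecord(input: Omit<AuditRecord, "timestamp">): AuditRecord {
  return { timestamp: new Date().toISOString(), ...input };
}
```

Writing the record from a single graph node (after `finalizeAnswer`) keeps the decision path and the stored answer guaranteed to match.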
## Common Pitfalls

- **Letting the model answer without evidence.** Fix this by requiring retrieved passages before `draftAnswer` runs. If no source supports the claim, force human review.
- **Mixing general HR-style policies with scheme rules.** Pension questions need scheme-specific metadata like effective date, jurisdiction, employer group, and rule version. Filter retrieval on those fields or you will return stale guidance.
- **Skipping auditability.** If you only store the final answer, you cannot defend it later. Store retrieved documents, prompt version, model name, timestamps, and escalation decisions.
- **Treating all questions as safe Q&A.** Pension queries often cross into regulated territory: transfers, taxation, complaints, divorce orders, ill-health retirement. Build explicit risk detection and route those cases out of automation quickly.
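The stale-guidance pitfall can be reduced with a deterministic rule-version selector: take the highest rule version whose effective date is on or before the query date. A sketch, assuming each document carries `effectiveDate` and `version` metadata (those field names are illustrative):

```typescript
type RuleDoc = {
  pageContent: string;
  metadata: { source: string; version: number; effectiveDate: string };
};

// Return the highest-version rule already in effect at `asOf`,
// or undefined if no rule applies yet at that date.
function selectEffectiveRule(docs: RuleDoc[], asOf: Date): RuleDoc | undefined {
  return docs
    .filter((d) => new Date(d.metadata.effectiveDate) <= asOf)
    .sort((a, b) => b.metadata.version - a.metadata.version)[0];
}
```

Because the selection is pure and deterministic, the same question asked on the same date always resolves to the same rule version, which is exactly the property an auditor will ask you to demonstrate.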
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.