# How to Build a Policy Q&A Agent Using LangGraph in TypeScript for Investment Banking
A policy Q&A agent for investment banking answers internal questions about trading, compliance, research, onboarding, surveillance, and client-facing procedures using approved policy sources only. It matters because bankers need fast answers without drifting into unapproved advice, and every response has to be auditable, versioned, and safe enough to stand up in front of compliance.
## Architecture
- **User interface / API layer**
  - Receives questions from bankers, compliance teams, or operations.
  - Passes user identity, desk, region, and request metadata into the graph state.
- **Policy retrieval layer**
  - Pulls from approved sources only: internal policy docs, control manuals, SOPs, and regulatory interpretations.
  - Uses vector search or keyword retrieval with document-level metadata like jurisdiction, effective date, and owner.
- **LangGraph orchestration**
  - Routes the request through classification, retrieval, answer drafting, and escalation nodes.
  - Keeps the flow deterministic enough for audit and review.
- **Guardrail / compliance layer**
  - Blocks unsupported requests like legal advice or trading recommendations.
  - Forces escalation when confidence is low or the question touches restricted topics.
- **Audit logging layer**
  - Stores the question, retrieved documents, model output, policy version IDs, and decision path.
  - Supports post-trade review, compliance testing, and incident investigation.
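To make the audit logging layer concrete, here is one possible shape for a persisted run record. The field names are illustrative assumptions for this sketch, not a house schema:

```typescript
// Illustrative audit record for one agent run. Field names are
// assumptions for this sketch, not a standard schema.
interface AuditRecord {
  runId: string;
  timestamp: string;         // ISO-8601
  userId: string;
  desk: string;
  region: string;
  question: string;
  retrievedDocIds: string[]; // e.g. ["POL-AML-004"]
  policyVersionIds: string[];
  decisionPath: string[];    // node names in execution order
  finalAnswer: string;
}

const example: AuditRecord = {
  runId: "run-0001",
  timestamp: new Date().toISOString(),
  userId: "u-123",
  desk: "ECM",
  region: "UK",
  question: "Can I share this research note before publication?",
  retrievedDocIds: ["POL-MKT-012"],
  policyVersionIds: ["POL-MKT-012@2025-01-01"],
  decisionPath: ["classifyRisk", "retrievePolicies", "draftAnswer"],
  finalAnswer: "Escalate to Compliance before sharing.",
};

console.log(example.decisionPath.join(" -> "));
// classifyRisk -> retrievePolicies -> draftAnswer
```

Persisting the decision path per run is what lets you replay the route a question took long after the fact.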
## Implementation
1. **Define the graph state and supporting types**

Keep the state explicit. In banking systems you want to know exactly what was seen, retrieved, answered, and escalated.

```typescript
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

type PolicyDoc = {
  id: string;
  title: string;
  content: string;
  jurisdiction: string;
  effectiveDate: string;
};

const GraphState = Annotation.Root({
  question: Annotation<string>(),
  userRole: Annotation<string>(),
  jurisdiction: Annotation<string>(),
  retrievedDocs: Annotation<PolicyDoc[]>({
    default: () => [],
    reducer: (_, next) => next,
  }),
  draftAnswer: Annotation<string>({
    default: () => "",
    reducer: (_, next) => next,
  }),
  shouldEscalate: Annotation<boolean>({
    default: () => false,
    reducer: (_, next) => next,
  }),
  auditTrail: Annotation<string[]>({
    default: () => [],
    reducer: (prev, next) => [...prev, ...next],
  }),
});

type PolicyState = typeof GraphState.State;

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});
```
2. **Create the core nodes**

Use small nodes with one job each. That makes it easier to test retrieval quality separately from response generation.

```typescript
async function classifyRisk(state: PolicyState): Promise<Partial<PolicyState>> {
  const q = state.question.toLowerCase();
  const restricted =
    q.includes("should i trade") ||
    q.includes("recommend") ||
    q.includes("buy") ||
    q.includes("sell") ||
    q.includes("legal advice");
  return {
    shouldEscalate: restricted,
    auditTrail: [`classified:${restricted ? "restricted" : "standard"}`],
  };
}

async function retrievePolicies(state: PolicyState): Promise<Partial<PolicyState>> {
  // Replace with your approved retriever backed by internal policy store.
  const docs: PolicyDoc[] = [
    {
      id: "POL-AML-004",
      title: "AML Escalation Procedure",
      content:
        "If a transaction is unusual or lacks economic purpose, escalate to Financial Crime Compliance immediately.",
      jurisdiction: state.jurisdiction,
      effectiveDate: "2025-01-01",
    },
    {
      id: "POL-MKT-012",
      title: "Research Independence Standard",
      content:
        "Research staff must not coordinate publication timing with investment banking deal teams.",
      jurisdiction: state.jurisdiction,
      effectiveDate: "2025-01-01",
    },
  ];
  return {
    retrievedDocs: docs,
    auditTrail: [`retrieved:${docs.map((d) => d.id).join(",")}`],
  };
}

async function draftAnswer(state: PolicyState): Promise<Partial<PolicyState>> {
  const context = state.retrievedDocs
    .map((d) => `[${d.id}] ${d.title}: ${d.content}`)
    .join("\n");
  const prompt = `
You are a policy assistant for investment banking. Answer only from the provided policy context. If the question is outside policy scope or requires judgment on a deal/trade/client matter, say to escalate to Compliance.

Question: ${state.question}

Policy Context:
${context}
`;
  const res = await llm.invoke(prompt);
  return {
    draftAnswer:
      typeof res.content === "string" ? res.content : JSON.stringify(res.content),
    auditTrail: ["drafted_answer"],
  };
}

async function escalate(_state: PolicyState): Promise<Partial<PolicyState>> {
  return {
    draftAnswer:
      "This question needs human review by Compliance before any action is taken.",
    auditTrail: ["escalated_to_compliance"],
  };
}
```
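Because each node is a plain async function that returns a partial state, you can check its behavior without compiling the graph or calling a model. A minimal standalone sketch (it inlines a trimmed copy of the classification logic so it runs on its own):

```typescript
// Trimmed copy of the classifyRisk keyword logic, inlined so this
// sketch runs without the full graph or an LLM call.
function isRestricted(question: string): boolean {
  const q = question.toLowerCase();
  return (
    q.includes("should i trade") ||
    q.includes("recommend") ||
    q.includes("buy") ||
    q.includes("sell") ||
    q.includes("legal advice")
  );
}

// Restricted phrasing routes to escalation...
console.log(isRestricted("Should I trade ahead of the earnings call?")); // true
// ...while a procedural policy question proceeds to retrieval.
console.log(isRestricted("What is the AML escalation procedure?")); // false
```

Keyword matching is only a first pass; in production you would back it with a classifier, but keeping a deterministic layer makes the routing decision reproducible.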
3. **Wire routing with `StateGraph`, `addNode`, `addEdge`, and `addConditionalEdges`**
This is where LangGraph earns its keep. The routing logic stays explicit instead of hiding inside a giant prompt.
```typescript
const graph = new StateGraph(GraphState)
.addNode("classifyRisk", classifyRisk)
.addNode("retrievePolicies", retrievePolicies)
.addNode("draftAnswer", draftAnswer)
.addNode("escalate", escalate)
.addEdge("__start__", "classifyRisk")
.addConditionalEdges(
"classifyRisk",
(state) => (state.shouldEscalate ? "escalate" : "retrievePolicies"),
{
escalate: "escalate",
retrievePolicies: "retrievePolicies",
}
)
.addEdge("retrievePolicies", "draftAnswer")
.addEdge("draftAnswer", "__end__")
.addEdge("escalate", "__end__");
const app = graph.compile();
```

4. **Run it with banking-relevant metadata**

In production you should pass user identity and jurisdiction through the request boundary. That lets you enforce residency rules and produce an audit trail tied to a specific desk or entity.

```typescript
async function main() {
  const result = await app.invoke({
    question:
      "Can I share this research note with the investment banking team before publication?",
    userRole: "analyst",
    jurisdiction: "UK",
    retrievedDocs: [],
    draftAnswer: "",
    shouldEscalate: false,
    auditTrail: [],
  });
  console.log(result.draftAnswer);
  console.log(result.auditTrail);
}

main().catch(console.error);
```
## Production Considerations
- **Data residency**
- Keep retrieval indexes and logs in-region for UK/EU/APAC desks when required.
- Do not send restricted client data or non-public deal information to external services unless your control framework explicitly allows it.
- **Auditability**
- Persist every run with question text, user identity, retrieved doc IDs, model version, prompt template hash, and final route taken.
- Compliance teams will ask why an answer was returned; make that answer reconstructible.
- **Guardrails**
- Add deterministic blocks for trading recommendations, MNPI handling advice beyond policy text, legal interpretation requests, and anything that looks like client-specific judgment.
- Escalate low-confidence answers instead of hallucinating a policy citation.
- **Monitoring**
- Track escalation rate by desk and region.
- Alert on repeated queries about restricted topics because that often signals bad UX or a control gap.
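The deterministic guardrail blocks above can start as a pattern table checked before any model call, so the blocking decision is reproducible in an audit. A sketch with assumed patterns (the real list belongs to your compliance team, not the codebase):

```typescript
// Illustrative pre-filter patterns; labels and regexes are
// assumptions for this sketch, not an official blocklist.
const BLOCKED_PATTERNS: [string, RegExp][] = [
  [
    "trading_recommendation",
    /\b(should (i|we) (buy|sell|trade)|recommend (buying|selling))\b/i,
  ],
  [
    "legal_interpretation",
    /\b(legal advice|legal opinion|interpret (the|this) (law|regulation))\b/i,
  ],
  ["mnpi_handling", /\b(mnpi|inside information|material non-?public)\b/i],
];

// Returns the matched block label, or null if the question may proceed.
function deterministicBlock(question: string): string | null {
  for (const [label, pattern] of BLOCKED_PATTERNS) {
    if (pattern.test(question)) return label;
  }
  return null;
}

console.log(deterministicBlock("Can you give legal advice on this clause?"));
// "legal_interpretation"
console.log(deterministicBlock("What is the client onboarding checklist?"));
// null
```

Running this check before the model means a blocked request never generates a draft answer, and the block label itself can go straight into the audit trail.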
## Common Pitfalls
1. **Using generic RAG without policy metadata**
If you do not tag documents by jurisdiction, business line, owner approval date, and effective date, you will return stale or non-applicable guidance. Fix this by filtering retrieval before generation.
2. **Letting the model answer everything**
A policy agent is not a free-form chatbot. If the question touches trades, clients, disclosures, research timing, or MNPI handling outside clear policy text, route to escalation instead of asking the model to “do its best.”
3. **Skipping traceability**
Investment banking controls depend on being able to replay decisions. Store node outputs and document IDs per run; otherwise you cannot explain why an answer was produced during an audit or incident review.
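The fix for the first pitfall can be a hard filter applied before anything reaches the prompt. A sketch assuming the jurisdiction and effective-date metadata fields from the implementation above:

```typescript
// Minimal doc shape with the metadata this filter needs.
type PolicyDoc = {
  id: string;
  jurisdiction: string;
  effectiveDate: string; // ISO date, e.g. "2025-01-01"
};

// Keep only docs that apply to the requesting desk's jurisdiction
// and were already effective at query time.
function filterApplicable(
  docs: PolicyDoc[],
  jurisdiction: string,
  asOf: Date
): PolicyDoc[] {
  return docs.filter(
    (d) => d.jurisdiction === jurisdiction && new Date(d.effectiveDate) <= asOf
  );
}

const docs: PolicyDoc[] = [
  { id: "POL-AML-004", jurisdiction: "UK", effectiveDate: "2025-01-01" },
  { id: "POL-AML-009", jurisdiction: "US", effectiveDate: "2025-01-01" },
  { id: "POL-AML-011", jurisdiction: "UK", effectiveDate: "2026-06-01" },
];

console.log(filterApplicable(docs, "UK", new Date("2025-06-30")).map((d) => d.id));
// [ "POL-AML-004" ] — the US doc and the not-yet-effective UK doc are dropped
```

Filtering before generation means the model never sees a non-applicable policy, which is cheaper and safer than asking it to ignore one.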
---
## Keep learning
- [The complete AI Agents Roadmap](/blog/ai-agents-roadmap-2026) — my full 8-step breakdown
- [Free: The AI Agent Starter Kit](/starter-kit) — PDF checklist + starter code
- [Work with me](/contact) — I build AI for banks and insurance companies
*By Cyprian Aarons, AI Consultant at [Topiax](https://topiax.xyz).*