# How to Build a Policy Q&A Agent Using LangChain in Python for Investment Banking
A policy Q&A agent for investment banking answers internal questions about compliance, trading restrictions, research policies, KYC/AML procedures, and escalation rules. It matters because bankers need fast answers, but the firm also needs traceability, source grounding, and tight control over what the assistant can say.
## Architecture
- **Policy document loader**
  - Ingests PDFs, Word docs, SharePoint exports, or internal wiki pages.
  - Normalizes them into `Document` objects with metadata like `policy_name`, `version`, `owner`, and `jurisdiction`.
- **Chunking and embedding pipeline**
  - Splits policies into retrieval-friendly chunks using `RecursiveCharacterTextSplitter`.
  - Converts chunks into vectors with an embedding model such as `OpenAIEmbeddings` or a private model hosted in your environment.
- **Vector store retriever**
  - Uses a vector database like FAISS, Pinecone, or pgvector.
  - Returns only the most relevant policy sections for a question.
- **Grounded answer chain**
  - Uses LangChain's `RetrievalQA` or a modern LCEL chain built from `ChatPromptTemplate`, a retriever, and an LLM.
  - Forces answers to cite retrieved policy text instead of improvising.
- **Guardrails and refusal logic**
  - Rejects questions outside policy scope.
  - Blocks advice that looks like legal interpretation, deal-specific judgment, or confidential client guidance.
- **Audit logging layer**
  - Stores the question, retrieved sources, model response, user identity, timestamp, and policy version.
  - This is non-negotiable in investment banking.
## Implementation
### 1. Load policies and build the vector index

Use a loader that matches your source system. For local PDFs, `PyPDFLoader` is fine; in production you'll usually wrap SharePoint or an internal document service. The important part is preserving metadata for audit and version control.
```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Load the policy PDF; page-level metadata is preserved on each Document
loader = PyPDFLoader("policies/market_conduct_policy.pdf")
documents = loader.load()

splitter = RecursiveCharacterTextSplitter(
    chunk_size=800,
    chunk_overlap=120,
)
chunks = splitter.split_documents(documents)

# Embed the chunks and build an in-memory FAISS index
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
vectorstore = FAISS.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```
### 2. Build a grounded prompt and retrieval chain

For banking use cases, don't let the model answer from memory. Make it quote the retrieved context and say "I don't know" when the policy text does not support an answer. `ChatPromptTemplate` plus `create_stuff_documents_chain` is the cleanest pattern.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains.retrieval import create_retrieval_chain

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a policy Q&A assistant for investment banking. "
     "Answer only using the provided context. "
     "If the context does not contain the answer, say you cannot find it in the policy. "
     "Do not provide legal advice or speculate."),
    ("human",
     "Question: {input}\n\nContext:\n{context}"),
])

document_chain = create_stuff_documents_chain(llm, prompt)
qa_chain = create_retrieval_chain(retriever, document_chain)

result = qa_chain.invoke({"input": "Can a banker discuss pending deals with research?"})
print(result["answer"])
```
### 3. Add source display and audit logging

In regulated environments you need more than a plain answer. You want the exact sources returned by retrieval so compliance can review what the model saw. The chain returns the retrieved documents under `context`, which you should persist alongside the response.
```python
import json
from datetime import datetime, timezone

question = "What is the escalation process for suspected AML activity?"
result = qa_chain.invoke({"input": question})

audit_record = {
    # timezone-aware UTC; datetime.utcnow() is deprecated
    "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    "user_id": "jdoe",
    "question": question,
    "answer": result["answer"],
    "sources": [
        {
            "source": doc.metadata.get("source"),
            "page": doc.metadata.get("page"),
            "policy_name": doc.metadata.get("policy_name"),
            "version": doc.metadata.get("version"),
        }
        for doc in result["context"]
    ],
}

# Append-only JSONL keeps the log easy to ship to a SIEM or data lake
with open("audit_log.jsonl", "a") as f:
    f.write(json.dumps(audit_record) + "\n")
```
### 4. Wrap it with policy-specific guardrails

You should filter out questions that ask for deal advice, personal trading guidance, client-specific recommendations, or anything outside policy scope. A simple keyword gate is not enough; use intent classification or rules plus human escalation for risky queries.
```python
def is_in_scope(question: str) -> bool:
    """Cheap first-pass gate; pair with an intent classifier in production."""
    blocked_terms = [
        "should i buy", "trade this", "client recommendation",
        "inside information", "how to evade", "bypass",
    ]
    q = question.lower()
    return not any(term in q for term in blocked_terms)


def answer_policy_question(question: str):
    if not is_in_scope(question):
        return {
            "answer": (
                "This request is outside policy Q&A scope. "
                "Escalate to Compliance or Legal."
            ),
            "escalate": True,
        }
    result = qa_chain.invoke({"input": question})
    return {
        "answer": result["answer"],
        "escalate": False,
        "sources": result["context"],
    }
```
## Production Considerations
- **Data residency**
  - Keep embeddings, vector stores, logs, and LLM traffic inside approved regions.
  - If your bank has jurisdictional constraints, do not send policy content to unapproved SaaS endpoints.
- **Auditability**
  - Persist user identity, timestamp, retrieved chunks, model version, prompt version, and policy version.
  - Compliance teams will ask why an answer was produced; make that answer reconstructible.
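One lightweight way to make an answer reconstructible is to record the model and prompt versions with every audit entry. A minimal sketch, assuming a hash of the prompt text serves as its version stamp (the field names `model_version` and `prompt_version` are illustrative, not a standard schema):

```python
import hashlib
import json

# Illustrative pins; in practice read these from your deployment config
MODEL_VERSION = "gpt-4o-mini-2024-07-18"
PROMPT_TEMPLATE = "Answer only using the provided context..."


def prompt_fingerprint(template: str) -> str:
    """Hash the prompt text so any silent edit changes the recorded version."""
    return hashlib.sha256(template.encode("utf-8")).hexdigest()[:12]


def build_audit_record(question: str, answer: str) -> dict:
    return {
        "question": question,
        "answer": answer,
        "model_version": MODEL_VERSION,
        "prompt_version": prompt_fingerprint(PROMPT_TEMPLATE),
    }


record = build_audit_record("Example question", "Example answer")
print(json.dumps(record, indent=2))
```

Hashing the prompt rather than trusting a manually bumped version number means an edited prompt can never masquerade as the old one in the log.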
- **Guardrails**
  - Add strict refusal behavior for anything resembling legal interpretation or transaction advice.
  - Route ambiguous questions to human compliance review instead of forcing an answer.
- **Monitoring**
  - Track retrieval hit rate, refusal rate, hallucination reports, latency, and top unanswered questions.
  - Alert on sudden spikes in "cannot find it in the policy," which often means stale documents or broken ingestion.
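The refusal-rate metric falls straight out of the audit log. A minimal sketch, assuming the JSONL format from the audit-logging step and that refusals contain the wording from the system prompt (the phrase match is a heuristic, so keep it in sync with that prompt):

```python
import json

# Tied to the refusal wording in the system prompt; update both together
REFUSAL_PHRASE = "cannot find it in the policy"


def refusal_rate(log_path: str) -> float:
    """Fraction of logged answers that were refusals; spikes suggest stale docs."""
    total = refusals = 0
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            total += 1
            if REFUSAL_PHRASE in record["answer"].lower():
                refusals += 1
    return refusals / total if total else 0.0
```

Run this on a schedule over a rolling window and alert when the rate jumps past its baseline.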
## Common Pitfalls
- **Using raw chat completion without retrieval**
  - This turns your agent into a confident guesser.
  - Fix it by grounding every response in retrieved documents through `create_retrieval_chain` or an equivalent RAG flow.
- **Ignoring document versioning**
  - In investment banking, old policies are often superseded but still searchable.
  - Fix it by storing `version`, `effective_date`, and `status` (active/retired) in metadata and filtering retrievers accordingly.
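A minimal sketch of that filter, using plain dicts to stand in for chunk metadata. In production you would usually push the condition into the vector store itself (for example via a `filter` search kwarg) rather than filtering after retrieval, but the logic is the same:

```python
def active_chunks(chunks: list[dict]) -> list[dict]:
    """Drop retired policy versions before they reach the prompt."""
    return [c for c in chunks if c.get("status") == "active"]


chunks = [
    {"policy_name": "Market Conduct", "version": "3.1", "status": "active"},
    {"policy_name": "Market Conduct", "version": "2.0", "status": "retired"},
]
print(active_chunks(chunks))
```

Defaulting to exclusion (`c.get("status")` rather than assuming active) means chunks with missing metadata are dropped, which is the safer failure mode for policy content.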
- **No escalation path for ambiguous questions**
  - If the agent answers borderline compliance questions directly, you create risk.
  - Fix it by classifying high-risk intents and sending them to Compliance or Legal with full context attached.
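A minimal routing sketch along those lines. The intent names and marker phrases are illustrative; in production you would typically back this with a proper intent classifier and keep rules like these as a hard override:

```python
# Illustrative high-risk intents; extend from real compliance review feedback
HIGH_RISK_MARKERS = {
    "legal_interpretation": ["is it legal", "interpret", "liable"],
    "transaction_advice": ["should we proceed", "recommend the deal"],
}


def route_question(question: str) -> str:
    """Return 'answer' for in-scope questions, or an escalation route label."""
    q = question.lower()
    for intent, markers in HIGH_RISK_MARKERS.items():
        if any(m in q for m in markers):
            return f"escalate:{intent}"
    return "answer"


print(route_question("Is it legal to share this memo?"))  # escalate:legal_interpretation
print(route_question("What is the gift policy limit?"))   # answer
```

Returning a labeled route rather than a bare boolean lets the caller attach the intent, the question, and the retrieved context to the ticket it opens for Compliance or Legal.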
If you build this pattern correctly, you get something useful: fast internal answers with citations, controlled scope, and an audit trail that survives regulatory scrutiny. That is what makes a policy Q&A agent viable in investment banking instead of just another chatbot demo.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.