How to Integrate LangChain with Slack for Investment Banking RAG
If you’re building an AI agent for investment banking, Slack is usually where the work happens, and LangChain is where the retrieval and orchestration live. Combining them lets bankers ask questions in Slack and get grounded answers from internal deal docs, research notes, CIMs, or policy files without leaving the channel.
Prerequisites
- Python 3.10+
- A Slack app created in your workspace
- A Slack bot token with the `chat:write`, `channels:history`, and (if needed) `groups:history` scopes
- The Slack signing secret, if you plan to receive interactive events or slash commands
- LangChain installed with your model provider
- A vector store for RAG, such as FAISS, Pinecone, or Chroma
- Document sources for investment banking content:
  - deal memos
  - pitch books
  - market research
  - compliance policies
  - internal FAQs
- Environment variables set (see the example `.env` below):
  - `SLACK_BOT_TOKEN`
  - `SLACK_APP_TOKEN` if using Socket Mode
  - `OPENAI_API_KEY` or equivalent model key
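For local development, these typically live in a `.env` file that `python-dotenv` loads at startup. A minimal sketch with placeholder values:

```bash
# .env (placeholder values; never commit real tokens)
SLACK_BOT_TOKEN=xoxb-your-bot-token
# Only needed for Socket Mode
SLACK_APP_TOKEN=xapp-your-app-token
OPENAI_API_KEY=sk-your-openai-key
```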
Integration Steps
1. Install the core packages

Start with Slack’s Python SDK and the LangChain components for retrieval.

```bash
pip install slack-sdk slack-bolt langchain langchain-openai langchain-community faiss-cpu python-dotenv
```
2. Load banking documents into a vector store

For RAG, your agent needs a retriever over internal content. This example uses LangChain loaders plus FAISS.

```python
from dotenv import load_dotenv
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

load_dotenv()

# Load every .txt file under ./banking_docs
loader = DirectoryLoader(
    "./banking_docs",
    glob="**/*.txt",
    loader_cls=TextLoader,
    show_progress=True,
)
docs = loader.load()

# Split documents into overlapping chunks sized for embedding
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
chunks = splitter.split_documents(docs)

# Embed the chunks and expose a top-4 retriever over the FAISS index
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = FAISS.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```
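Re-embedding the whole corpus on every restart gets slow. FAISS can persist the index locally; a minimal sketch, assuming a `./faiss_index` directory:

```python
# Save the index once after building it
vectorstore.save_local("./faiss_index")

# On later startups, load it instead of re-embedding.
# allow_dangerous_deserialization is required because loading uses pickle;
# only enable it for index files you created yourself.
vectorstore = FAISS.load_local(
    "./faiss_index",
    embeddings,
    allow_dangerous_deserialization=True,
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```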
3. Build the LangChain RAG chain

Use a chat model plus a retrieval chain so answers are grounded in your internal corpus.

```python
from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# "stuff" packs the retrieved chunks into a single prompt;
# return_source_documents=True keeps the sources for citations later
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)

result = qa_chain.invoke({"query": "Summarize the latest guidance on debt covenant language."})
print(result["result"])
```
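RetrievalQA still works, but newer LangChain releases treat it as a legacy chain. If you’re on a recent version, here’s a roughly equivalent sketch using `create_retrieval_chain`; the prompt wording is illustrative, not from the original setup:

```python
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

# Illustrative prompt; tune the system message for your own corpus.
# {context} receives the retrieved chunks, {input} the user question.
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only the provided context.\n\n{context}"),
    ("human", "{input}"),
])

combine_docs_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, combine_docs_chain)

result = rag_chain.invoke({"input": "Summarize the latest guidance on debt covenant language."})
print(result["answer"])  # result["context"] holds the retrieved documents
```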
4. Wire Slack events to the LangChain agent

The cleanest pattern is a Slack bot that listens for mentions or slash commands, sends the user’s text to the RAG chain, then posts the answer back.

```python
import os
import re

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("app_mention")
def handle_app_mention(event, say):
    # Strip the <@USERID> mention by pattern instead of hardcoding the bot's ID
    user_text = event.get("text", "")
    query = re.sub(r"<@[^>]+>", "", user_text).strip()
    response = qa_chain.invoke({"query": query})
    say(text=f"*RAG Answer:*\n{response['result']}")

if __name__ == "__main__":
    handler = SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"])
    handler.start()
```
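The same chain can also back a slash command. A minimal sketch, assuming a hypothetical `/askdocs` command configured in your Slack app:

```python
@app.command("/askdocs")
def handle_askdocs(ack, command, respond):
    # Slack expects an acknowledgment within 3 seconds,
    # so ack first and answer after the chain finishes
    ack()
    response = qa_chain.invoke({"query": command["text"]})
    respond(f"*RAG Answer:*\n{response['result']}")
```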
5. Post results back into a thread with source citations

In banking workflows, you want traceability. Post the answer in-thread and include source filenames so users can verify where the answer came from.

```python
@app.event("app_mention")
def handle_app_mention(event, say):
    user_text = event.get("text", "")
    query = re.sub(r"<@[^>]+>", "", user_text).strip()
    response = qa_chain.invoke({"query": query})
    answer = response["result"]

    # Deduplicate source filenames while preserving retrieval order
    sources = response.get("source_documents", [])
    source_names = dict.fromkeys(
        doc.metadata.get("source", "unknown") for doc in sources
    )
    source_list = [f"- {name}" for name in source_names]

    say(
        text=f"*Answer:*\n{answer}\n\n*Sources:*\n" + "\n".join(source_list),
        # Reply inside an existing thread if the mention was threaded,
        # otherwise start a new thread on the mention itself
        thread_ts=event.get("thread_ts", event["ts"]),
    )
```
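Long answers can bump into Slack’s message size limits. A simple guard helps; the 3,500-character cutoff below is an arbitrary safety margin, not an official Slack constant:

```python
MAX_CHARS = 3500  # arbitrary safety margin, not an official Slack limit

def truncate_for_slack(text: str) -> str:
    # Trim overly long answers so the message posts cleanly
    if len(text) <= MAX_CHARS:
        return text
    return text[:MAX_CHARS] + "\n_(truncated)_"
```

Wrap `answer` with `truncate_for_slack(answer)` before passing it to `say`.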
Testing the Integration
Run the bot locally, mention it in Slack, and verify that it returns a grounded response plus sources.
```python
test_query = "What is our policy on EBITDA add-backs in management presentations?"
response = qa_chain.invoke({"query": test_query})

print("ANSWER:")
print(response["result"])
print("\nSOURCES:")
for doc in response.get("source_documents", []):
    print(doc.metadata.get("source"))
```
Expected output:
```
ANSWER:
The policy allows EBITDA add-backs only when they are clearly documented, non-recurring, and approved by the deal lead...

SOURCES:
./banking_docs/compliance_policy.txt
./banking_docs/deal_memo_q3.txt
```
Real-World Use Cases
- **Deal team Q&A in Slack.** Analysts ask questions like “What’s the latest debt package precedent?” and get answers sourced from internal docs.
- **Compliance-aware research assistant.** The bot can retrieve policy language before someone shares client-facing materials or draft commentary.
- **Internal knowledge search for bankers.** Teams use one Slack command to search pitch books, CIMs, and prior deal notes without hunting through folders.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit