How to Integrate LangGraph with LangSmith for Retail Banking RAG
Combining LangGraph for retail banking with LangSmith gives you a clean way to build regulated RAG agents that can reason over customer context, branch policies, product docs, and transaction data without losing observability. LangGraph handles the stateful orchestration and control flow; LangSmith gives you tracing, evaluation, and debugging so you can see exactly how retrieval and generation behave in production.
Prerequisites
- Python 3.10+
- A LangChain-compatible environment
- Access to your retail banking knowledge sources:
  - policy PDFs
  - product brochures
  - FAQ docs
  - internal support articles
- API keys or credentials for:
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - optional: an OpenAI or other LLM provider key
- Installed packages:
  - `langgraph`
  - `langsmith`
  - `langchain`
  - `langchain-openai`
  - `langchain-community` (provides the FAISS integration used below)
  - a vector store package such as `faiss-cpu`, or your managed retriever
- A clear RAG scope for banking use cases:
  - account opening
  - card disputes
  - fee explanations
  - loan eligibility guidance
Integration Steps
1. Set up LangSmith tracing first

If tracing is not enabled from the start, you lose the main reason to integrate LangSmith: visibility into every node in your graph.

```python
import os

os.environ["LANGSMITH_API_KEY"] = "lsv2_..."
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "retail-banking-rag"
```
2. Build the retriever for banking documents

For RAG, keep retrieval separate from orchestration. That makes it easier to swap vector stores later and to evaluate retrieval quality in LangSmith.

```python
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_core.documents import Document

docs = [
    Document(page_content="Savings accounts require KYC verification and proof of address."),
    Document(page_content="Card disputes must be raised within 60 days of the transaction date."),
    Document(page_content="Personal loan approvals depend on income verification and credit checks."),
]

splitter = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=50)
chunks = splitter.split_documents(docs)

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = FAISS.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
```
3. Create a LangGraph workflow for the banking agent

Use a graph when you need branching logic: retrieve, answer, escalate, or ask follow-up questions. This is where LangGraph fits better than a single chain.

```python
from typing import TypedDict, List

from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

class BankingState(TypedDict):
    question: str
    context: List[str]
    answer: str

def retrieve_node(state: BankingState):
    docs = retriever.invoke(state["question"])
    return {"context": [d.page_content for d in docs]}

def answer_node(state: BankingState):
    prompt = f"""
    You are a retail banking assistant.
    Use only this context: {state['context']}
    Question: {state['question']}
    """
    response = llm.invoke([HumanMessage(content=prompt)])
    return {"answer": response.content}

graph = StateGraph(BankingState)
graph.add_node("retrieve", retrieve_node)
graph.add_node("answer", answer_node)
graph.add_edge(START, "retrieve")
graph.add_edge("retrieve", "answer")
graph.add_edge("answer", END)

app = graph.compile()
```
4. Add LangSmith tracing around graph execution

The cleanest pattern is to run the compiled graph under a traced function. That gives you run-level visibility in LangSmith without changing your business logic.

```python
from langsmith import traceable

@traceable(name="retail_banking_rag_query")
def run_banking_agent(question: str):
    result = app.invoke({"question": question, "context": [], "answer": ""})
    return result["answer"]

output = run_banking_agent("What documents do I need to open a savings account?")
print(output)
```
5. Log evaluations in LangSmith for retrieval quality

Don’t stop at traces. For RAG systems in banking, measure whether retrieved context actually supports the answer.

```python
from langsmith.evaluation import evaluate

def target(inputs: dict) -> str:
    return run_banking_agent(inputs["question"])

dataset_inputs = [
    {"question": "How long do I have to dispute a card charge?"},
    {"question": "What do I need for savings account opening?"},
]

results = evaluate(
    target,
    data=dataset_inputs,
    evaluators=[],
    experiment_prefix="banking-rag-eval",
)
print(results)
```
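With `evaluators=[]`, the experiment only records outputs. As a minimal sketch of scoring, the `context_overlap` function below is a hypothetical, dependency-free evaluator (not a built-in LangSmith metric); recent `langsmith` versions accept plain functions that take `inputs`/`outputs` dicts and return a score dict. A production banking setup would use an LLM-as-judge groundedness check instead of lexical overlap.

```python
def context_overlap(inputs: dict, outputs: dict) -> dict:
    """Crude groundedness proxy: share of answer words also found in the question.

    Hypothetical example evaluator for illustration only.
    """
    # When a target returns a plain string, LangSmith wraps it as {"output": ...}.
    answer_words = set(str(outputs.get("output", "")).lower().split())
    question_words = set(inputs["question"].lower().split())
    if not answer_words:
        return {"key": "context_overlap", "score": 0.0}
    score = len(answer_words & question_words) / len(answer_words)
    return {"key": "context_overlap", "score": round(score, 2)}
```

Passing it as `evaluators=[context_overlap]` attaches the score to every run in the experiment.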
Testing the Integration
Run one end-to-end query and confirm that both the answer and trace are produced.
```python
result = app.invoke({
    "question": "How long do I have to dispute a card transaction?",
    "context": [],
    "answer": ""
})
print("Answer:", result["answer"])
```
Expected output:

```text
Answer: Card disputes must be raised within 60 days of the transaction date.
```
In LangSmith, you should also see:
- one project named `retail-banking-rag`
- a trace for `retail_banking_rag_query`
- child spans for the retrieval and generation steps
Real-World Use Cases
- Retail support assistant: answer fee questions, card replacement steps, branch hours, and dispute timelines using approved internal content.
- Onboarding copilot: guide customers through document requirements for savings accounts, current accounts, or personal loans.
- Agent QA and compliance review: use LangSmith traces to inspect hallucinations, weak retrievals, and responses that need escalation to human review.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit