How to Integrate LangGraph for Fintech with LangSmith for RAG
Combining LangGraph for fintech with LangSmith gives you a production-grade RAG pipeline that is observable end to end. In practice, that means your agent can route customer queries, retrieve regulated knowledge, and keep a trace of every decision for debugging, audit, and evaluation.
Prerequisites
- Python 3.10+
- A LangGraph project set up for your fintech agent workflow
- A LangSmith account and API key
- Access to a vector store for your RAG corpus
- Installed packages: `langgraph`, `langchain`, `langsmith`, `langchain-openai` (or another model provider), and `faiss-cpu`, `chromadb`, or your preferred retriever backend
- Environment variables configured: `LANGSMITH_API_KEY`, `LANGSMITH_TRACING=true`, `LANGSMITH_PROJECT=fintech-rag-agent`, and a model provider key such as `OPENAI_API_KEY`
Integration Steps
**1. Set up LangSmith tracing for the whole agent runtime.**
LangSmith works best when tracing is enabled at process start. This gives you request-level visibility into retrieval, tool calls, model outputs, and graph transitions.
```python
import os

# Enable tracing before any graph or model objects are created.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "fintech-rag-agent"

# The API key itself should come from your environment or secret store,
# never from source code. Fail fast if it is missing.
if not os.getenv("LANGSMITH_API_KEY"):
    raise RuntimeError("LANGSMITH_API_KEY is not set")
```
If you run this in a service container, set these in deployment config instead of hardcoding them.
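As a startup sanity check, you can verify every required variable before the service accepts traffic. This is a minimal sketch; the variable list below is an assumption based on the prerequisites above, so adjust it for your model provider.

```python
import os

# Variables this guide assumes; adjust for your provider and secret store.
REQUIRED_TRACING_VARS = [
    "LANGSMITH_API_KEY",
    "LANGSMITH_TRACING",
    "LANGSMITH_PROJECT",
    "OPENAI_API_KEY",
]

def missing_tracing_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_TRACING_VARS if not env.get(name)]
```

Call it once at process start and raise if the returned list is non-empty, so misconfigured containers fail loudly instead of producing untraced runs.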
**2. Build the retrieval layer for your RAG pipeline.**
For fintech use cases, keep retrieval deterministic and scoped. Use metadata filters for product line, jurisdiction, document type, and version so the agent doesn’t answer from stale policy docs.
```python
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document

docs = [
    Document(
        page_content="Mortgage refinancing requires a debt-to-income ratio below 43%.",
        metadata={"product": "mortgage", "jurisdiction": "US", "version": "2024-01"},
    ),
    Document(
        page_content="Chargeback disputes must be filed within 60 days of the statement date.",
        metadata={"product": "cards", "jurisdiction": "US", "version": "2024-01"},
    ),
]

embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
```
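To actually enforce the scoping described above, pass a metadata filter to the retriever (LangChain's FAISS wrapper accepts a `filter` in `search_kwargs`; check your backend's docs for its exact filter syntax). The pure-Python sketch below illustrates the same scoping logic with a stand-in `Doc` class, which is hypothetical and here only to show the semantics:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:  # stand-in for langchain_core.documents.Document
    page_content: str
    metadata: dict = field(default_factory=dict)

def scope_docs(docs, **required):
    """Keep only docs whose metadata matches every required key/value pair."""
    return [d for d in docs if all(d.metadata.get(k) == v for k, v in required.items())]

corpus = [
    Doc("Chargeback disputes must be filed within 60 days.", {"product": "cards", "version": "2024-01"}),
    Doc("Chargeback disputes must be filed within 90 days.", {"product": "cards", "version": "2022-06"}),
]

# Only the current policy version survives the filter.
current = scope_docs(corpus, product="cards", version="2024-01")
```

Filtering at retrieval time, rather than after generation, is what keeps stale policy text out of the prompt entirely.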
**3. Define a LangGraph workflow that routes the query and retrieves context.**
LangGraph is where you make the agent behavior explicit. For fintech, that usually means a state machine with steps like classify intent, retrieve policy context, generate answer, and optionally escalate to compliance review.
```python
from typing import List, TypedDict

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph

class AgentState(TypedDict):
    question: str
    context: List[str]
    answer: str

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def retrieve(state: AgentState):
    docs = retriever.invoke(state["question"])
    return {"context": [d.page_content for d in docs]}

def generate(state: AgentState):
    prompt = f"""
You are a fintech support agent.
Use only the context below.

Question: {state['question']}

Context:
{chr(10).join(state['context'])}
"""
    response = llm.invoke([HumanMessage(content=prompt)])
    return {"answer": response.content}

graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
app = graph.compile()
```
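The graph above is linear; the escalate-to-compliance step mentioned earlier would hang off a conditional edge via LangGraph's `add_conditional_edges`. A minimal routing function might look like the sketch below, where the `intent` field, the intent values, and the node names are all hypothetical:

```python
# Intents that should never be answered automatically (assumed values).
REGULATED_INTENTS = {"dispute", "fraud", "complaint"}

def route_after_classify(state: dict) -> str:
    """Send regulated intents to human review; everything else to retrieval."""
    if state.get("intent") in REGULATED_INTENTS:
        return "compliance_review"
    return "retrieve"

# Wired into the graph roughly like:
# graph.add_conditional_edges(
#     "classify",
#     route_after_classify,
#     {"compliance_review": "compliance_review", "retrieve": "retrieve"},
# )
```

Because the router is a plain function of the state, it is trivial to unit-test, and each routing decision shows up as its own step in the LangSmith trace.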
**4. Attach LangSmith tracing metadata to each run.**
This is where LangSmith becomes useful beyond simple logging. Add tags and metadata so you can slice traces by product line, customer segment, jurisdiction, or incident ID during debugging.
```python
from langsmith import Client

client = Client()  # optional: lets you query and annotate traces programmatically

result = app.invoke(
    {"question": "What is the chargeback filing window?"},
    config={
        "run_name": "fintech_rag_query",
        "tags": ["fintech", "rag", "cards"],
        "metadata": {
            "jurisdiction": "US",
            "product": "cards",
            "release": "2024-01",
        },
    },
)
print(result["answer"])
```
If you need deeper observability on custom nodes, pass the same config through node-level calls or wrap critical functions with traced run metadata in your service layer.
**5. Add evaluation hooks in LangSmith for regression testing.**
Once the graph works, use LangSmith datasets and evaluations to catch answer drift after prompt or retriever changes. This matters in fintech because policy answers must stay consistent across releases.
```python
def predict(inputs: dict):
    return app.invoke(
        {"question": inputs["question"]},
        config={"tags": ["eval", "fintech-rag"]},
    )["answer"]

# Example dataset-style input list for local evaluation
examples = [
    {"question": "How long do customers have to file a chargeback?"},
    {"question": "What DTI ratio is required for refinancing?"},
]

for ex in examples:
    print(predict(ex))
```
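Once a LangSmith dataset exists, you can plug a custom evaluator into `langsmith.evaluation.evaluate`. A minimal exact-match evaluator might look like the sketch below; the `answer` field name and the dataset name are assumptions about how you store references:

```python
def exact_match(outputs: dict, reference_outputs: dict) -> dict:
    """Score 1 when the generated answer exactly matches the reference answer."""
    score = int(outputs.get("answer") == reference_outputs.get("answer"))
    return {"key": "exact_match", "score": score}

# Hypothetical wiring once a dataset named "fintech-rag-regression" exists:
# from langsmith.evaluation import evaluate
# evaluate(predict, data="fintech-rag-regression", evaluators=[exact_match])
```

Exact match is deliberately strict, which suits policy answers that must not drift; for freer-form responses you would swap in a similarity or LLM-as-judge evaluator.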
Testing the Integration
Run a real query through the graph and confirm both retrieval and generation complete under one traced run.
```python
test_input = {"question": "What is the chargeback filing window?"}

output = app.invoke(
    test_input,
    config={
        "run_name": "integration_test",
        "tags": ["smoke-test"],
        "metadata": {"test_case": "chargeback_window"},
    },
)
print("Answer:", output["answer"])
print("Context:", output["context"])
```
Expected output (exact wording may vary slightly with the model):

```text
Answer: Chargeback disputes must be filed within 60 days of the statement date.
Context: ['Chargeback disputes must be filed within 60 days of the statement date.']
```
In LangSmith, you should see:
- one parent trace for `integration_test`
- child spans for `retrieve` and `generate`
- tags like `smoke-test`
- metadata including `test_case=chargeback_window`
Real-World Use Cases
- Customer support agents that answer product-policy questions from approved internal docs while keeping full traceability for compliance review.
- Fraud ops assistants that retrieve investigation playbooks and summarize next actions with auditable reasoning chains.
- Loan servicing copilots that combine borrower-specific context with policy documents to produce consistent responses across channels.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit