How to Integrate LangGraph for wealth management with LangSmith for RAG
Wealth management agents fail in predictable ways: they answer without grounding, they lose state across multi-step workflows, and nobody can trace why a recommendation was made. Combining LangGraph for wealth management with LangSmith gives you the control plane for agent orchestration plus the observability layer for retrieval-heavy workflows, which is what you need when the output can affect portfolio decisions, suitability checks, or client communications.
Prerequisites
- Python 3.10+
- A LangChain-compatible environment
- `langgraph`, `langsmith`, `langchain-openai`, and a vector store package such as `chromadb` or `faiss-cpu`
- API keys configured:
  - `OPENAI_API_KEY`
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - `LANGSMITH_PROJECT=wealth-rag`
- A retriever index containing wealth-management documents:
  - product sheets
  - risk policy docs
  - client communication templates
  - compliance guidance
- Basic familiarity with:
  - LangGraph stateful graphs
  - RAG pipelines
  - LangSmith traces and datasets
Integration Steps
1) Set up LangSmith tracing first
LangSmith needs to see every retrieval and generation step. Turn tracing on before you build the graph so your workflow is observable from the first run.
```python
import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "wealth-rag"
os.environ["LANGCHAIN_TRACING_V2"] = "true"  # older variable name; harmless to set both

# OPENAI_API_KEY and LANGSMITH_API_KEY should already be exported in your
# shell; re-assigning them from os.getenv() would fail if they were unset.
```
If you want explicit client control, initialize the SDK directly:
```python
from langsmith import Client

client = Client()
project = client.create_project(project_name="wealth-rag")
print(project)
```
2) Build the retriever used by the graph
For wealth management, retrieval should be narrow and auditable. Use chunked policy docs or product docs, then expose a retriever that returns source metadata for trace inspection.
```python
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

docs = [
    Document(
        page_content="Model portfolios require client risk profile validation before recommendation.",
        metadata={"source": "policy/risk-policy.md", "type": "policy"},
    ),
    Document(
        page_content="Balanced portfolio targets 60% equities, 35% fixed income, 5% cash.",
        metadata={"source": "products/balanced-fund.md", "type": "product"},
    ),
]

vectorstore = Chroma.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
    collection_name="wealth_docs",
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
```
3) Define a LangGraph workflow for RAG orchestration
Use LangGraph to separate retrieval, answer drafting, and compliance review. That structure matters in wealth workflows because you often need deterministic checks before returning anything to the user.
```python
from typing import TypedDict, List

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

class WealthState(TypedDict):
    question: str
    context: List[str]
    answer: str

def retrieve(state: WealthState):
    docs = retriever.invoke(state["question"])
    return {"context": [d.page_content for d in docs]}

def generate(state: WealthState):
    prompt = f"""
You are a wealth management assistant.
Use only the context below.

Question: {state['question']}

Context:
{chr(10).join(state['context'])}
"""
    response = llm.invoke(prompt)
    return {"answer": response.content}

graph = StateGraph(WealthState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
app = graph.compile()
```
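This step's framing mentions compliance review, but the compiled graph so far only has retrieve and generate nodes. A deterministic gate can sit between generation and the user; here is a minimal sketch of the check itself (the `PROHIBITED` phrases and the verdict keys are illustrative assumptions, not part of LangGraph):

```python
# Illustrative phrases a wealth-management workflow should never emit.
PROHIBITED = ["guaranteed returns", "risk-free", "cannot lose"]

def compliance_check(state: dict) -> dict:
    """Deterministic gate: escalate drafts that use prohibited language
    or that were produced with no retrieved context at all."""
    answer = state.get("answer", "").lower()
    violations = [p for p in PROHIBITED if p in answer]
    if violations or not state.get("context"):
        return {"verdict": "escalate", "violations": violations}
    return {"verdict": "approve", "violations": []}
```

To wire it in, add it as a third node and route on the verdict, for example with `graph.add_node("compliance", compliance_check)` plus `graph.add_conditional_edges(...)` sending "escalate" to a human-review node instead of `END`, so flagged drafts never reach the client.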
4) Wrap graph execution with LangSmith metadata
This is where the integration becomes useful. Add run metadata so every request can be filtered by client segment, advisor team, or workflow type inside LangSmith.
```python
from langsmith.run_helpers import traceable

@traceable(name="wealth_rag_workflow", metadata={"system": "wealth-management"})
def answer_question(question: str):
    result = app.invoke({"question": question, "context": [], "answer": ""})
    return result["answer"]

result = answer_question("Can I recommend a balanced fund to a moderate-risk client?")
print(result)
```
If you need richer trace grouping per customer journey, pass tags and metadata through your application layer:
```python
from langsmith import traceable

@traceable(run_type="chain", name="advisor_rag")
def advisor_rag(question: str, client_segment: str):
    out = app.invoke({"question": question, "context": [], "answer": ""})
    return {
        "answer": out["answer"],
        "segment": client_segment,
    }
```
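Because the compiled graph is a standard runnable, per-request tags and metadata can also be passed through the `config` argument of `app.invoke`, which LangSmith picks up automatically when tracing is on. A small helper makes the shape explicit (the helper name and tag format are my own convention):

```python
def trace_config(client_segment: str, advisor_team: str) -> dict:
    """Build a runnable config whose tags and metadata become
    filterable fields on the resulting LangSmith trace."""
    return {
        "tags": [f"segment:{client_segment}", f"team:{advisor_team}"],
        "metadata": {
            "client_segment": client_segment,
            "advisor_team": advisor_team,
        },
    }

# Usage sketch:
# app.invoke({"question": q, "context": [], "answer": ""},
#            config=trace_config("moderate-risk", "emea"))
```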
5) Add evaluation hooks in LangSmith for regression testing
Once the graph is live, you need repeatable tests. Use a LangSmith dataset with representative wealth questions and compare outputs across prompt or retriever changes.
```python
from langsmith import Client

client = Client()
dataset = client.create_dataset(
    dataset_name="wealth-rag-eval",
    description="Advisor questions for RAG regression testing",
)
client.create_example(
    inputs={"question": "What is the allocation of the balanced portfolio?"},
    outputs={"expected": "60% equities, 35% fixed income, 5% cash"},
    dataset_id=dataset.id,
)
```
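To actually score runs against that dataset, LangSmith's `evaluate` entry point accepts plain-function evaluators. A minimal substring check is sketched below; a real suitability evaluator would be stricter, and the function name is mine:

```python
def contains_expected(outputs: dict, reference_outputs: dict) -> dict:
    """Score 1 when the expected allocation text appears verbatim in
    the generated answer, else 0."""
    expected = reference_outputs.get("expected", "").lower()
    answer = outputs.get("answer", "").lower()
    return {"key": "contains_expected", "score": int(expected in answer)}

# Running it against the dataset (requires LANGSMITH_API_KEY):
# from langsmith import evaluate
# evaluate(
#     lambda inputs: {"answer": answer_question(inputs["question"])},
#     data="wealth-rag-eval",
#     evaluators=[contains_expected],
# )
```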
Testing the Integration
Run one end-to-end query and confirm that:
- LangGraph retrieves context from your index
- The final answer is grounded in retrieved text
- A trace appears in LangSmith under your project
```python
response = answer_question("What are the risk controls before recommending a model portfolio?")
print(response)
```
Expected output (wording will vary between model runs):

```
Model portfolios require validation of the client's risk profile before any recommendation is made. The workflow should confirm suitability against policy guidance and document the rationale.
```
In LangSmith, you should see a trace named `wealth_rag_workflow` with child spans for retrieval and generation. If tracing is configured correctly, each run will include prompt inputs, retrieved documents, latency, and model output.
Real-World Use Cases
- **Advisor copilot for suitability checks**
  - Retrieve policy rules and product facts before drafting recommendations.
  - Trace every decision path in LangSmith for audit review.
- **Client Q&A over approved knowledge**
  - Answer questions about fees, allocations, tax wrappers, or fund constraints using only approved documents.
  - Use LangGraph to enforce retrieval-first behavior.
- **Compliance-aware document drafting**
  - Generate meeting notes or follow-up emails from approved templates and policy snippets.
  - Compare outputs across versions in LangSmith when prompts or documents change.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.