How to Integrate LangGraph with LangSmith for Multi-Agent Wealth Management Systems
If you're building a wealth management agent, you need more than a single chatbot loop. You need orchestration across portfolio analysis, risk checks, compliance review, and client communication, with traceability across every step.
That is where LangGraph and LangSmith fit together. LangGraph gives you the multi-agent workflow for wealth management tasks, and LangSmith gives you observability, tracing, and evaluation so you can debug and govern the system in production.
Prerequisites
- Python 3.10+
- `langgraph`, `langchain`, and `langsmith`
- An LLM provider configured through LangChain, such as OpenAI
- A LangSmith account and API key
- Environment variables set:
  - `LANGCHAIN_TRACING_V2=true`
  - `LANGCHAIN_API_KEY=...`
  - `LANGCHAIN_PROJECT=wealth-management-agents`
  - `OPENAI_API_KEY=...`

Install the packages:

```shell
pip install langgraph langchain langsmith langchain-openai
```
Integration Steps
1. Set up LangSmith tracing first.

You want traces before you wire in your graph. That way every node execution, tool call, and model response is captured from the start.

```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls__your_api_key"
os.environ["LANGCHAIN_PROJECT"] = "wealth-management-agents"
os.environ["OPENAI_API_KEY"] = "sk-your-openai-key"
```
2. Define your wealth management agents as graph nodes.

A practical wealth workflow usually has at least three responsibilities:

- portfolio analysis
- risk/compliance review
- client response drafting

Use LangGraph's StateGraph to model that flow.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

class WealthState(TypedDict):
    client_profile: str
    portfolio_summary: str
    risk_notes: str
    recommendation: str

def analyze_portfolio(state: WealthState):
    prompt = f"""
    Review this client profile and summarize portfolio concerns:
    {state['client_profile']}
    """
    result = llm.invoke(prompt)
    return {"portfolio_summary": result.content}

def compliance_review(state: WealthState):
    prompt = f"""
    Given this portfolio summary, identify suitability or compliance risks:
    {state['portfolio_summary']}
    """
    result = llm.invoke(prompt)
    return {"ris_notes" if False else "risk_notes": result.content}

def draft_recommendation(state: WealthState):
    prompt = f"""
    Create a concise wealth management recommendation using:
    Summary: {state['portfolio_summary']}
    Risks: {state['risk_notes']}
    """
    result = llm.invoke(prompt)
    return {"recommendation": result.content}
```
3. Build the graph and compile it.

This is where the multi-agent system becomes an executable workflow. The output of one node becomes input to the next.

```python
workflow = StateGraph(WealthState)

workflow.add_node("analyze_portfolio", analyze_portfolio)
workflow.add_node("compliance_review", compliance_review)
workflow.add_node("draft_recommendation", draft_recommendation)

workflow.set_entry_point("analyze_portfolio")
workflow.add_edge("analyze_portfolio", "compliance_review")
workflow.add_edge("compliance_review", "draft_recommendation")
workflow.add_edge("draft_recommendation", END)

graph = workflow.compile()
```
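If the execution model is unclear: each node returns only the fields it changed, and LangGraph merges that partial update into the shared state before the next node runs. Here is a plain-Python sketch of that pattern, not LangGraph code; the function and field names are illustrative:

```python
# Sketch of the state-merge execution model: each node receives the full
# state and returns a partial update that is merged back in before the
# next node runs.

def analyze(state):
    return {"portfolio_summary": f"summary of {state['client_profile']}"}

def review(state):
    return {"risk_notes": f"risks in {state['portfolio_summary']}"}

def run_pipeline(state, nodes):
    for node in nodes:
        state = {**state, **node(state)}  # merge the partial update
    return state

result = run_pipeline({"client_profile": "retiree, 70% equities"}, [analyze, review])
print(result["risk_notes"])
```

Because nodes only return deltas, each agent stays focused on its own output field while still reading everything upstream.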
4. Add LangSmith tracing around graph execution.

LangGraph runs the workflow, but LangSmith should capture the full run for debugging and evaluation. The simplest integration is to invoke the compiled graph while tracing is enabled through environment variables.

For more explicit control over experiments or custom metadata, use LangSmith's `traceable` decorator.

```python
from langsmith import traceable

@traceable(name="wealth-management-run")
def run_wealth_graph(client_profile: str):
    return graph.invoke({
        "client_profile": client_profile,
        "portfolio_summary": "",
        "risk_notes": "",
        "recommendation": ""
    })

result = run_wealth_graph(
    "Client is 58 years old, nearing retirement, holds 70% equities, requests lower volatility."
)
print(result["recommendation"])
```
5. Attach metadata for multi-agent debugging and governance.

In real systems, you need to know which agent produced what output. Pass structured metadata and tags through the run config so you can filter traces by client segment, workflow version, or advisor team.

```python
result = graph.invoke(
    {
        "client_profile": "retirement-focused investor",
        "portfolio_summary": "",
        "risk_notes": "",
        "recommendation": "",
    },
    config={
        "run_name": "wealth-advice-workflow",
        "tags": ["wealth", "v1"],
        "metadata": {"client_segment": "retirement", "workflow_version": "1.0"},
    },
)
```
If you're using LangChain models inside LangGraph nodes, LangSmith automatically captures nested LLM calls when tracing is enabled. That gives you node-level visibility plus token usage and latency per step.
Testing the Integration
Run a simple end-to-end test with a realistic client profile. You should see a recommendation returned and traces appear in your LangSmith project.
```python
test_input = {
    "client_profile": (
        "Client age 62, wants income generation, has $2M portfolio "
        "with heavy tech exposure and moderate risk tolerance."
    ),
    "portfolio_summary": "",
    "risk_notes": "",
    "recommendation": ""
}

output = graph.invoke(test_input)

print("Portfolio Summary:", output["portfolio_summary"])
print("Risk Notes:", output["risk_notes"])
print("Recommendation:", output["recommendation"])
```
Expected output:
```
Portfolio Summary: ...
Risk Notes: ...
Recommendation: ...
```
In LangSmith, you should see:
- one top-level run for the graph invocation
- child runs for each node
- nested LLM calls inside each node
- input/output payloads for every step
Real-World Use Cases
- Advisor copilot
  - Intake client goals
  - Analyze holdings against policy constraints
  - Draft compliant recommendations for human review
- Portfolio monitoring system
  - Run periodic market-sensitive checks across accounts
  - Route alerts through separate agents for risk and suitability review
  - Track every decision path in LangSmith for auditability
- Client servicing workflow
  - Handle retirement planning questions
  - Pull context from CRM-like state stored in the graph
  - Generate personalized responses with full trace history for QA
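The alert-routing step in the monitoring use case maps to a LangGraph conditional edge: a router function inspects the state and names the next node. A plain-Python sketch of that decision logic, with illustrative field names and thresholds (in a real graph this function would be passed to `add_conditional_edges`):

```python
# Sketch of an alert router: inspects the shared state and returns the
# name of the next agent. Field names and the 10% threshold are
# illustrative assumptions, not part of the workflow above.

def route_alert(state):
    """Pick the next review agent based on what the alert concerns."""
    if state.get("drawdown_pct", 0) > 10:
        return "risk_review"          # market move: send to the risk agent
    if state.get("profile_changed"):
        return "suitability_review"   # client change: check suitability
    return "no_action"

print(route_alert({"drawdown_pct": 15}))      # routes to risk review
print(route_alert({"profile_changed": True})) # routes to suitability review
print(route_alert({}))                        # nothing to do
```

Because the router returns node names, every branch taken shows up as its own child run in LangSmith, which is what makes the decision path auditable.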
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit