How to Integrate LangGraph with LangSmith for Production Wealth Management AI
Combining LangGraph with LangSmith gives you two things most production AI systems need: deterministic workflow control and observability. In wealth management, that means you can route client intents through compliant steps, track portfolio-analysis decisions, and inspect every agent action when something looks off.
The practical win is simple: LangGraph handles the orchestration of your advisory workflow, while LangSmith records traces, prompts, tool calls, and failures so you can debug and improve the system in production.
Prerequisites
- Python 3.10+
- A LangChain-compatible environment
- Installed packages:
  - `langgraph`
  - `langsmith`
  - `langchain`
  - `langchain-openai` (or another chat model provider)
- API keys configured:
  - `OPENAI_API_KEY` (or your model provider's key)
  - `LANGCHAIN_API_KEY`
  - `LANGCHAIN_TRACING_V2=true`
  - `LANGCHAIN_PROJECT=wealth-management-agent`
- A clear wealth management workflow:
  - intake
  - risk profiling
  - portfolio recommendation
  - compliance check
- Access to a test environment before production rollout
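Before wiring anything up, it helps to see the four workflow stages above as a plain pipeline. This is a minimal sketch with placeholder stage logic, useful only to make the stage order concrete; the real scoring and recommendation logic comes in the integration steps:

```python
# Minimal sketch of the four-stage workflow as plain functions.
# All stage bodies are placeholders; LangGraph later replaces this
# hand-rolled loop with an explicit, inspectable state machine.

def intake(state: dict) -> dict:
    return {**state, "client_profile": state.get("client_profile", {})}

def risk_profiling(state: dict) -> dict:
    return {**state, "risk_score": 50}  # placeholder score

def recommend(state: dict) -> dict:
    return {**state, "recommendation": "balanced 60/40 allocation"}  # placeholder

def compliance(state: dict) -> dict:
    return {**state, "approved": "guaranteed" not in state["recommendation"]}

STAGES = [intake, risk_profiling, recommend, compliance]

def run_workflow(state: dict) -> dict:
    # Run each stage in order, threading the shared state through.
    for stage in STAGES:
        state = stage(state)
    return state
```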
Integration Steps
1. Install the packages and configure tracing

Start by installing the dependencies and enabling LangSmith tracing through environment variables.

```bash
pip install langgraph langsmith langchain langchain-openai

export OPENAI_API_KEY="your-openai-key"
export LANGCHAIN_API_KEY="your-langsmith-key"
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_PROJECT="wealth-management-agent"
```

In production, set these in your secret manager or deployment platform, not in shell history.
2. Build a LangGraph workflow for wealth management

Use LangGraph to define a simple state machine for client intake and recommendation. This example keeps the flow explicit, which is what you want in regulated workflows.

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END


class WealthState(TypedDict):
    client_profile: dict
    risk_score: int
    recommendation: str


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def assess_risk(state: WealthState):
    profile = state["client_profile"]
    age = profile.get("age", 0)
    income = profile.get("income", 0)
    # Example deterministic scoring logic
    score = 30
    if age < 35:
        score += 20
    if income > 150000:
        score += 15
    return {"risk_score": min(score, 100)}


def generate_recommendation(state: WealthState):
    prompt = f"""
    Client profile: {state['client_profile']}
    Risk score: {state['risk_score']}
    Return a concise portfolio recommendation suitable for wealth management.
    """
    response = llm.invoke(prompt)
    return {"recommendation": response.content}


graph = StateGraph(WealthState)
graph.add_node("assess_risk", assess_risk)
graph.add_node("generate_recommendation", generate_recommendation)
graph.add_edge(START, "assess_risk")
graph.add_edge("assess_risk", "generate_recommendation")
graph.add_edge("generate_recommendation", END)
app = graph.compile()
```
3. Add LangSmith tracing to capture every run

Once tracing is enabled with environment variables, LangChain/LangGraph calls are automatically recorded in LangSmith. If you want explicit control over metadata for audits, attach tags and run metadata at invocation time.

```python
result = app.invoke(
    {
        "client_profile": {
            "name": "Amina",
            "age": 42,
            "income": 180000,
            "assets_under_management": 750000,
        },
        "risk_score": 0,
        "recommendation": "",
    },
    config={
        "tags": ["wealth-management", "prod"],
        "metadata": {
            "tenant_id": "bank_001",
            "workflow_version": "v1.0",
            "case_id": "wm-2026-00091",
        },
    },
)
print(result)
```

Those tags and metadata show up in LangSmith traces, which makes it much easier to filter by tenant, case ID, or workflow version when reviewing production behavior.
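If several call sites need the same audit fields, a small helper keeps tags and metadata consistent. `build_audit_config` is a hypothetical convenience function, not part of the LangSmith API; it just assembles the same `config` dict shown above:

```python
def build_audit_config(tenant_id: str, case_id: str,
                       workflow_version: str = "v1.0",
                       env: str = "prod") -> dict:
    """Assemble the config dict passed to app.invoke().

    Hypothetical helper: centralizes audit tags/metadata so every
    call site attaches the same fields.
    """
    return {
        "tags": ["wealth-management", env],
        "metadata": {
            "tenant_id": tenant_id,
            "workflow_version": workflow_version,
            "case_id": case_id,
        },
    }
```

With this in place, an invocation becomes `app.invoke(state, config=build_audit_config("bank_001", "wm-2026-00091"))`.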
4. Instrument custom steps with LangSmith run context

For more granular observability, use LangSmith's run context inside custom functions. This is useful when you have non-LangChain logic like compliance rules or external portfolio engines.

```python
from langsmith.run_helpers import traceable


@traceable(name="compliance_check")
def compliance_check(recommendation: str) -> dict:
    blocked_terms = ["guaranteed returns", "risk-free"]
    violations = [term for term in blocked_terms if term in recommendation.lower()]
    return {
        "approved": len(violations) == 0,
        "violations": violations,
    }


@traceable(name="finalize_advice")
def finalize_advice(state):
    check = compliance_check(state["recommendation"])
    if not check["approved"]:
        return {"recommendation": "Recommendation blocked by compliance review."}
    return {"recommendation": state["recommendation"]}
```
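Because the blocked-terms check is pure Python, you can also unit test it without any tracing or network access. This sketch repeats the same logic without the `@traceable` decorator so it runs in a plain test suite:

```python
def check_blocked_terms(recommendation: str) -> dict:
    # Same logic as compliance_check above, minus the @traceable
    # decorator, so it can run in an offline unit test.
    blocked_terms = ["guaranteed returns", "risk-free"]
    text = recommendation.lower()
    violations = [term for term in blocked_terms if term in text]
    return {"approved": not violations, "violations": violations}
```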
5. Wire the traced function into the graph

Add the compliance step into your LangGraph flow so both orchestration and observability stay connected end to end.

```python
def compliance_node(state: WealthState):
    check = compliance_check(state["recommendation"])
    if not check["approved"]:
        return {"recommendation": f"Blocked: {check['violations']}"}
    return {"recommendation": state["recommendation"]}


graph = StateGraph(WealthState)
graph.add_node("assess_risk", assess_risk)
graph.add_node("generate_recommendation", generate_recommendation)
graph.add_node("compliance_node", compliance_node)
graph.add_edge(START, "assess_risk")
graph.add_edge("assess_risk", "generate_recommendation")
graph.add_edge("generate_recommendation", "compliance_node")
graph.add_edge("compliance_node", END)
app = graph.compile()
```
Testing the Integration
Run a single end-to-end invocation and confirm that both the workflow result and the trace appear in LangSmith.
```python
test_input = {
    "client_profile": {
        "name": "Jordan",
        "age": 31,
        "income": 220000,
        "assets_under_management": 1200000,
    },
    "risk_score": 0,
    "recommendation": "",
}

output = app.invoke(
    test_input,
    config={
        "tags": ["smoke-test"],
        "metadata": {"case_id": "test-001"},
    },
)
print(output)
```
Expected output:
```python
{
    'client_profile': {...},
    'risk_score': 65,
    'recommendation': '...portfolio recommendation text...'
}
```
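The `risk_score` of 65 is deterministic, so you can verify it without a model call. This sketch mirrors the scoring rules from `assess_risk`:

```python
def expected_risk_score(age: int, income: int) -> int:
    # Mirrors the deterministic branches in assess_risk:
    # base 30, +20 under age 35, +15 above 150k income, capped at 100.
    score = 30
    if age < 35:
        score += 20
    if income > 150000:
        score += 15
    return min(score, 100)
```

For Jordan (age 31, income 220000) this yields 65, matching the expected output above; Amina from the earlier step (age 42, income 180000) would score 45.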
In LangSmith, you should see:
- one parent trace for the full LangGraph run
- child traces for each node execution
- model call traces for `ChatOpenAI.invoke(...)`
- metadata such as `case_id=test-001`
Real-World Use Cases
- Client onboarding assistant: Route new clients through intake questions, risk scoring, suitability checks, and document generation with a full audit trail in LangSmith.
- Portfolio review copilot: Build an agent that reviews holdings against policy constraints, explains changes to advisors, and logs every recommendation path for review.
- Compliance-aware advisory workflow: Use LangGraph to enforce step order and approval gates while using LangSmith to monitor prompt regressions, latency spikes, and policy violations across tenants.
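An approval gate like the one mentioned in the last use case reduces to a routing decision over state. In LangGraph you would wire a function like this through `add_conditional_edges`, though the node names and the risk threshold here are illustrative, not prescribed:

```python
def route_after_compliance(state: dict) -> str:
    # Returns the name of the next node. With LangGraph,
    # add_conditional_edges maps these strings onto actual graph nodes;
    # the node names and the 80-point threshold are illustrative.
    if not state.get("approved", False):
        return "human_review"            # blocked advice escalates to a person
    if state.get("risk_score", 0) > 80:
        return "senior_advisor_signoff"  # high-risk advice needs a second pair of eyes
    return "send_to_client"
```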
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit