# How to Integrate LangGraph with LangSmith for Wealth Management Startups
Combining LangGraph for wealth management with LangSmith gives you a clean way to build regulated AI workflows that are observable from day one. For startups, that matters because portfolio review, suitability checks, and client communication need both deterministic orchestration and traceability when something goes wrong.
## Prerequisites

- Python 3.10+
- A virtual environment set up with `venv`, `uv`, or `poetry`
- Installed packages:
  - `langgraph`
  - `langchain`
  - `langsmith`
  - a model provider package such as `langchain-openai`
- API keys configured:
  - `OPENAI_API_KEY`
  - `LANGSMITH_API_KEY`
- A LangSmith project name set, for example `LANGSMITH_PROJECT=wealth-management-starter`
- Basic familiarity with:
  - LangGraph state graphs
  - LangChain chat models
  - Python typing and dataclasses
## Integration Steps

### 1. Install the dependencies and configure tracing

```bash
pip install langgraph langchain langsmith langchain-openai

export OPENAI_API_KEY="your-openai-key"
export LANGSMITH_API_KEY="your-langsmith-key"
export LANGSMITH_TRACING=true
export LANGSMITH_PROJECT="wealth-management-starter"
```

LangSmith automatically captures traces once tracing is enabled through environment variables. That gives you request-level visibility without wiring custom logging everywhere.
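If you prefer configuring tracing in code rather than in the shell, the same variables can be set with `os.environ`. This is a minimal sketch: the variable names match the exports above, and the key values are placeholders you must replace.

```python
import os

# Configure tracing before any LangChain/LangGraph objects are created,
# since the SDKs read these variables when calls are made.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "wealth-management-starter"
# Placeholders: replace with real keys, or export them in the shell instead.
os.environ.setdefault("LANGSMITH_API_KEY", "your-langsmith-key")
os.environ.setdefault("OPENAI_API_KEY", "your-openai-key")

print(os.environ["LANGSMITH_PROJECT"])
```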
### 2. Define the graph state and create the agent nodes

For wealth management, keep the state explicit. You want inputs like risk tolerance, time horizon, portfolio allocation, and compliance notes to move through the graph in a predictable way.

```python
from typing import Optional, TypedDict

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph


class WealthState(TypedDict):
    client_goal: str
    risk_profile: str
    portfolio_summary: str
    recommendation: Optional[str]
    compliance_note: Optional[str]


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def analyze_portfolio(state: WealthState) -> dict:
    prompt = f"""
    You are a wealth management assistant.
    Client goal: {state["client_goal"]}
    Risk profile: {state["risk_profile"]}
    Portfolio summary: {state["portfolio_summary"]}
    Return a concise recommendation.
    """
    response = llm.invoke([HumanMessage(content=prompt)])
    return {"recommendation": response.content}


def compliance_check(state: WealthState) -> dict:
    prompt = f"""
    Review this recommendation for suitability and compliance risk.
    Recommendation: {state["recommendation"]}
    Return one short compliance note.
    """
    response = llm.invoke([HumanMessage(content=prompt)])
    return {"compliance_note": response.content}
```
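Because `WealthState` is a plain `TypedDict`, nothing stops a caller from omitting a field at runtime. A small guard can fail fast before the graph runs; this is a hypothetical stdlib-only helper, not part of LangGraph:

```python
REQUIRED_FIELDS = ("client_goal", "risk_profile", "portfolio_summary")


def validate_input(state: dict) -> dict:
    """Raise early if a required input field is missing or blank."""
    missing = [f for f in REQUIRED_FIELDS if not state.get(f)]
    if missing:
        raise ValueError(f"missing required state fields: {missing}")
    # Ensure the output slots exist so downstream nodes can rely on the keys.
    return {"recommendation": None, "compliance_note": None, **state}


checked = validate_input({
    "client_goal": "Retire in 20 years",
    "risk_profile": "moderate",
    "portfolio_summary": "60% equities, 30% bonds, 10% cash",
})
print(sorted(checked))
```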
### 3. Build the LangGraph workflow

This is where orchestration matters. The graph guarantees that analysis happens before compliance review, which is exactly the ordering a startup building financial workflows needs.

```python
workflow = StateGraph(WealthState)
workflow.add_node("analyze_portfolio", analyze_portfolio)
workflow.add_node("compliance_check", compliance_check)

workflow.set_entry_point("analyze_portfolio")
workflow.add_edge("analyze_portfolio", "compliance_check")
workflow.add_edge("compliance_check", END)

app = workflow.compile()
```
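The ordering guarantee is easy to sanity-check without calling a model. In this sketch, stub functions stand in for the LLM-backed nodes, and a plain loop reproduces how LangGraph merges each node's returned dict into the shared state for a linear graph like this one:

```python
def analyze_portfolio_stub(state: dict) -> dict:
    # Stand-in for the model call: derive a recommendation from the inputs.
    return {"recommendation": f"rebalance for {state['risk_profile']} risk"}


def compliance_check_stub(state: dict) -> dict:
    # The graph edge guarantees a recommendation exists by this point.
    assert state["recommendation"] is not None
    return {"compliance_note": f"reviewed: {state['recommendation']}"}


state = {
    "client_goal": "Retire in 20 years",
    "risk_profile": "moderate",
    "portfolio_summary": "60% equities, 30% bonds, 10% cash",
    "recommendation": None,
    "compliance_note": None,
}

# Run the nodes in graph order, merging each partial update into the state.
for node in (analyze_portfolio_stub, compliance_check_stub):
    state.update(node(state))

print(state["compliance_note"])
```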
### 4. Connect LangGraph execution to LangSmith tracing

If tracing is enabled via environment variables, every model call inside the graph is captured by LangSmith automatically. If you want explicit control over metadata like customer segment or environment, pass it through the runnable config.

```python
from langchain_core.runnables import RunnableConfig

config = RunnableConfig(
    tags=["wealth-management", "startup-mvp"],
    metadata={
        "team": "advisory-platform",
        "environment": "staging",
        "product": "portfolio-review",
    },
)

result = app.invoke(
    {
        "client_goal": "Retire in 20 years with moderate growth",
        "risk_profile": "moderate",
        "portfolio_summary": "60% equities, 30% bonds, 10% cash",
        "recommendation": None,
        "compliance_note": None,
    },
    config=config,
)
```

Those tags and metadata show up in LangSmith traces. In practice, that lets you filter runs by product area, compare prompt versions, and debug bad outputs without digging through app logs.
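Since `RunnableConfig` is a `TypedDict`, a plain dict works in its place, which makes it easy to centralize tagging conventions in one helper. This is a hypothetical stdlib-only sketch; the tag and metadata names mirror the example above:

```python
def make_trace_config(environment: str, product: str, extra_tags=()) -> dict:
    """Build a runnable-config dict with consistent tags and metadata."""
    return {
        "tags": ["wealth-management", *extra_tags],
        "metadata": {
            "team": "advisory-platform",
            "environment": environment,
            "product": product,
        },
    }


config = make_trace_config("staging", "portfolio-review", extra_tags=["startup-mvp"])
print(config["tags"])
```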
### 5. Add a feedback hook for startup-grade observability

LangSmith is most useful when you pair traces with evaluation data. Store the run output and attach human review later if your advisory team wants to score recommendation quality.

```python
from langsmith import Client

client = Client()

# Create the evaluation dataset once; this call raises if it already exists.
client.create_dataset(
    dataset_name="wealth-recommendations-eval",
    description="Evaluation set for portfolio recommendations",
)

# Example of logging run feedback after execution.
run_feedback = {
    "score": 0.9,
    "comment": "Recommendation matched the moderate-risk profile.",
}

# In production you'd attach this to a run_id from the LangSmith UI or API.
print(run_feedback)
```
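Once you have a concrete `run_id` (from the LangSmith UI, or by listing runs through the client), feedback is attached with `client.create_feedback`. A sketch of the payload shape, with the network call left as a comment so the snippet runs without credentials; the feedback key name is an assumption, pick whatever your team standardizes on:

```python
import uuid

# Hypothetical feedback key; use a name your team standardizes on.
feedback = {
    "key": "recommendation_quality",
    "score": 0.9,
    "comment": "Recommendation matched the moderate-risk profile.",
}

run_id = uuid.uuid4()  # placeholder: use a real run_id from LangSmith

# With credentials configured, you would attach it to the concrete run:
#   from langsmith import Client
#   Client().create_feedback(run_id, **feedback)

print(feedback["key"])
```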
## Testing the Integration

Run the graph end to end and confirm that both the recommendation and the compliance note are returned.

```python
test_result = app.invoke(
    {
        "client_goal": "Generate income within 5 years",
        "risk_profile": "low",
        "portfolio_summary": "40% equities, 50% bonds, 10% cash",
        "recommendation": None,
        "compliance_note": None,
    }
)

print("Recommendation:", test_result["recommendation"])
print("Compliance:", test_result["compliance_note"])
```

Expected output:

```text
Recommendation: ...
Compliance: ...
```

In LangSmith, you should also see a trace for the graph run with child spans for each LLM call. If tracing is configured correctly, that's your confirmation that orchestration and observability are wired together.
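For CI, the same end-to-end check can be wrapped in a small assertion helper. Here it is exercised against a hard-coded sample result so it runs without API keys; in practice you would pass the return value of `app.invoke(...)`:

```python
def assert_complete(result: dict) -> None:
    """Fail loudly if either output field is missing or empty."""
    for field in ("recommendation", "compliance_note"):
        value = result.get(field)
        if not isinstance(value, str) or not value.strip():
            raise AssertionError(f"graph run produced no {field}")


# Sample stand-in for a real graph result.
sample = {
    "recommendation": "Shift 5% from cash into short-duration bonds.",
    "compliance_note": "Consistent with a low-risk income mandate.",
}
assert_complete(sample)
print("smoke test passed")
```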
## Real-World Use Cases

**Suitability review assistant**

- Ingest client goals and risk profiles.
- Generate an investment recommendation.
- Run a compliance gate before anything reaches an advisor dashboard.

**Advisor copilot with audit trails**

- Summarize portfolios.
- Draft client-facing explanations.
- Use LangSmith traces to audit prompt behavior across releases.

**Policy-aware onboarding flows**

- Collect KYC-style inputs.
- Route low-confidence cases to human review.
- Track failure modes in LangSmith so your startup can tighten prompts and rules fast.
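The "route low-confidence cases to human review" pattern above can be sketched as a plain threshold gate. The scoring and cutoff here are hypothetical; in a real graph this logic would live behind a conditional edge:

```python
REVIEW_THRESHOLD = 0.75  # assumed cutoff; tune to your risk appetite


def route_case(confidence: float) -> str:
    """Send low-confidence outputs to a human queue, the rest straight through."""
    return "auto_approve" if confidence >= REVIEW_THRESHOLD else "human_review"


print(route_case(0.9), route_case(0.4))
```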
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies

*By Cyprian Aarons, AI Consultant at Topiax.*

Want the complete 8-step roadmap? Grab the free AI Agent Starter Kit: architecture templates, compliance checklists, and a 7-email deep-dive course.