How to Integrate LangGraph for wealth management with LangSmith for AI agents

By Cyprian Aarons · Updated 2026-04-22

Combining LangGraph for wealth management with LangSmith gives you something useful in production: agent workflows that can reason over portfolio, suitability, and client-service tasks while still being observable, testable, and debuggable. In practice, this means you can trace every branch in a wealth-management agent, inspect failures, and tighten prompts or tool calls without guessing.

Prerequisites

  • Python 3.10+
  • A LangGraph project set up for your wealth-management agent workflow
  • A LangSmith account and API key
  • Environment variables configured:
    • LANGCHAIN_API_KEY
    • LANGCHAIN_TRACING_V2=true
    • LANGCHAIN_PROJECT=wealth-management-agent
  • Installed packages:
    • langgraph
    • langchain
    • langsmith
    • your LLM provider package, such as langchain-openai
  • Access to any internal tools your agent needs:
    • portfolio lookup
    • suitability checks
    • policy/compliance validation
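
If any of these packages are missing, they can be installed in one step (package names as published on PyPI):

```shell
pip install langgraph langchain langsmith langchain-openai
```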

Integration Steps

  1. Set up LangSmith tracing for the agent runtime.

LangSmith works best when tracing is enabled before your graph executes. For wealth management systems, that gives you a full audit trail across tool calls, routing decisions, and model outputs.

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "wealth-management-agent"
# LANGCHAIN_API_KEY must already be set; assigning os.getenv(...) back into
# os.environ would raise a TypeError if the variable is missing.
if not os.getenv("LANGCHAIN_API_KEY"):
    raise RuntimeError("LANGCHAIN_API_KEY is not set")

If you are running in a container or Kubernetes job, inject these as secrets instead of hardcoding them.

  2. Build your LangGraph workflow for the wealth-management use case.

This example uses a simple graph with a decision node and a tool node. In a real system, the tools would connect to portfolio data, CRM records, and compliance services.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    query: str
    risk_flag: bool
    response: str

def assess_risk(state: AgentState) -> AgentState:
    query = state["query"].lower()
    risk_flag = any(term in query for term in ["sell all", "margin", "leverage", "high risk"])
    return {**state, "risk_flag": risk_flag}

def generate_response(state: AgentState) -> AgentState:
    if state["risk_flag"]:
        response = "This request needs suitability review before execution."
    else:
        response = "Request accepted for further processing."
    return {**state, "response": response}

graph = StateGraph(AgentState)
graph.add_node("assess_risk", assess_risk)
graph.add_node("generate_response", generate_response)

graph.add_edge(START, "assess_risk")
graph.add_edge("assess_risk", "generate_response")
graph.add_edge("generate_response", END)

app = graph.compile()

  3. Add LangSmith tracing around graph execution.

LangGraph will emit traces when LangSmith is enabled through environment variables. If you want explicit run metadata for audits or segmentation by desk/client type, use LangSmith’s client directly.

import uuid

from langsmith import Client

client = Client()

# In recent SDK versions create_run does not reliably return the run object,
# so generate the run ID up front and reuse it for the update.
run_id = uuid.uuid4()

client.create_run(
    id=run_id,
    name="wealth-agent-check",
    run_type="chain",
    inputs={"query": "Should I sell all my tech holdings?"},
    project_name="wealth-management-agent",
)

result = app.invoke({"query": "Should I sell all my tech holdings?", "risk_flag": False, "response": ""})

client.update_run(
    run_id=run_id,
    outputs=result,
)
print(result)

This pattern is useful when you want to attach business metadata like advisor ID, client segment, or jurisdiction to a trace.
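
One way to do that is to pass the metadata through `create_run`'s `extra` field. A minimal sketch, assuming the advisor ID, client segment, and jurisdiction are available at request time (the field names here are illustrative, not a fixed schema):

```python
import uuid

def build_run_metadata(advisor_id: str, client_segment: str, jurisdiction: str) -> dict:
    """Assemble business metadata for a LangSmith run (illustrative field names)."""
    return {
        "metadata": {
            "advisor_id": advisor_id,
            "client_segment": client_segment,
            "jurisdiction": jurisdiction,
        }
    }

run_id = uuid.uuid4()
extra = build_run_metadata("adv-1042", "private-client", "US")

# Attaching the metadata at run creation (requires a configured client):
# client.create_run(
#     id=run_id,
#     name="wealth-agent-check",
#     run_type="chain",
#     inputs={"query": "Should I sell all my tech holdings?"},
#     project_name="wealth-management-agent",
#     extra=extra,
# )
```

Filtering traces by these fields in the LangSmith UI then lets you slice failures by desk or client segment.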

  4. Wrap model calls so they show up cleanly in traces.

If your graph uses an LLM node, build it with standard LangChain integrations. LangSmith will capture prompt inputs, model outputs, and token usage when tracing is enabled.

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def llm_decision(state: AgentState) -> AgentState:
    messages = [
        SystemMessage(content="You are a wealth management assistant. Flag risky requests."),
        HumanMessage(content=state["query"]),
    ]
    output = llm.invoke(messages)
    return {**state, "response": output.content}

Use this when the routing logic is more complex than keyword matching. For example, suitability classification or policy-based explanations usually belong here.
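
When the graph needs to branch on the outcome, LangGraph's `add_conditional_edges` can route based on state. A minimal sketch, where the routing function itself is plain Python and the node names (`suitability_review`) are hypothetical:

```python
def route_after_risk(state: dict) -> str:
    """Pick the next node based on the risk flag set upstream."""
    return "suitability_review" if state.get("risk_flag") else "generate_response"

# Wiring it into the graph built earlier (node names are illustrative;
# a suitability_review node would need to be added first):
# graph.add_conditional_edges(
#     "assess_risk",
#     route_after_risk,
#     {"suitability_review": "suitability_review", "generate_response": "generate_response"},
# )
```

Keeping the routing function free of framework imports also makes it trivial to unit-test.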

  5. Add structured evaluation with LangSmith datasets.

Once the graph is running, create test cases for common wealth-management scenarios and evaluate them through LangSmith. That gives you regression tests for changes in prompts or graph structure.

from langsmith import Client

client = Client()

dataset = client.create_dataset(
    dataset_name="wealth-agent-regression",
    description="Wealth management agent test cases",
)

client.create_example(
    inputs={"query": "Can I move everything into crypto?"},
    outputs={"expected_risk_flag": True},
    dataset_id=dataset.id,
)

client.create_example(
    inputs={"query": "What is my current allocation?"},
    outputs={"expected_risk_flag": False},
    dataset_id=dataset.id,
)

Testing the Integration

Run a single invocation and confirm the trace appears in LangSmith under your project name.

result = app.invoke({
    "query": "Should I sell all my tech holdings?",
    "risk_flag": False,
    "response": ""
})

print(result["risk_flag"])
print(result["response"])

Expected output:

True
This request needs suitability review before execution.

Then check LangSmith:

  • Project: wealth-management-agent
  • Trace should include the graph run
  • You should see node-level execution for assess_risk and generate_response

Real-World Use Cases

  • Suitability triage
    • Route high-risk client requests into compliance review before any trade instruction is generated.
  • Advisor copilot workflows
    • Let an internal agent summarize portfolio drift, draft client notes, and log every step for review.
  • Regression testing for policy changes
    • Re-run archived cases after prompt or graph updates to catch behavior changes before deployment.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

