How to Integrate LangGraph for pension funds with LangSmith for startups
Combining LangGraph for pension funds with LangSmith gives you a practical setup for regulated agent workflows: deterministic orchestration on one side, observability and trace debugging on the other. For startups building pension-facing assistants, that means you can route sensitive retirement workflows through controlled graph states while still seeing every tool call, prompt, and failure in LangSmith.
Prerequisites
- Python 3.10+
- langgraph
- langchain
- langsmith
- A LangSmith account and API key
- Access to your model provider API key, such as OpenAI or Anthropic
- A basic LangGraph workflow already defined for your pension use case
- Environment variables configured:
  - LANGSMITH_API_KEY
  - LANGSMITH_TRACING=true
  - LANGSMITH_PROJECT=<your-project-name>
  - A model provider key like OPENAI_API_KEY
Integration Steps
1. Install the packages and set the tracing config.
pip install langgraph langchain langsmith langchain-openai
export LANGSMITH_API_KEY="lsv2_..."
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="pension-fund-agent"
export OPENAI_API_KEY="sk-..."
LangSmith tracing is enabled through environment variables. That gives you run-level visibility without changing your graph logic.
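If you prefer to keep this configuration alongside your application code during local development, the same variables can be set from Python before the first traced call. This is a minimal sketch, assuming the API keys themselves are injected by your deployment environment rather than hard-coded:

import os

# Mirror the export commands above. In a real service, load secrets from a
# secrets manager or .env file instead of writing them into the source.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "pension-fund-agent"

# LANGSMITH_API_KEY and OPENAI_API_KEY are assumed to be provided by the
# environment; fail fast if they are missing.
for key in ("LANGSMITH_API_KEY", "OPENAI_API_KEY"):
    if not os.environ.get(key):
        raise RuntimeError(f"Missing required environment variable: {key}")

Set these before constructing the model or graph so the very first run already lands in the right LangSmith project.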
2. Build a simple LangGraph workflow for a pension support task.
from typing import TypedDict

from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI


class PensionState(TypedDict):
    question: str
    answer: str


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def answer_pension_query(state: PensionState):
    prompt = (
        "You are a pension fund assistant. "
        "Answer only with compliant, concise guidance.\n\n"
        f"Question: {state['question']}"
    )
    response = llm.invoke(prompt)
    return {"answer": response.content}


# Single-node graph: the member question flows in, a compliant answer flows out.
builder = StateGraph(PensionState)
builder.add_node("answer_pension_query", answer_pension_query)
builder.set_entry_point("answer_pension_query")
builder.add_edge("answer_pension_query", END)
graph = builder.compile()
This is the core LangGraph piece. You get explicit state transitions instead of an opaque agent loop, which matters when the workflow touches retirement data or member communications.
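To see that benefit more concretely, you can split the flow into more than one node so the routing decision becomes visible state instead of hidden prompt logic. The sketch below is illustrative rather than part of the original workflow: the classify_query node and its keyword rule are stand-ins for whatever classification you actually run.

from typing import TypedDict

from langgraph.graph import StateGraph, END


class RoutedPensionState(TypedDict):
    question: str
    category: str
    answer: str


def classify_query(state: RoutedPensionState):
    # Placeholder classifier; in practice this could be a cheap LLM call or a
    # keyword rule set maintained by compliance.
    text = state["question"].lower()
    category = "transfer" if "transfer" in text else "general"
    return {"category": category}


def answer_query(state: RoutedPensionState):
    # The category is now explicit state that shows up in every trace.
    return {"answer": f"[{state['category']}] guidance goes here"}


builder = StateGraph(RoutedPensionState)
builder.add_node("classify_query", classify_query)
builder.add_node("answer_query", answer_query)
builder.set_entry_point("classify_query")
builder.add_edge("classify_query", "answer_query")
builder.add_edge("answer_query", END)
routed_graph = builder.compile()

Each node transition is recorded as its own step, so a reviewer can see exactly which category a member question was assigned before the answer was produced.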
3. Add LangSmith tracing to the graph execution path.
from langsmith import Client

# Optional: the client is not needed for tracing itself, but it gives you
# programmatic access to runs later (for example, in audits or tests).
client = Client()

result = graph.invoke(
    {"question": "Can a member transfer their pension if they are under review?"},
    config={
        "run_name": "pension-support-flow",
        "tags": ["pension", "startup", "member-support"],
        "metadata": {
            "system": "pension-agent",
            "team": "ops",
        },
    },
)
print(result["answer"])
The important part here is not just calling the graph. The config object gives LangSmith structured metadata so you can filter runs by flow, team, or regulatory context.
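Those tags also become queryable from code. As a rough sketch of how you might pull runs back out programmatically, reusing the pension-fund-agent project name from the setup step (the exact list_runs parameters can vary by SDK version, so check the LangSmith docs for yours):

from langsmith import Client

client = Client()

# Fetch recent runs from the project and filter on the tags attached above.
runs = client.list_runs(project_name="pension-fund-agent", limit=50)
for run in runs:
    if "member-support" in (run.tags or []):
        print(run.name, run.start_time)

The same filtering is available in the LangSmith UI, but a scripted version like this is handy for periodic audit exports.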
4. Trace substeps inside nodes when you need deeper visibility.
from langsmith import traceable


@traceable(name="compliance_answer_node")
def compliance_answer_node(state: PensionState):
    prompt = (
        "You are a compliance-safe pension assistant.\n"
        "Do not speculate. If policy is unclear, say escalation is required.\n\n"
        f"Question: {state['question']}"
    )
    response = llm.invoke(prompt)
    return {"answer": response.content}
Use @traceable when a node has business meaning on its own. In practice, this helps separate graph-level runs from node-level traces so you can debug failures faster.
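You can also nest traced helpers inside a node when one step hides several business-relevant decisions. The sketch below reuses the llm and PensionState from earlier; the lookup_policy_clause helper is hypothetical, a stand-in for whatever policy retrieval you actually run. Because both functions are decorated, the helper shows up as a child span under the node's trace.

from langsmith import traceable


@traceable(name="lookup_policy_clause")
def lookup_policy_clause(question: str) -> str:
    # Hypothetical policy lookup; swap in your real retrieval or rules engine.
    return "Transfers while under review require trustee approval."


@traceable(name="compliance_answer_with_lookup")
def compliance_answer_with_lookup(state: PensionState):
    clause = lookup_policy_clause(state["question"])
    prompt = (
        "You are a compliance-safe pension assistant.\n"
        f"Relevant policy: {clause}\n\n"
        f"Question: {state['question']}"
    )
    response = llm.invoke(prompt)
    return {"answer": response.content}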
5. Wire the traced node into the LangGraph workflow and invoke it again.
builder = StateGraph(PensionState)
builder.add_node("compliance_answer_node", compliance_answer_node)
builder.set_entry_point("compliance_answer_node")
builder.add_edge("compliance_answer_node", END)
graph = builder.compile()

output = graph.invoke(
    {"question": "What happens if a member requests early access?"},
    config={
        "run_name": "early-access-check",
        "tags": ["early-access", "pension-compliance"],
    },
)
print(output["answer"])
At this point, every execution shows up in LangSmith with the run name and tags you assigned. That makes it much easier to audit startup-facing pension workflows across staging and production.
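One way to keep staging and production runs from mixing is to fold the environment into the run config. A minimal sketch, assuming your deployment exposes an APP_ENV variable (the name is just an example):

import os

env = os.environ.get("APP_ENV", "staging")

output = graph.invoke(
    {"question": "What happens if a member requests early access?"},
    config={
        "run_name": f"early-access-check-{env}",
        "tags": ["early-access", "pension-compliance", env],
        "metadata": {"env": env},
    },
)

Alternatively, point LANGSMITH_PROJECT at a different project per environment so the split happens at the project level rather than the tag level.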
Testing the Integration
Run a smoke test that executes the graph and confirms the trace appears in LangSmith.
test_input = {"question": "How do I check if I am eligible for drawdown?"}

result = graph.invoke(
    test_input,
    config={
        "run_name": "smoke-test-drawdown",
        "tags": ["smoke-test"],
    },
)

assert "drawdown" in result["answer"].lower() or len(result["answer"]) > 0
print("Integration OK")
print(result["answer"])
Expected output:
Integration OK
You may be eligible for drawdown depending on your scheme rules...
If tracing is configured correctly, you should also see a new run in the LangSmith dashboard named smoke-test-drawdown.
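If you run smoke tests in CI, the same check can live in a small pytest module. This is a sketch under the assumption that the compiled graph is importable from your own package; the pension_agent module name here is made up.

import pytest

from pension_agent import graph  # hypothetical module exposing the compiled graph


@pytest.mark.parametrize(
    "question",
    [
        "How do I check if I am eligible for drawdown?",
        "Can a member transfer their pension if they are under review?",
    ],
)
def test_graph_returns_answer(question):
    result = graph.invoke(
        {"question": question},
        config={"run_name": "ci-smoke-test", "tags": ["smoke-test", "ci"]},
    )
    assert result["answer"].strip(), "Expected a non-empty answer"

Each CI run then leaves its own tagged traces in LangSmith, which doubles as a lightweight regression history.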
Real-World Use Cases
- Member support agent with auditability
  - Route pension questions through a LangGraph decision tree.
  - Use LangSmith to inspect each answer path, prompt version, and failure mode.
- Compliance escalation workflow (see the routing sketch after this list)
  - Detect high-risk queries like early withdrawals or transfer disputes.
  - Send those branches to human review while keeping full trace history in LangSmith.
- Startup ops assistant for retirement products
  - Build internal tools that answer policy questions, summarize cases, and generate next-step actions.
  - Use LangSmith tags and metadata to separate product lines, teams, and environments.
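For the compliance escalation case, LangGraph's conditional edges are the natural fit. The sketch below is illustrative: the keyword-based risk check and the node names are assumptions, and a real deployment would use a vetted classifier, but the routing structure is standard LangGraph.

from typing import TypedDict

from langgraph.graph import StateGraph, END


class EscalationState(TypedDict):
    question: str
    answer: str


HIGH_RISK_TERMS = ("early access", "early withdrawal", "transfer dispute")


def triage(state: EscalationState):
    # Normalize the question once so the router and downstream nodes see the same text.
    return {"question": state["question"].strip()}


def route_risk(state: EscalationState) -> str:
    text = state["question"].lower()
    return "escalate" if any(term in text for term in HIGH_RISK_TERMS) else "answer"


def escalate(state: EscalationState):
    return {"answer": "This request has been routed to a human reviewer."}


def answer(state: EscalationState):
    return {"answer": "Standard guidance goes here."}


builder = StateGraph(EscalationState)
builder.add_node("triage", triage)
builder.add_node("escalate", escalate)
builder.add_node("answer", answer)
builder.set_entry_point("triage")
builder.add_conditional_edges("triage", route_risk, {"escalate": "escalate", "answer": "answer"})
builder.add_edge("escalate", END)
builder.add_edge("answer", END)
escalation_graph = builder.compile()

Because tracing is already enabled, both the escalated and the standard branch show up in LangSmith with the path that was taken, which is exactly the audit trail the human-review workflow needs.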
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.