How to Integrate LangGraph with LangSmith for Pension Fund AI Agents
Combining LangGraph with LangSmith gives you a controlled way to build and observe AI agents that handle pension fund retirement workflows, member support, and compliance-heavy decision paths. LangGraph handles the stateful orchestration; LangSmith gives you tracing, debugging, and evaluation so you can see exactly how an agent reached a recommendation or escalated a case.
Prerequisites
- Python 3.10+
- A LangChain-compatible environment
- Installed packages: langgraph, langsmith, and langchain-openai (or another chat model provider)
- API keys configured: OPENAI_API_KEY (or your model provider key) and LANGSMITH_API_KEY
- A LangSmith project created
- Basic understanding of:
  - LangGraph state graphs
  - Runnable interfaces
  - Python async/sync execution
Install the dependencies:
pip install langgraph langsmith langchain-openai
Set environment variables:
export LANGSMITH_API_KEY="lsv2_..."
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="pension-fund-agent"
export OPENAI_API_KEY="sk-..."
Integration Steps
1. Define the agent state and enable tracing
For pension fund workflows, keep the state explicit. You want to track member profile data, request type, policy flags, and final outcome.
from typing import TypedDict, Optional

class PensionState(TypedDict):
    member_id: str
    request_type: str
    query: str
    policy_result: Optional[str]
    response: Optional[str]
LangSmith tracing is usually enabled through environment variables, but you can also wire it in code through callbacks when needed.
import os
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "pension-fund-agent"
2. Build the LangGraph workflow
Use StateGraph to define the control flow. In pension operations, this is where you encode routing rules like “benefits inquiry,” “withdrawal request,” or “escalate to human review.”
from langgraph.graph import StateGraph, END

def classify_request(state: PensionState) -> PensionState:
    query = state["query"].lower()
    if "withdraw" in query or "cash out" in query:
        state["policy_result"] = "withdrawal_review_required"
    elif "benefit" in query or "retirement" in query:
        state["policy_result"] = "benefits_info"
    else:
        state["policy_result"] = "human_escalation"
    return state

def generate_response(state: PensionState) -> PensionState:
    if state["policy_result"] == "benefits_info":
        state["response"] = (
            f"Member {state['member_id']}: Your retirement benefits depend on "
            "age, vesting status, and plan rules."
        )
    elif state["policy_result"] == "withdrawal_review_required":
        state["response"] = (
            f"Member {state['member_id']}: Withdrawal requests require compliance review."
        )
    else:
        state["response"] = (
            f"Member {state['member_id']}: This case has been escalated for human review."
        )
    return state

graph = StateGraph(PensionState)
graph.add_node("classify_request", classify_request)
graph.add_node("generate_response", generate_response)
graph.set_entry_point("classify_request")
graph.add_edge("classify_request", "generate_response")
graph.add_edge("generate_response", END)
app = graph.compile()
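As a quick local check of the routing rules (the member ID below is made up):

result = app.invoke({
    "member_id": "M-00001",  # made-up member ID for testing
    "request_type": "withdrawal",
    "query": "I want to cash out my pension.",
    "policy_result": None,
    "response": None,
})
print(result["policy_result"])  # "withdrawal_review_required"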
3. Add an LLM node and trace it with LangSmith
For production use, replace hardcoded logic with an LLM node that produces structured output. LangSmith will capture the prompt, response, latency, and errors.
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def llm_response(state: PensionState) -> PensionState:
    messages = [
        SystemMessage(content=(
            "You are a pension fund assistant. "
            "Answer conservatively and escalate ambiguous cases."
        )),
        HumanMessage(content=state["query"]),
    ]
    result = llm.invoke(messages)
    state["response"] = result.content
    return state

graph = StateGraph(PensionState)
graph.add_node("classify_request", classify_request)
graph.add_node("llm_response", llm_response)
graph.set_entry_point("classify_request")
graph.add_conditional_edges(
    "classify_request",
    lambda s: "llm_response" if s["policy_result"] == "benefits_info" else END,
)
graph.add_edge("llm_response", END)
app = graph.compile()
If tracing is enabled via environment variables, LangSmith will automatically record the run tree for each invocation.
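The node above returns free text. To get the structured output mentioned earlier, one option is a minimal sketch using with_structured_output with a Pydantic schema. RoutingDecision and llm_classify are illustrative names, not part of the graph above:

from pydantic import BaseModel, Field

class RoutingDecision(BaseModel):
    # Illustrative schema; adjust the fields to your plan rules.
    category: str = Field(description="benefits_info, withdrawal_review_required, or human_escalation")
    rationale: str = Field(description="Short justification for the audit trail")

structured_llm = llm.with_structured_output(RoutingDecision)

def llm_classify(state: PensionState) -> PensionState:
    decision = structured_llm.invoke(
        f"Classify this pension request: {state['query']}"
    )
    state["policy_result"] = decision.category
    return state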
4. Attach custom metadata for auditability
Pension systems need audit trails. Use metadata to tag runs with member IDs, request categories, and compliance context.
result = app.invoke(
    {
        "member_id": "M-10293",
        "request_type": "benefits",
        "query": "When can I retire and what benefits do I qualify for?",
        "policy_result": None,
        "response": None,
    },
    config={
        "metadata": {
            "system": "pension-fund-agent",
            "channel": "web",
            "region": "EU",
            "case_type": "benefits_inquiry",
        }
    },
)
print(result["response"])
That metadata shows up in LangSmith traces and helps you filter by region, product line, or escalation type.
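You can also pull runs back out with the LangSmith SDK for a manual audit pass. A minimal sketch; in current SDK versions the metadata sits under run.extra, but verify against your version's docs:

from langsmith import Client

client = Client()

# List recent runs from the project and surface the audit metadata.
for run in client.list_runs(project_name="pension-fund-agent", limit=10):
    metadata = (run.extra or {}).get("metadata", {})
    print(run.name, run.status, metadata.get("case_type"))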
5. Run evaluations on captured traces
Once the agent is live, use LangSmith datasets and evaluators to compare prompt versions or routing changes. This matters when policy text changes or you update benefit logic.
from langsmith import Client

client = Client()
dataset_name = "pension-benefits-checks"

# Example: create the dataset if it doesn't exist yet.
try:
    client.create_dataset(dataset_name)
except Exception:
    pass

client.create_example(
    inputs={"query": "Can I withdraw my pension early?"},
    outputs={"expected_category": "withdrawal_review_required"},
    dataset_name=dataset_name,
)
You can then run experiments against that dataset from your CI pipeline or staging environment.
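For example, a minimal evaluation sketch using langsmith.evaluate, with a custom evaluator that scores the routing decision. run_graph and correct_category are illustrative names:

from langsmith import evaluate

def run_graph(inputs: dict) -> dict:
    # Wrap the compiled graph so it accepts the dataset's input shape.
    return app.invoke({
        "member_id": "M-EVAL",  # placeholder ID for evaluation runs
        "request_type": "unknown",
        "query": inputs["query"],
        "policy_result": None,
        "response": None,
    })

def correct_category(run, example):
    # Compare the graph's routing decision to the labeled category.
    predicted = (run.outputs or {}).get("policy_result")
    expected = example.outputs["expected_category"]
    return {"key": "correct_category", "score": int(predicted == expected)}

evaluate(
    run_graph,
    data=dataset_name,
    evaluators=[correct_category],
    experiment_prefix="routing-check",
)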
Testing the Integration
Use a simple invocation to verify both orchestration and tracing are working.
test_input = {
    "member_id": "M-77881",
    "request_type": "retirement",
    "query": "I want to know my retirement benefits at age 60.",
    "policy_result": None,
    "response": None,
}

result = app.invoke(
    test_input,
    config={
        "metadata": {
            "system": "pension-fund-agent",
            "test_run": True,
        }
    },
)
print(result)
Expected output (from the rule-based graph compiled in step 2; the step 3 graph routes benefits queries to the LLM, so its response text will vary):
{
    'member_id': 'M-77881',
    'request_type': 'retirement',
    'query': 'I want to know my retirement benefits at age 60.',
    'policy_result': 'benefits_info',
    'response': 'Member M-77881: Your retirement benefits depend on age, vesting status, and plan rules.'
}
In LangSmith, you should see a trace for the graph run with child spans for each node execution.
Real-World Use Cases
Member service triage
- Route pension questions into benefits lookup, contribution history, withdrawal review, or human escalation (see the routing sketch after this list).
- Use LangSmith traces to inspect failed classifications and prompt regressions.

Compliance-aware document assistants
- Build agents that summarize pension policy documents while logging every step for audit review.
- Store evaluation results in LangSmith before promoting changes to production.

Advisor support workflows
- Create internal copilots that help advisors answer plan-specific questions without exposing unsafe actions.
- Trace every recommendation path so compliance teams can review edge cases quickly.
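For the triage use case, the two-way conditional edge from step 3 extends naturally to a multi-way router. A minimal sketch; the target node names are hypothetical:

def route(state: PensionState) -> str:
    # The classifier's decision doubles as the routing key.
    return state["policy_result"]

graph.add_conditional_edges(
    "classify_request",
    route,
    {
        "benefits_info": "benefits_lookup",                 # hypothetical node
        "withdrawal_review_required": "withdrawal_review",  # hypothetical node
        "human_escalation": "human_escalation",             # hypothetical node
    },
)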
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.