How to Integrate LangGraph with LangSmith for Pension-Fund Multi-Agent Systems

By Cyprian Aarons · Updated 2026-04-22
Tags: langgraph-for-pension-funds, langsmith, multi-agent-systems

Combining LangGraph for pension funds with LangSmith gives you a practical way to build regulated multi-agent workflows and still see exactly what happened at each step. In pension operations, that matters because you need traceability for benefit calculations, document routing, compliance checks, and human review. The value is not just orchestration; it is observability across agents handling sensitive financial decisions.

Prerequisites

  • Python 3.10+
  • langgraph
  • langsmith
  • langchain-core
  • An OpenAI-compatible model provider or other chat model supported by LangChain
  • A LangSmith account and API key
  • Environment variables configured:
    • LANGSMITH_API_KEY
    • LANGSMITH_TRACING=true
    • LANGSMITH_PROJECT=pension-multi-agent
  • Basic familiarity with:
    • StateGraph from LangGraph
    • traceable / LangSmith tracing
    • Python async or sync function definitions

Integration Steps

  1. Install the packages and configure tracing.
pip install langgraph langsmith langchain-core langchain-openai

Then set the tracing environment in Python (or export the same variables in your shell):

import os

os.environ["LANGSMITH_API_KEY"] = "lsv2_..."
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "pension-multi-agent"
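If any of these variables are unset, runs simply won't appear in LangSmith, with no error. A small sanity check catches that before you invoke anything; this helper is a sketch, and the variable names match the prerequisites list above:

```python
import os

# Environment variables LangSmith tracing relies on (from the prerequisites above)
REQUIRED_VARS = ("LANGSMITH_API_KEY", "LANGSMITH_TRACING", "LANGSMITH_PROJECT")

def missing_tracing_vars() -> list[str]:
    """Return any tracing-related environment variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

# Warn early rather than discover later that no traces were recorded
problems = missing_tracing_vars()
if problems:
    print(f"Warning: LangSmith tracing will be silent, missing: {problems}")
```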
  2. Define the shared state for your pension workflow.

For pension systems, keep state explicit. That makes audits easier when one agent checks eligibility and another drafts a response.

from typing import TypedDict, List, Optional

class PensionState(TypedDict):
    member_id: str
    request_type: str
    documents: List[str]
    eligibility_result: Optional[str]
    compliance_result: Optional[str]
    final_response: Optional[str]
  3. Build the LangGraph workflow with multiple agent nodes.

Here we create three nodes: an eligibility reviewer, a compliance reviewer, and a response drafter. In a real pension fund setup, these would call internal tools, policy engines, or RAG layers.

from langgraph.graph import StateGraph, END
from langchain_core.runnables import RunnableLambda

def eligibility_agent(state: PensionState) -> PensionState:
    docs = state["documents"]
    if "id_proof" in docs and "employment_history" in docs:
        state["eligibility_result"] = "Eligible based on submitted documents."
    else:
        state["eligibility_result"] = "Not eligible yet: missing required documents."
    return state

def compliance_agent(state: PensionState) -> PensionState:
    if "Eligible" in (state.get("eligibility_result") or ""):
        state["compliance_result"] = "Compliant for further processing."
    else:
        state["compliance_result"] = "Compliance blocked pending eligibility."
    return state

def response_agent(state: PensionState) -> PensionState:
    state["final_response"] = (
        f"Eligibility: {state['eligibility_result']} | "
        f"Compliance: {state['compliance_result']}"
    )
    return state

graph = StateGraph(PensionState)
graph.add_node("eligibility", RunnableLambda(eligibility_agent))
graph.add_node("compliance", RunnableLambda(compliance_agent))
graph.add_node("response", RunnableLambda(response_agent))

graph.set_entry_point("eligibility")
graph.add_edge("eligibility", "compliance")
graph.add_edge("compliance", "response")
graph.add_edge("response", END)

app = graph.compile()
  4. Add LangSmith tracing to the graph execution.

LangGraph emits runs that LangSmith can capture automatically when tracing is enabled. If you want finer control, wrap specific functions with @traceable.

from langsmith import traceable

@traceable(name="pension_request_router")
def run_pension_workflow(member_id: str, request_type: str, documents: list[str]):
    initial_state: PensionState = {
        "member_id": member_id,
        "request_type": request_type,
        "documents": documents,
        "eligibility_result": None,
        "compliance_result": None,
        "final_response": None,
    }
    return app.invoke(initial_state)

result = run_pension_workflow(
    member_id="M-100245",
    request_type="benefit_withdrawal",
    documents=["id_proof", "employment_history", "tax_form"]
)

print(result["final_response"])
  5. Connect sub-agents or tool calls to LangSmith traces for debugging.

This is where multi-agent systems become useful. If one agent calls a calculator or policy lookup tool, trace that tool separately so you can inspect latency and failure points in LangSmith.

from langchain_core.tools import tool

@tool
def calculate_estimated_benefit(months_of_service: int, salary: float) -> float:
    """Estimate pension benefit using a simple formula."""
    return round((months_of_service * salary) / 1000.0, 2)

@traceable(name="benefit_estimation_step")
def estimate_and_log():
    benefit = calculate_estimated_benefit.invoke({"months_of_service": 180, "salary": 42000})
    return {"estimated_benefit": benefit}

tool_result = estimate_and_log()
print(tool_result)

Testing the Integration

Run the workflow with valid pension documents and confirm both the graph output and the LangSmith trace appear in your project dashboard.

test_output = run_pension_workflow(
    member_id="M-998811",
    request_type="retirement_claim",
    documents=["id_proof", "employment_history"]
)

assert test_output["eligibility_result"] == "Eligible based on submitted documents."
assert test_output["compliance_result"] == "Compliant for further processing."
assert test_output["final_response"] is not None

print(test_output["final_response"])

Expected output:

Eligibility: Eligible based on submitted documents. | Compliance: Compliant for further processing.
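It is worth covering the negative path too. Because the eligibility rule is a plain Python function, you can unit-test it in isolation without running the whole graph; the snippet below repeats the earlier definitions so it is self-contained:

```python
from typing import List, Optional, TypedDict

class PensionState(TypedDict):
    member_id: str
    request_type: str
    documents: List[str]
    eligibility_result: Optional[str]
    compliance_result: Optional[str]
    final_response: Optional[str]

def eligibility_agent(state: PensionState) -> PensionState:
    docs = state["documents"]
    if "id_proof" in docs and "employment_history" in docs:
        state["eligibility_result"] = "Eligible based on submitted documents."
    else:
        state["eligibility_result"] = "Not eligible yet: missing required documents."
    return state

# A request missing employment_history should be rejected
incomplete: PensionState = {
    "member_id": "M-000001",
    "request_type": "benefit_withdrawal",
    "documents": ["id_proof"],  # employment_history deliberately absent
    "eligibility_result": None,
    "compliance_result": None,
    "final_response": None,
}
result = eligibility_agent(incomplete)
assert result["eligibility_result"] == "Not eligible yet: missing required documents."
```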

In LangSmith, you should see:

  • One parent trace for pension_request_router
  • Child runs for each LangGraph node
  • Tool traces if you used any @tool functions
  • Inputs/outputs at each step for audit review
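For periodic audit reviews, you can also pull those runs programmatically with the LangSmith SDK's Client.list_runs and summarize them. The fetch itself needs a valid API key, so it is shown as a comment; the summarizer is a sketch that works on any iterable of run-like dicts whose field names mirror LangSmith run attributes:

```python
# With credentials configured, runs can be fetched like this:
# from langsmith import Client
# runs = Client().list_runs(project_name="pension-multi-agent", is_root=True)

def summarize_runs(runs) -> dict:
    """Count runs and errors per run name for a quick audit overview."""
    summary: dict[str, dict[str, int]] = {}
    for run in runs:
        entry = summary.setdefault(run["name"], {"count": 0, "errors": 0})
        entry["count"] += 1
        if run.get("error"):
            entry["errors"] += 1
    return summary

# Example with stand-in data shaped like LangSmith runs
sample = [
    {"name": "pension_request_router", "error": None},
    {"name": "pension_request_router", "error": "timeout"},
    {"name": "benefit_estimation_step", "error": None},
]
print(summarize_runs(sample))
```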

Real-World Use Cases

  • Pension claim triage

    • One agent validates submitted documents.
    • Another checks policy constraints.
    • A final agent drafts a response for human approval.
  • Retirement benefit estimation

    • One agent fetches member history.
    • Another calculates projected payout.
    • LangSmith captures every calculation for auditability.
  • Compliance-first document processing

    • Route cases through eligibility, AML-style checks, and exception handling.
    • Use LangSmith traces to investigate failures without digging through logs.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
