How to Integrate LangGraph for pension funds with Kubernetes for AI agents

By Cyprian Aarons · Updated 2026-04-21
Tags: langgraph-for-pension-funds, kubernetes, ai-agents

Combining LangGraph with Kubernetes gives you a clean way to run regulated pension-fund AI workflows as durable, scalable services. The practical win is simple: LangGraph handles stateful agent orchestration, while Kubernetes handles deployment, scaling, and recovery while those agents process pension member queries, benefit calculations, and document review pipelines.

Prerequisites

  • Python 3.10+
  • A Kubernetes cluster:
    • local: kind, minikube, or docker-desktop
    • production: EKS, GKE, AKS, or on-prem
  • kubectl configured and pointing at your cluster
  • Access to a container registry for pushing images
  • LangGraph installed:
    • pip install langgraph langchain-openai
  • Kubernetes Python client installed:
    • pip install kubernetes
  • An LLM provider key if your graph calls a model
  • Basic familiarity with:
    • LangGraph StateGraph, START, END, and .compile()
    • Kubernetes Deployments, Services, and ConfigMaps

Integration Steps

  1. Define the pension-fund workflow in LangGraph

    Start with a small state machine that routes member requests through validation, policy lookup, and response generation. For pension funds, keep the graph explicit so every step is auditable.

    from typing import TypedDict
    from langgraph.graph import StateGraph, START, END
    
    class PensionState(TypedDict):
        member_id: str
        request_type: str
        policy_result: str
        response: str
    
    def validate_member(state: PensionState) -> PensionState:
        if not state["member_id"]:
            raise ValueError("member_id is required")
        return state
    
    def lookup_policy(state: PensionState) -> PensionState:
        # Replace with DB/API call in production
        state["policy_result"] = f"Policy found for {state['request_type']}"
        return state
    
    def generate_response(state: PensionState) -> PensionState:
        state["response"] = (
            f"Member {state['member_id']}: {state['policy_result']}"
        )
        return state
    
    graph = StateGraph(PensionState)
    graph.add_node("validate_member", validate_member)
    graph.add_node("lookup_policy", lookup_policy)
    graph.add_node("generate_response", generate_response)
    
    graph.add_edge(START, "validate_member")
    graph.add_edge("validate_member", "lookup_policy")
    graph.add_edge("lookup_policy", "generate_response")
    graph.add_edge("generate_response", END)
    
    app = graph.compile()
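
    A quick local invocation is worth doing before any containerization. The member values below are placeholders that mirror the test at the end of this guide.

    # Local smoke test: run the compiled graph directly
    result = app.invoke({
        "member_id": "M12345",
        "request_type": "benefit_statement",
        "policy_result": "",
        "response": ""
    })
    print(result["response"])
    # Member M12345: Policy found for benefit_statement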
    
  2. Wrap the compiled LangGraph app in a service entrypoint

    Kubernetes runs containers, so expose the graph through a small Python service. This example uses FastAPI because it is straightforward to containerize and easy to probe.

    from fastapi import FastAPI
    from pydantic import BaseModel
    
    # `app` is the compiled LangGraph from step 1; in a real project,
    # import it from the module where the graph is defined.
    app_api = FastAPI()
    
    class PensionRequest(BaseModel):
        member_id: str
        request_type: str
    
    @app_api.post("/run")
    async def run_workflow(payload: PensionRequest):
        result = app.invoke({
            "member_id": payload.member_id,
            "request_type": payload.request_type,
            "policy_result": "",
            "response": ""
        })
        return {"response": result["response"]}
    
    @app_api.get("/health")
    async def health():
        # Target for the readiness probe added in step 5
        return {"status": "ok"}
    
    if __name__ == "__main__":
        import uvicorn
        # Port 8000 matches the containerPort and probe used below
        uvicorn.run(app_api, host="0.0.0.0", port=8000)
    
  3. Create Kubernetes objects from Python

    Use the Kubernetes Python client to generate the Deployment and Service programmatically. This keeps infra definitions close to your app code and makes it easier to template environment-specific values.

    from kubernetes import client, config
    
    # Use config.load_incluster_config() instead when running inside the cluster
    config.load_kube_config()
    
    container = client.V1Container(
        name="pension-agent",
        image="registry.example.com/pension-agent:1.0.0",
        ports=[client.V1ContainerPort(container_port=8000)],
        env=[
            client.V1EnvVar(name="OPENAI_API_KEY", value_from=client.V1EnvVarSource(
                secret_key_ref=client.V1SecretKeySelector(name="openai-secret", key="api_key")
            ))
        ]
    )
    
    pod_spec = client.V1PodSpec(containers=[container])
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pension-agent"}),
        spec=pod_spec
    )
    
    deployment_spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "pension-agent"}),
        template=template
    )
    
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="pension-agent"),
        spec=deployment_spec
    )
    
    apps_v1 = client.AppsV1Api()
    # Create or replace in real code; simplified here (see the sketch below)
    apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
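
    In real code you want an idempotent create-or-replace rather than a bare create. A minimal sketch of that pattern follows; the helper name apply_deployment is our own, not a client API.

    from kubernetes.client.rest import ApiException

    def apply_deployment(apps_v1, deployment, namespace="default"):
        """Create the Deployment, or replace it if it already exists."""
        try:
            apps_v1.create_namespaced_deployment(namespace=namespace, body=deployment)
        except ApiException as e:
            if e.status == 409:  # 409 Conflict: the Deployment already exists
                apps_v1.replace_namespaced_deployment(
                    name=deployment.metadata.name,
                    namespace=namespace,
                    body=deployment,
                )
            else:
                raise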
    
  4. Expose the agent with a Service

    Your AI agent needs a stable endpoint for internal callers, cron jobs, or API gateways. A ClusterIP Service is enough if only internal systems call it; for external callers, see the Ingress sketch after the code.

    service = client.V1Service(
        api_version="v1",
        kind="Service",
        metadata=client.V1ObjectMeta(name="pension-agent"),
        spec=client.V1ServiceSpec(
            selector={"app": "pension-agent"},
            ports=[client.V1ServicePort(port=80, target_port=8000)],
            type="ClusterIP"
        )
    )
    
    core_v1 = client.CoreV1Api()
    core_v1.create_namespaced_service(namespace="default", body=service)
    print("Service created")
    
  5. Add an operational hook for health checks and scaling

    In production you want readiness probes and horizontal scaling based on load. Keep the LangGraph app stateless across requests unless you are persisting checkpoints externally; sketches for both autoscaling and checkpointing follow the code below.

    deployment.spec.template.spec.containers[0].readiness_probe = client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/health", port=8000),
        initial_delay_seconds=5,
        period_seconds=10
    )
    
    deployment.spec.replicas = 3
    apps_v1.replace_namespaced_deployment(
        name="pension-agent",
        namespace="default",
        body=deployment
    )
    print("Deployment updated with probes and replicas")
    

Testing the Integration

Run the service locally or port-forward the Kubernetes Service (for example, kubectl port-forward svc/pension-agent 8000:80 maps local port 8000 to the Service's port 80), then send a request through the workflow endpoint.

import requests

resp = requests.post(
    "http://localhost:8000/run",
    json={"member_id": "M12345", "request_type": "benefit_statement"}
)

print(resp.status_code)
print(resp.json())

Expected output:

200
{'response': 'Member M12345: Policy found for benefit_statement'}

If you deployed into Kubernetes correctly, also verify the workload objects:

kubectl get deploy pension-agent
kubectl get svc pension-agent
kubectl logs deploy/pension-agent

Real-World Use Cases

  • Member self-service agent that answers pension balance and statement questions using LangGraph routing plus Kubernetes autoscaling.
  • Document triage pipeline that classifies contribution forms, validates missing fields, and routes exceptions to human review.
  • Compliance assistant that checks policy changes against fund rules and emits auditable action logs for downstream systems.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

