How to Integrate LangGraph for wealth management with Kubernetes for AI agents

By Cyprian Aarons · Updated 2026-04-21
langgraph-for-wealth-management · kubernetes · ai-agents

Combining LangGraph for wealth management with Kubernetes gives you a clean way to run regulated AI workflows as stateful, observable, and scalable services. The useful pattern here is simple: LangGraph handles the decision flow for portfolio analysis, client servicing, or advisor copilots, while Kubernetes gives you deployment, scaling, and operational control for those agents.

Prerequisites

  • Python 3.10+
  • A Kubernetes cluster:
    • local: kind, minikube, or k3d
    • cloud: EKS, GKE, or AKS
  • kubectl configured against your cluster
  • A container registry you can push to
  • LangGraph installed:
    • pip install langgraph langchain-openai
  • Kubernetes Python client:
    • pip install kubernetes
  • Access to an LLM provider key in an environment variable like OPENAI_API_KEY
  • Basic knowledge of:
    • LangGraph state graphs
    • Kubernetes Deployments, Services, and ConfigMaps

Integration Steps

  1. Define the wealth-management workflow in LangGraph.

For wealth management, keep the graph small and auditable. A common flow is: ingest client profile, assess risk, generate recommendation, then produce an approval-ready summary.

from typing import TypedDict, List
from langgraph.graph import StateGraph, START, END

class WealthState(TypedDict):
    client_id: str
    holdings: List[str]
    risk_score: int
    recommendation: str

def assess_risk(state: WealthState) -> WealthState:
    holdings = state["holdings"]
    # Illustrative stand-in for a real risk model: score grows with holdings count, capped at 100.
    score = min(len(holdings) * 10 + 20, 100)
    return {**state, "risk_score": score}

def generate_recommendation(state: WealthState) -> WealthState:
    if state["risk_score"] >= 70:
        rec = "Recommend diversified rebalancing and tighter downside controls."
    else:
        rec = "Maintain current allocation with quarterly review."
    return {**state, "recommendation": rec}

graph = StateGraph(WealthState)
graph.add_node("assess_risk", assess_risk)
graph.add_node("generate_recommendation", generate_recommendation)

graph.add_edge(START, "assess_risk")
graph.add_edge("assess_risk", "generate_recommendation")
graph.add_edge("generate_recommendation", END)

app = graph.compile()

  2. Wrap the graph in a service that can run inside Kubernetes.

Expose the graph through FastAPI so Kubernetes can manage it like any other microservice. This is the layer where your AI agent becomes operationally deployable.

from fastapi import FastAPI
from pydantic import BaseModel

app_api = FastAPI()

class WealthRequest(BaseModel):
    client_id: str
    holdings: list[str]

@app_api.post("/wealth/analyze")
def analyze_wealth(req: WealthRequest):
    result = app.invoke({
        "client_id": req.client_id,
        "holdings": req.holdings,
        "risk_score": 0,
        "recommendation": ""
    })
    return result
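
The Deployment in step 4 expects the container to listen on port 8000, so the image needs an entrypoint that serves the API. A minimal launcher, assuming this module is the one baked into the image and that uvicorn is installed, could look like this:

import uvicorn

if __name__ == "__main__":
    # Bind to all interfaces so the Kubernetes Service can route traffic to the pod.
    uvicorn.run(app_api, host="0.0.0.0", port=8000)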

  3. Load runtime configuration from Kubernetes ConfigMaps and Secrets.

Keep prompts, model names, and non-sensitive settings in ConfigMaps. Put API keys in Secrets and mount them as environment variables.

import os
from kubernetes import client, config

# Uses the pod's ServiceAccount; it needs RBAC permission to read ConfigMaps in the namespace.
config.load_incluster_config()

v1 = client.CoreV1Api()

cm = v1.read_namespaced_config_map(
    name="wealth-agent-config",
    namespace="default"
)

api_key = os.getenv("OPENAI_API_KEY")
model_name = cm.data["MODEL_NAME"]

print(f"Loaded model={model_name}, api_key_present={bool(api_key)}")

A practical ConfigMap looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: wealth-agent-config
data:
  MODEL_NAME: gpt-4o-mini
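
With MODEL_NAME loaded, you can wire an actual model call into the graph. The flow described earlier ends with an approval-ready summary; a hypothetical summarize node along those lines (the function name and prompt are illustrative, not part of the graph above) might look like:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model=model_name, temperature=0)  # picks up OPENAI_API_KEY from the environment

def summarize(state: WealthState) -> WealthState:
    # Rewrite the rule-based recommendation as a short advisor-facing summary.
    msg = llm.invoke(
        f"Summarize this recommendation for client {state['client_id']} "
        f"in two sentences: {state['recommendation']}"
    )
    return {**state, "recommendation": msg.content}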

  4. Deploy the agent service to Kubernetes.

Use a Deployment for replicas and a Service for internal access. If your workflow needs higher throughput later, scale replicas without changing the LangGraph code.

from kubernetes import client

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="wealth-agent"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(
            match_labels={"app": "wealth-agent"}
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "wealth-agent"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="agent",
                        image="ghcr.io/your-org/wealth-agent:latest",
                        ports=[client.V1ContainerPort(container_port=8000)],
                        env=[
                            client.V1EnvVar(
                                name="OPENAI_API_KEY",
                                value_from=client.V1EnvVarSource(
                                    secret_key_ref=client.V1SecretKeySelector(
                                        name="openai-secret",
                                        key="OPENAI_API_KEY"
                                    )
                                )
                            )
                        ]
                    )
                ]
            )
        )
    )
)

Then create a Service:

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="wealth-agent-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "wealth-agent"},
        ports=[client.V1ServicePort(port=80, target_port=8000)]
    )
)
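
Both objects above are only definitions in memory. Assuming your kubeconfig (or the pod's service account) points at the target cluster and namespace, applying them with the same client looks like this:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)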

  5. Trigger graph execution from inside the cluster or from another agent.

A common pattern is one agent service calling another via HTTP or a queue consumer invoking the graph directly. If you need orchestration between multiple financial workflows, keep each graph isolated and let Kubernetes handle service discovery.

import requests

payload = {
    "client_id": "C-10491",
    "holdings": ["AAPL", "MSFT", "BND", "GLD"]
}

resp = requests.post(
    "http://wealth-agent-svc.default.svc.cluster.local/wealth/analyze",
    json=payload,
    timeout=30
)

print(resp.json())

Testing the Integration

Run a local invocation first to verify the LangGraph logic before pushing to Kubernetes.

result = app.invoke({
    "client_id": "C-10491",
    "holdings": ["AAPL", "MSFT", "BND", "GLD"],
    "risk_score": 0,
    "recommendation": ""
})

print(result)

Expected output:

{
  'client_id': 'C-10491',
  'holdings': ['AAPL', 'MSFT', 'BND', 'GLD'],
  'risk_score': 60,
  'recommendation': 'Maintain current allocation with quarterly review.'
}

After deployment, verify Kubernetes sees the pod and service:

kubectl get pods -l app=wealth-agent
kubectl get svc wealth-agent-svc
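
To reach the endpoint from outside the cluster, port-forward the Service (for example, kubectl port-forward svc/wealth-agent-svc 8080:80) and send the same payload to localhost. A quick smoke test, assuming the port-forward is running:

import requests

resp = requests.post(
    "http://localhost:8080/wealth/analyze",
    json={"client_id": "C-10491", "holdings": ["AAPL", "MSFT", "BND", "GLD"]},
    timeout=30
)

print(resp.status_code, resp.json())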

Real-World Use Cases

  • Client portfolio review agents that ingest holdings data, score risk exposure, and generate advisor-ready summaries.
  • Compliance triage agents that route suspicious recommendations through approval workflows before anything reaches production systems.
  • Multi-step advisor copilots that combine market data checks, suitability rules, and document generation across separate services running on Kubernetes.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
