How to Integrate LangGraph with Kubernetes for Fintech AI Agents

By Cyprian Aarons. Updated 2026-04-21.

Combining LangGraph with Kubernetes gives you a clean way to run stateful AI agents that reason over financial workflows while staying deployable, observable, and scalable. The pattern is useful when you need agents that handle payment exceptions, KYC review, fraud triage, or portfolio ops without turning your app server into a long-running mess.

Prerequisites

  • Python 3.10+
  • A Kubernetes cluster:
    • local: kind, minikube, or k3d
    • remote: EKS, GKE, AKS
  • kubectl configured and pointing at your cluster
  • A container registry for pushing your agent image
  • Access to your fintech data sources or sandbox APIs
  • LangGraph installed in your Python environment
  • Kubernetes Python client installed

Install the Python packages:

pip install langgraph kubernetes pydantic

Integration Steps

  1. Define the agent workflow in LangGraph

    Start by modeling the fintech workflow as a graph with explicit states. For example, an inbound transaction review agent can route between validation, risk scoring, and escalation.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END

class FintechState(TypedDict):
    transaction_id: str
    amount: float
    risk_score: int
    decision: str

def validate_transaction(state: FintechState) -> FintechState:
    if state["amount"] <= 0:
        return {**state, "decision": "reject"}
    return state

def score_risk(state: FintechState) -> FintechState:
    # Placeholder heuristic: flag large transfers; swap in a real risk model
    score = 90 if state["amount"] > 10000 else 20
    return {**state, "risk_score": score}

def route_decision(state: FintechState) -> str:
    return "escalate" if state["risk_score"] >= 80 else "approve"

def approve(state: FintechState) -> FintechState:
    return {**state, "decision": "approve"}

def escalate(state: FintechState) -> FintechState:
    return {**state, "decision": "manual_review"}

graph = StateGraph(FintechState)
graph.add_node("validate", validate_transaction)
graph.add_node("score", score_risk)
graph.add_node("approve", approve)
graph.add_node("escalate", escalate)

graph.add_edge(START, "validate")
graph.add_edge("validate", "score")
graph.add_conditional_edges("score", route_decision, {
    "approve": "approve",
    "escalate": "escalate",
})
graph.add_edge("approve", END)
graph.add_edge("escalate", END)

app = graph.compile()

  2. Package the graph as a service

    In production, don’t run the graph only as a script. Wrap it in an API container so Kubernetes can schedule it and restart it cleanly.

from fastapi import FastAPI
from pydantic import BaseModel

api = FastAPI()

class TransactionRequest(BaseModel):
    transaction_id: str
    amount: float

@api.post("/run")
async def run_workflow(req: TransactionRequest):
    result = app.invoke({
        "transaction_id": req.transaction_id,
        "amount": req.amount,
        "risk_score": 0,
        "decision": ""
    })
    return result

This gives you a stable runtime boundary. Your graph stays focused on business logic while Kubernetes handles process management.
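One way to build that container is a small Dockerfile. This is a sketch, assuming the FastAPI app lives in a file named `main.py`; in a real deployment you would pin package versions in a requirements file:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Runtime dependencies; pin exact versions via a requirements file in production
RUN pip install --no-cache-dir langgraph kubernetes pydantic fastapi uvicorn
COPY main.py .
EXPOSE 8000
CMD ["uvicorn", "main:api", "--host", "0.0.0.0", "--port", "8000"]
```

Build and push it to the registry from the prerequisites, e.g. `docker build -t your-registry/agent-api:latest .` followed by `docker push your-registry/agent-api:latest`.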

  3. Connect Kubernetes from inside the agent system

    Use the Kubernetes Python client when the workflow needs cluster-aware actions like launching a review job, checking pod health, or scaling a worker pool after high-risk volume spikes.

from kubernetes import client, config

def get_k8s_client():
    try:
        config.load_incluster_config()
    except Exception:
        config.load_kube_config()
    return client.CoreV1Api()

def list_agent_pods(namespace="agents"):
    v1 = get_k8s_client()
    pods = v1.list_namespaced_pod(namespace=namespace)
    return [pod.metadata.name for pod in pods.items]

import uuid

def create_review_job(namespace="agents"):
    get_k8s_client()  # ensure kube config is loaded (in-cluster or local)
    batch = client.BatchV1Api()
    # Unique name avoids a 409 Conflict when escalations overlap
    job_name = f"manual-review-{uuid.uuid4().hex[:8]}"
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=job_name),
        spec=client.V1JobSpec(
            ttl_seconds_after_finished=600,  # auto-clean finished jobs
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "manual-review"}),
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            name="reviewer",
                            image="your-registry/manual-review:latest",
                            command=["python", "-m", "review_worker"],
                        )
                    ],
                ),
            )
        ),
    )
    return batch.create_namespaced_job(namespace=namespace, body=job)

  4. Trigger Kubernetes actions from LangGraph nodes

    This is where the integration becomes useful. A node can decide whether to call K8s based on transaction risk and then hand off work to a separate pod.

def escalate_to_kubernetes(state: FintechState) -> FintechState:
    pods = list_agent_pods(namespace="agents")
    if len(pods) < 3:
        create_review_job(namespace="agents")

    return {
        **state,
        "decision": f"manual_review_triggered_by_{len(pods)}_pods"
    }

Then wire that node into the graph instead of a plain escalation handler:

graph = StateGraph(FintechState)
graph.add_node("validate", validate_transaction)
graph.add_node("score", score_risk)
graph.add_node("k8s_escalate", escalate_to_kubernetes)

graph.add_edge(START, "validate")
graph.add_edge("validate", "score")
graph.add_conditional_edges("score", route_decision, {
    "approve": END,
    "escalate": "k8s_escalate",
})
graph.add_edge("k8s_escalate", END)

app = graph.compile()

  5. Deploy to Kubernetes with environment-based configuration

    Keep cluster access and runtime settings in env vars. That makes the same container work locally and in-cluster.

import os

NAMESPACE = os.getenv("POD_NAMESPACE", "agents")
SERVICE_PORT = int(os.getenv("SERVICE_PORT", "8000"))

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(api, host="0.0.0.0", port=SERVICE_PORT)

A minimal Deployment manifest should also attach a ServiceAccount with RBAC permissions to list pods and create jobs if your agent inspects workloads or launches jobs.
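A hedged sketch of such a manifest, assuming the `agents` namespace and an image named `your-registry/agent-api:latest` (all resource names here are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: agent-api
  namespace: agents
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: agent-api
  namespace: agents
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: agent-api
  namespace: agents
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: agent-api
subjects:
  - kind: ServiceAccount
    name: agent-api
    namespace: agents
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agent-api
  namespace: agents
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agent-api
  template:
    metadata:
      labels:
        app: agent-api
    spec:
      serviceAccountName: agent-api
      containers:
        - name: agent-api
          image: your-registry/agent-api:latest
          ports:
            - containerPort: 8000
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: SERVICE_PORT
              value: "8000"
```

The downward-API `fieldRef` keeps `POD_NAMESPACE` in sync with wherever the pod is actually scheduled, so the same image works across namespaces.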

Testing the Integration

Run the API locally first:

uvicorn main:api --reload --port 8000

Then verify both the LangGraph workflow and the Kubernetes call path:

import requests

payload = {
    "transaction_id": "txn_10001",
    "amount": 25000.0
}

resp = requests.post("http://localhost:8000/run", json=payload)
print(resp.status_code)
print(resp.json())

Expected output (the pod count embedded in `decision` depends on how many agent pods are running in your cluster):

200
{
  'transaction_id': 'txn_10001',
  'amount': 25000.0,
  'risk_score': 90,
  'decision': 'manual_review_triggered_by_2_pods'
}

If you want to test cluster access directly:

print(list_agent_pods(namespace="agents"))

Expected output (your pod names will differ):

['agent-api-7f9d8c6b7f-xk2m4', 'risk-worker-5c9f7d8d7c-q9z8p']

Real-World Use Cases

  • Fraud triage pipeline

    • LangGraph scores transactions and routes suspicious cases.
    • Kubernetes spins up short-lived review workers only when volume spikes.
  • KYC document processing

    • One branch extracts fields from documents.
    • Another branch triggers OCR or human review jobs in separate pods.
  • Claims or payment exception handling

    • The agent classifies exceptions and decides whether to auto-resolve.
    • High-risk cases create Kubernetes jobs for downstream reconciliation services.
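The use cases above share one shape: map a risk score band to an action. A minimal stdlib sketch of that routing, where the band thresholds and action names are illustrative placeholders rather than part of the workflow defined earlier:

```python
# Illustrative score-band routing shared by the triage use cases above.
# Thresholds and action names are hypothetical placeholders.
BANDS = [(80, "manual_review"), (50, "hold"), (0, "auto_approve")]

def triage(risk_score: int) -> str:
    # Walk bands from highest floor to lowest; first match wins
    for floor, action in sorted(BANDS, reverse=True):
        if risk_score >= floor:
            return action
    return "auto_approve"  # fallback for negative scores

print(triage(90))  # manual_review
```

Keeping the bands in one data structure means compliance can review (or reconfigure) thresholds without touching graph wiring.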

By Cyprian Aarons, AI Consultant at Topiax.