How to Integrate LangGraph for banking with Kubernetes for startups

By Cyprian Aarons · Updated 2026-04-21
Tags: langgraph-for-banking, kubernetes, startups

Combining LangGraph for banking with Kubernetes gives you a clean way to run regulated AI workflows as isolated, observable services. The practical win is simple: you can route banking tasks like KYC checks, transaction review, and policy lookup through a stateful agent graph, then deploy that graph on Kubernetes with scaling, health checks, and rollout control.

For startups, this is the difference between a prototype and something you can actually put behind an API.

Prerequisites

  • Python 3.10+
  • A Kubernetes cluster
    • local: kind, minikube, or k3d
    • cloud: EKS, GKE, or AKS
  • kubectl configured and pointing at your cluster
  • Access to a bank-specific LLM provider or internal model endpoint
  • LangGraph installed
  • Kubernetes Python client installed
  • A container registry for pushing images

Install the Python packages:

pip install langgraph kubernetes pydantic fastapi uvicorn

Integration Steps

  1. Build the banking workflow as a LangGraph state machine.

Use LangGraph to model the steps in your banking agent: classify the request, fetch customer context, run policy checks, and return a decision. Keep the state explicit so it’s easy to audit later.

from typing import TypedDict, Literal
from langgraph.graph import StateGraph, END

class BankingState(TypedDict):
    customer_id: str
    request_type: str
    risk_score: int
    decision: str

def classify_request(state: BankingState) -> BankingState:
    req = state["request_type"].lower()
    if "transfer" in req:
        state["risk_score"] = 7
    else:
        state["risk_score"] = 2
    return state

def approve_or_review(state: BankingState) -> BankingState:
    state["decision"] = "manual_review" if state["risk_score"] > 5 else "approve"
    return state

graph = StateGraph(BankingState)
graph.add_node("classify_request", classify_request)
graph.add_node("approve_or_review", approve_or_review)

graph.set_entry_point("classify_request")
graph.add_edge("classify_request", "approve_or_review")
graph.add_edge("approve_or_review", END)

banking_app = graph.compile()
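
Before adding the API layer, you can sanity-check the routing logic by chaining the node functions directly. This is the same logic as above, just without the LangGraph runtime, which makes it easy to unit test:

```python
from typing import TypedDict

class BankingState(TypedDict):
    customer_id: str
    request_type: str
    risk_score: int
    decision: str

def classify_request(state: BankingState) -> BankingState:
    # Transfers are treated as higher risk than other request types.
    state["risk_score"] = 7 if "transfer" in state["request_type"].lower() else 2
    return state

def approve_or_review(state: BankingState) -> BankingState:
    state["decision"] = "manual_review" if state["risk_score"] > 5 else "approve"
    return state

state: BankingState = {
    "customer_id": "cust_123",
    "request_type": "large_transfer",
    "risk_score": 0,
    "decision": "",
}
state = approve_or_review(classify_request(state))
print(state["decision"])  # manual_review
```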

  2. Wrap the graph in an API service that Kubernetes can run.

Expose the workflow through FastAPI so each pod can handle requests independently. This is the cleanest pattern for startups because it keeps the graph logic separate from infrastructure concerns.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class BankRequest(BaseModel):
    customer_id: str
    request_type: str

@app.post("/run")
def run_workflow(payload: BankRequest):
    result = banking_app.invoke({
        "customer_id": payload.customer_id,
        "request_type": payload.request_type,
        "risk_score": 0,
        "decision": ""
    })
    return result

Run it locally first:

uvicorn app:app --host 0.0.0.0 --port 8000

  3. Containerize the service for Kubernetes deployment.

Kubernetes needs a repeatable image. Keep the container small and make sure your app starts from a single command.

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
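
The Dockerfile copies a requirements.txt, so create one that mirrors the pip install from the prerequisites (unpinned here for brevity; pin exact versions for reproducible builds):

```text
langgraph
kubernetes
pydantic
fastapi
uvicorn
```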

Create a deployment manifest with basic production settings:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: langgraph-banking-agent
spec:
  replicas: 2
  selector:
    matchLabels:
      app: langgraph-banking-agent
  template:
    metadata:
      labels:
        app: langgraph-banking-agent
    spec:
      containers:
      - name: api
        image: your-registry/langgraph-banking-agent:v1
        ports:
        - containerPort: 8000
        env:
        - name: PYTHONUNBUFFERED
          value: "1"
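        # Health checks: TCP probes need no extra endpoint, so they work
        # against the FastAPI service as-is. Swap in httpGet probes if you
        # later add a dedicated health route.
        readinessProbe:
          tcpSocket:
            port: 8000
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8000
          initialDelaySeconds: 15
          periodSeconds: 20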
---
apiVersion: v1
kind: Service
metadata:
  name: langgraph-banking-agent-svc
spec:
  selector:
    app: langgraph-banking-agent
  ports:
  - port: 80
    targetPort: 8000

  4. Deploy and manage the workload with the Kubernetes Python client.

If you want deployment automation, use the Kubernetes client to create or update resources from Python instead of shelling out to kubectl. That gives your CI/CD pipeline direct error handling and keeps cluster changes in code.

from kubernetes import client, config

config.load_kube_config()

apps_v1 = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="langgraph-banking-agent"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "langgraph-banking-agent"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "langgraph-banking-agent"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="api",
                    image="your-registry/langgraph-banking-agent:v1",
                    ports=[client.V1ContainerPort(container_port=8000)]
                )
            ])
        )
    )
)

apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
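
Note that create_namespaced_deployment raises an ApiException with a 409 status if the Deployment already exists, so redeploys need a create-or-replace fallback. A sketch of that pattern, duck-typed so it works with the real AppsV1Api but can be exercised without a cluster:

```python
def apply_deployment(api, namespace, body):
    """Create the Deployment, or replace it if it already exists.

    `api` only needs the AppsV1Api-shaped create/replace methods, so the
    helper is easy to test with a fake client.
    """
    try:
        return api.create_namespaced_deployment(namespace=namespace, body=body)
    except Exception as exc:
        # kubernetes.client.rest.ApiException carries the HTTP status code
        # on a `.status` attribute; 409 means the resource already exists.
        if getattr(exc, "status", None) == 409:
            return api.replace_namespaced_deployment(
                name=body.metadata.name, namespace=namespace, body=body
            )
        raise
```

In your pipeline you would call `apply_deployment(apps_v1, "default", deployment)` in place of the bare create call above.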

You can also verify pods are running:

core_v1 = client.CoreV1Api()
pods = core_v1.list_namespaced_pod(namespace="default", label_selector="app=langgraph-banking-agent")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)

  5. Connect runtime routing to Kubernetes service discovery.

In a real system, one service may call another by cluster DNS name. Use the Kubernetes Service name as the stable endpoint for your agent gateway or orchestration layer.

import requests

response = requests.post(
    "http://langgraph-banking-agent-svc.default.svc.cluster.local/run",
    json={
        "customer_id": "cust_123",
        "request_type": "large_transfer"
    },
    timeout=10,
)

print(response.json())
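
Hard-coding the cluster DNS name makes local testing awkward. A small helper that reads the base URL from an environment variable (the AGENT_BASE_URL name is an assumption, not a convention) lets the same client code run locally and in-cluster:

```python
import os

def agent_url(path: str = "/run") -> str:
    # Default to the in-cluster Service DNS name; override locally with
    # e.g. AGENT_BASE_URL=http://localhost:8000
    base = os.environ.get(
        "AGENT_BASE_URL",
        "http://langgraph-banking-agent-svc.default.svc.cluster.local",
    )
    return base.rstrip("/") + path
```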

Testing the Integration

Hit the API locally or through the Kubernetes service and confirm you get a deterministic decision back.

import requests

resp = requests.post(
    "http://localhost:8000/run",
    json={"customer_id": "cust_123", "request_type": "large_transfer"},
)

print(resp.status_code)
print(resp.json())

Expected output:

200
{
  "customer_id": "cust_123",
  "request_type": "large_transfer",
  "risk_score": 7,
  "decision": "manual_review"
}

If you deploy into Kubernetes, test pod health too:

kubectl get pods -l app=langgraph-banking-agent
kubectl logs deploy/langgraph-banking-agent
kubectl get svc langgraph-banking-agent-svc
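
To exercise the in-cluster Service from your laptop without an Ingress, port-forward it to a local port (8080 here is arbitrary) and send the same request:

```shell
kubectl port-forward svc/langgraph-banking-agent-svc 8080:80
# In a second terminal (port-forward runs in the foreground):
curl -X POST http://localhost:8080/run \
  -H "Content-Type: application/json" \
  -d '{"customer_id": "cust_123", "request_type": "large_transfer"}'
```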

Real-World Use Cases

  • KYC triage agent

    • LangGraph routes customer onboarding cases through document checks, sanctions screening, and escalation logic.
    • Kubernetes keeps the service available during traffic spikes from signup campaigns.
  • Transaction review assistant

    • The graph classifies suspicious transfers and decides whether to auto-approve or send to compliance.
    • Kubernetes lets you scale replicas during end-of-month processing windows.
  • Policy-aware support agent

    • The workflow answers account questions while checking product rules and jurisdiction constraints.
    • You can roll out new policy versions safely using standard Kubernetes deployments and rollbacks.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

