How to Integrate LangGraph with Kubernetes for Fintech Startups
Combining LangGraph with Kubernetes gives you a clean way to run agentic workflows that touch regulated data while keeping execution isolated, scalable, and observable. The practical win is simple: your LangGraph graph handles decisioning and orchestration, while Kubernetes handles deployment, retries, and horizontal scaling for startup workloads that can’t afford brittle scripts.
Prerequisites
- Python 3.10+
- A Kubernetes cluster:
  - local: `kind`, `minikube`, or `docker-desktop`
  - cloud: EKS, GKE, or AKS
- `kubectl` configured against the cluster
- A container registry your cluster can pull from
- A LangGraph-compatible Python project with:
  - `langgraph`
  - `langchain-core`
  - your model provider SDK if needed
- The Kubernetes Python client: `kubernetes`
- Environment variables ready for secrets:
  - model API keys
  - database credentials
  - fintech service tokens
Install the Python packages (the service code below also uses FastAPI and requests):

```bash
pip install langgraph langchain-core kubernetes pydantic fastapi uvicorn requests
```
Integration Steps
1. Build your LangGraph workflow around a fintech task

For startups, the first useful pattern is an agent that classifies a payment request, checks risk, and routes it to the right downstream service. LangGraph gives you a stateful graph instead of a single brittle chain.
```python
from typing import TypedDict, Literal

from langgraph.graph import StateGraph, END


class FintechState(TypedDict):
    transaction_id: str
    amount: float
    country: str
    risk_score: int
    decision: Literal["approve", "review", "reject"]


def assess_risk(state: FintechState) -> FintechState:
    score = 90 if state["amount"] > 10000 else 20
    if state["country"] in {"NG", "PK", "UA"}:
        score += 15
    return {**state, "risk_score": score}


def route_decision(state: FintechState) -> str:
    if state["risk_score"] >= 80:
        return "reject"
    if state["risk_score"] >= 40:
        return "review"
    return "approve"


def approve(state: FintechState) -> FintechState:
    return {**state, "decision": "approve"}


def review(state: FintechState) -> FintechState:
    return {**state, "decision": "review"}


def reject(state: FintechState) -> FintechState:
    return {**state, "decision": "reject"}


graph = StateGraph(FintechState)
graph.add_node("assess_risk", assess_risk)
graph.add_node("approve", approve)
graph.add_node("review", review)
graph.add_node("reject", reject)
graph.set_entry_point("assess_risk")
graph.add_conditional_edges(
    "assess_risk",
    route_decision,
    {
        "approve": "approve",
        "review": "review",
        "reject": "reject",
    },
)
graph.add_edge("approve", END)
graph.add_edge("review", END)
graph.add_edge("reject", END)

app = graph.compile()
```
2. Package the graph as an API service

Kubernetes needs something it can run as a container. Expose the graph through FastAPI or plain Python HTTP handling so your pods can serve requests consistently.
```python
from fastapi import FastAPI
from pydantic import BaseModel


class TransactionRequest(BaseModel):
    transaction_id: str
    amount: float
    country: str


api = FastAPI()


@api.post("/evaluate")
def evaluate(req: TransactionRequest):
    result = app.invoke({
        "transaction_id": req.transaction_id,
        "amount": req.amount,
        "country": req.country,
        "risk_score": 0,
        "decision": "review",
    })
    return result
```
3. Deploy the service to Kubernetes

Use the Kubernetes Python client when you want your startup platform to create jobs or manage workloads programmatically. This is useful when each tenant or batch needs isolated execution.
```python
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="fintech-langgraph"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "fintech-langgraph"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "fintech-langgraph"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="api",
                        image="registry.example.com/fintech-langgraph:latest",
                        ports=[client.V1ContainerPort(container_port=8000)],
                    )
                ]
            ),
        ),
    ),
)

apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
```
If you prefer job-based isolation for high-risk workflows, use BatchV1Api instead of a long-running deployment.
4. Add workflow execution from inside the cluster

Once the service is running in Kubernetes, call it from another pod or controller. This lets you chain agent decisions with internal fintech services like KYC checks or ledger writes.
```python
import requests

payload = {
    "transaction_id": "txn_123",
    "amount": 12500,
    "country": "US",
}

# Port 8000 matches the container port and the port-forward used below.
response = requests.post(
    "http://fintech-langgraph.default.svc.cluster.local:8000/evaluate",
    json=payload,
    timeout=10,
)
print(response.json())
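Kubernetes restarts crashed pods, but it does not retry individual HTTP calls for you, and rolling deploys or rescheduling make brief connection errors normal. A small retry wrapper with exponential backoff is worth the few lines; this sketch is an assumption on my part (the `evaluate_with_retry` helper and injectable `post` parameter are not from the article):

```python
import time

import requests

SERVICE_URL = "http://fintech-langgraph.default.svc.cluster.local:8000/evaluate"


def evaluate_with_retry(payload, post=requests.post, attempts=3, base_delay=0.5):
    # Retry connection errors and 5xx responses with exponential backoff;
    # 4xx responses fail fast because retrying a caller bug never helps.
    last_error = None
    for attempt in range(attempts):
        try:
            resp = post(SERVICE_URL, json=payload, timeout=10)
        except requests.RequestException as exc:
            last_error = exc
        else:
            if resp.status_code < 500:
                resp.raise_for_status()  # surface 4xx immediately, no retry
                return resp.json()
            last_error = RuntimeError(f"server error {resp.status_code}")
        time.sleep(base_delay * 2 ** attempt)
    raise last_error
```

Passing `post` as a parameter also makes the wrapper trivially testable with a fake transport instead of a live cluster.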
5. Wire in config and secrets the right way

Do not bake keys into your image. Use Kubernetes secrets and environment variables so LangGraph nodes can call external fintech APIs safely.
```python
import os

MODEL_API_KEY = os.environ["MODEL_API_KEY"]
LEDGER_API_URL = os.environ["LEDGER_API_URL"]
LEDGER_API_TOKEN = os.environ["LEDGER_API_TOKEN"]


def write_ledger_entry(state):
    # Authenticate to the ledger with its own token, not the model key
    headers = {"Authorization": f"Bearer {LEDGER_API_TOKEN}"}
    # call your internal ledger service here using requests/httpx
    return state
```
Testing the Integration
Run the graph locally first, then verify the Kubernetes service responds from inside the cluster.
```python
result = app.invoke({
    "transaction_id": "txn_test_001",
    "amount": 15000,
    "country": "NG",
    "risk_score": 0,
    "decision": "review",
})
print(result)
```
Expected output:
```python
{
    'transaction_id': 'txn_test_001',
    'amount': 15000,
    'country': 'NG',
    'risk_score': 105,
    'decision': 'reject'
}
```
For cluster verification, port-forward the service and hit it with curl:
```bash
kubectl port-forward svc/fintech-langgraph 8000:8000

curl -X POST http://127.0.0.1:8000/evaluate \
  -H 'Content-Type: application/json' \
  -d '{"transaction_id":"txn_123","amount":500,"country":"US"}'
```
Real-World Use Cases
- Fraud triage agents that score transactions, branch into manual review, and notify ops teams through Kubernetes-managed workers.
- KYC onboarding flows where LangGraph routes applicants through document extraction, sanctions checks, and exception handling.
- Claims intake systems for fintech-insurance products where each claim runs in an isolated pod with audit-friendly logs and retry policies.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit