How to Integrate LangGraph with LangSmith for Banking Startups
If you’re building banking agents for a startup, you need two things working together: deterministic orchestration and observability. LangGraph gives you the graph-based control flow for regulated workflows, and LangSmith gives you tracing, debugging, and evaluation so you can see exactly why an agent approved, escalated, or rejected a request.
This combo is useful for flows like KYC intake, loan pre-screening, transaction dispute triage, and compliance review. You get a system that is auditable enough for banking and fast enough for a startup team to ship.
Prerequisites
- Python 3.10+
- A virtual environment set up
- Installed packages: langgraph, langchain-core, langchain-openai (or another chat model provider), langsmith
- API keys configured: OPENAI_API_KEY (or your model provider key) and LANGSMITH_API_KEY
- A LangSmith project created
- Basic familiarity with LangGraph's StateGraph, LangChain runnable interfaces, and environment variables
Install the packages:
pip install langgraph langchain-core langchain-openai langsmith
Set your environment variables:
export OPENAI_API_KEY="your-openai-key"
export LANGSMITH_API_KEY="your-langsmith-key"
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="banking-agent-starter"
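Before wiring anything up, it can save debugging time to fail fast when one of these variables is missing. A minimal sketch, assuming the variable names match the exports above (the helper itself is hypothetical, not part of LangSmith):

```python
import os

# The variables exported above; LANGSMITH_TRACING must be "true" for tracing.
REQUIRED_VARS = ["OPENAI_API_KEY", "LANGSMITH_API_KEY",
                 "LANGSMITH_TRACING", "LANGSMITH_PROJECT"]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Check at startup rather than discovering the gap on the first LLM call.
missing = missing_env_vars()
if missing:
    print("Missing environment variables:", ", ".join(missing))
```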
Integration Steps
1. Define the banking workflow state
Start with a typed state object that carries customer data through the graph. For banking use cases, keep the state explicit so every node has a predictable contract.
from typing import TypedDict, Optional

class BankingState(TypedDict):
    customer_id: str
    request_type: str
    risk_score: Optional[int]
    decision: Optional[str]
    notes: Optional[str]
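New requests always enter the graph with the decision fields unset, so it helps to build initial states in one place. A small constructor sketch (new_request is a hypothetical helper, not a LangGraph API):

```python
from typing import TypedDict, Optional

class BankingState(TypedDict):
    customer_id: str
    request_type: str
    risk_score: Optional[int]
    decision: Optional[str]
    notes: Optional[str]

def new_request(customer_id: str, request_type: str) -> BankingState:
    # Risk, decision, and notes start empty; nodes fill them in
    # as the request moves through the graph.
    return {
        "customer_id": customer_id,
        "request_type": request_type,
        "risk_score": None,
        "decision": None,
        "notes": None,
    }

state = new_request("cust_10021", "loan_application")
```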
2. Create graph nodes with LangGraph

Build small nodes for intake, risk scoring, and decisioning. This is where LangGraph’s StateGraph fits well because each step is isolated and testable.
from langgraph.graph import StateGraph, END
def intake_node(state: BankingState) -> BankingState:
    return {
        **state,
        "notes": f"Received {state['request_type']} request for {state['customer_id']}"
    }

def risk_node(state: BankingState) -> BankingState:
    score = 82 if state["request_type"] == "loan_application" else 25
    return {**state, "risk_score": score}

def decision_node(state: BankingState) -> BankingState:
    decision = "manual_review" if state["risk_score"] and state["risk_score"] > 70 else "auto_approve"
    return {**state, "decision": decision}
workflow = StateGraph(BankingState)
workflow.add_node("intake", intake_node)
workflow.add_node("risk", risk_node)
workflow.add_node("decision", decision_node)
workflow.set_entry_point("intake")
workflow.add_edge("intake", "risk")
workflow.add_edge("risk", "decision")
workflow.add_edge("decision", END)
app = workflow.compile()
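Because each node is a plain function over the state, you can unit-test the pipeline logic without compiling the graph at all. A standalone sketch that mirrors the node bodies above (TypedDict annotations dropped so it runs on its own) and chains them by hand, exactly as the edges would:

```python
def intake_node(state):
    return {**state, "notes": f"Received {state['request_type']} request for {state['customer_id']}"}

def risk_node(state):
    # Same toy rule as above: loan applications score high.
    score = 82 if state["request_type"] == "loan_application" else 25
    return {**state, "risk_score": score}

def decision_node(state):
    decision = "manual_review" if state["risk_score"] and state["risk_score"] > 70 else "auto_approve"
    return {**state, "decision": decision}

# Apply the nodes in the same order as the graph edges: intake -> risk -> decision.
state = {"customer_id": "cust_10021", "request_type": "loan_application",
         "risk_score": None, "decision": None, "notes": None}
for node in (intake_node, risk_node, decision_node):
    state = node(state)
```

Keeping node logic testable like this is what makes the graph auditable: the routing rules hold with or without an LLM in the loop.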
3. Attach LangSmith tracing to the graph run

With LANGSMITH_TRACING and LANGSMITH_API_KEY set, LangSmith traces LangChain and LangGraph runs automatically. If you want explicit control, wrap your nodes in LangChain-compatible runnables or attach run metadata through callbacks.
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
def explain_decision(state: BankingState) -> BankingState:
    prompt = (
        f"Customer {state['customer_id']} request {state['request_type']} "
        f"has risk score {state['risk_score']}. Explain the decision in one sentence."
    )
    msg = llm.invoke(prompt)
    return {**state, "notes": msg.content}
# Replace or extend a node with a RunnableLambda if you want cleaner tracing boundaries.
explain_runnable = RunnableLambda(explain_decision)
4. Run the graph with traceable metadata
Pass input through the compiled app and include metadata that helps you filter traces in LangSmith later. For startup teams, this matters when multiple tenants or environments are running in parallel.
result = app.invoke(
    {
        "customer_id": "cust_10021",
        "request_type": "loan_application",
        "risk_score": None,
        "decision": None,
        "notes": None,
    },
    config={
        "run_name": "banking-loan-precheck",
        "tags": ["banking", "startup", "kyc"],
        "metadata": {
            "tenant_id": "startup-a",
            "environment": "staging",
            "flow": "loan_precheck"
        }
    }
)
print(result)
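When several tenants and environments run in parallel, it pays to build these config dicts in one place so tags and metadata stay filterable. A sketch of a helper that produces the same config shape passed to app.invoke above (trace_config is a hypothetical helper, not a LangSmith API):

```python
def trace_config(run_name, tenant_id, environment, flow, extra_tags=()):
    """Build a run config with consistent tags and metadata for trace filtering."""
    return {
        "run_name": run_name,
        "tags": ["banking", *extra_tags],
        "metadata": {
            "tenant_id": tenant_id,
            "environment": environment,
            "flow": flow,
        },
    }

cfg = trace_config("banking-loan-precheck", "startup-a", "staging",
                   "loan_precheck", extra_tags=["kyc"])
```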
5. Log evaluation data to LangSmith
Once the flow works, record inputs and outputs for later review. Use LangSmith datasets when you want to compare versions of your graph or measure how often manual review is triggered.
from langsmith import Client

client = Client()

dataset = client.create_dataset(
    dataset_name="banking-loan-precheck-evals",
    description="Loan precheck examples for graph regression testing"
)

client.create_example(
    inputs={
        "customer_id": "cust_10021",
        "request_type": "loan_application",
        "risk_score": None,
        "decision": None,
        "notes": None,
    },
    outputs={
        "decision": result["decision"],
        "risk_score": result["risk_score"]
    },
    dataset_id=dataset.id
)
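Once examples accumulate, the metric worth watching is how often the graph escalates to manual review. A stdlib-only sketch over a list of recorded outputs shaped like the payload logged above (fetching them back from LangSmith is omitted here):

```python
def manual_review_rate(outputs):
    """Fraction of recorded outputs whose decision was manual_review."""
    if not outputs:
        return 0.0
    flagged = sum(1 for o in outputs if o["decision"] == "manual_review")
    return flagged / len(outputs)

recorded = [
    {"decision": "manual_review", "risk_score": 82},
    {"decision": "auto_approve", "risk_score": 25},
    {"decision": "auto_approve", "risk_score": 25},
]
rate = manual_review_rate(recorded)  # 1 of 3 examples flagged
```

Tracking this rate across graph versions tells you whether a prompt or threshold change quietly shifted how much work lands on your review team.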
Testing the Integration
Use a simple invocation and confirm both graph execution and trace capture are working.
test_input = {
    "customer_id": "cust_90001",
    "request_type": "account_opening",
    "risk_score": None,
    "decision": None,
    "notes": None,
}

output = app.invoke(
    test_input,
    config={
        "run_name": "smoke-test-banking-agent",
        "tags": ["smoke-test"],
        "metadata": {"environment": "local"}
    }
)

print("Decision:", output["decision"])
print("Risk score:", output["risk_score"])
print("Notes:", output["notes"])
Expected output:
Decision: auto_approve
Risk score: 25
Notes: Received account_opening request for cust_90001
In LangSmith, you should see a trace named smoke-test-banking-agent with each node execution visible as part of the run tree.
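One edge worth pinning down in a test: the decision rule uses a strict greater-than, so a risk score of exactly 70 still auto-approves, and a missing score falls through to auto-approve as well. A standalone boundary check using the same expression as decision_node above:

```python
def decide(risk_score):
    # Isolated copy of the decision_node rule for boundary testing.
    # Note: a None (or zero) score is falsy, so it auto-approves.
    return "manual_review" if risk_score and risk_score > 70 else "auto_approve"

cases = {25: "auto_approve", 70: "auto_approve", 71: "manual_review"}
results = {score: decide(score) for score in cases}
```

If auto-approving unscored requests is not what your compliance team expects, this is the kind of edge a LangSmith regression dataset should catch before production.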
Real-World Use Cases

Loan pre-screening
- Use LangGraph to route applicants through credit checks, policy checks, and escalation paths.
- Use LangSmith to inspect why borderline cases were sent to manual review.

KYC onboarding
- Build a graph that validates identity documents, runs sanctions screening, and verifies addresses.
- Track failure points in LangSmith to reduce onboarding drop-off.

Dispute triage
- Route card disputes by amount, merchant type, fraud signals, and customer history.
- Use traces to compare how changes in prompts or rules affect approval rates.
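The dispute-triage routing above reduces to a deterministic rule that would sit inside a LangGraph node. A sketch with illustrative thresholds and route names (a real policy would come from your compliance team, not from this example):

```python
def route_dispute(amount, fraud_signal, prior_disputes):
    """Route a card dispute to a handling queue based on simple rules."""
    if fraud_signal:
        # Any fraud signal short-circuits straight to the fraud team.
        return "fraud_team"
    if amount > 500 or prior_disputes >= 3:
        # Large amounts or repeat disputers get a human look.
        return "manual_review"
    return "auto_refund"

route = route_dispute(amount=120, fraud_signal=False, prior_disputes=0)
```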
The pattern is simple: keep banking logic deterministic in LangGraph, then use LangSmith to make every step observable. That’s how startups ship agentic banking systems without losing control of what the system did and why it did it.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit