# How to Integrate LangGraph with LangSmith for Retail Banking Startups
Combining LangGraph for retail banking with LangSmith gives you two things most startup agent stacks lack: controlled workflow execution and real observability. For banking use cases, that matters because you need deterministic routing for sensitive actions, plus trace-level debugging when an agent misclassifies a customer request or hits a compliance branch.
## Prerequisites

- Python 3.10+
- A virtual environment set up with `venv`, `poetry`, or `uv`
- Access to a LangChain/LangGraph-compatible environment
- A LangSmith account and API key
- Environment variables configured:
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - `LANGSMITH_PROJECT=<your-project-name>`
- Installed packages:
  - `langgraph`
  - `langchain-core`
  - `langsmith`
  - an LLM provider package such as `langchain-openai`

Install the dependencies:

```shell
pip install langgraph langchain-core langsmith langchain-openai
```
## Integration Steps
### 1. Set up LangSmith tracing first

LangSmith works best when tracing is enabled at process start. In startup environments, do this through environment variables so every graph run is captured without extra plumbing.

```python
import os

os.environ["LANGSMITH_API_KEY"] = "lsv2_XXXXXXXXXXXXXXXX"
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "retail-banking-agent"
```

If you want explicit client control, instantiate the SDK too:

```python
from langsmith import Client

client = Client()
print(client)
```
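A silently missing key means runs never reach LangSmith, so it can pay to fail fast at boot. A minimal sketch of that idea (`ensure_tracing_config` is a hypothetical helper, not part of the LangSmith SDK):

```python
import os

REQUIRED_VARS = ["LANGSMITH_API_KEY", "LANGSMITH_TRACING", "LANGSMITH_PROJECT"]

def ensure_tracing_config() -> None:
    # Raise immediately if any LangSmith setting is absent or empty,
    # so a misconfigured deployment fails at startup instead of
    # silently dropping traces.
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing LangSmith settings: {', '.join(missing)}")
```

Call `ensure_tracing_config()` once at process start, before compiling or invoking any graph.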
### 2. Build a LangGraph workflow for a retail banking flow

A common pattern is: classify intent, route to the correct branch, then return a safe response. For retail banking, keep the graph small and auditable.

```python
from typing import TypedDict, Literal
from langgraph.graph import StateGraph, START, END

class BankingState(TypedDict):
    message: str
    intent: str
    response: str

def classify_intent(state: BankingState) -> BankingState:
    msg = state["message"].lower()
    if "balance" in msg:
        intent = "balance_check"
    elif "card" in msg or "lost" in msg:
        intent = "card_support"
    else:
        intent = "general_support"
    return {**state, "intent": intent}

def balance_node(state: BankingState) -> BankingState:
    return {**state, "response": "Your available balance is $4,250.18."}

def card_node(state: BankingState) -> BankingState:
    return {**state, "response": "Your debit card has been frozen. A replacement will be issued."}

def general_node(state: BankingState) -> BankingState:
    return {**state, "response": "I can help with balances, cards, transfers, and account questions."}

def route(state: BankingState) -> Literal["balance", "card", "general"]:
    if state["intent"] == "balance_check":
        return "balance"
    if state["intent"] == "card_support":
        return "card"
    return "general"

graph = StateGraph(BankingState)
graph.add_node("classify", classify_intent)
graph.add_node("balance", balance_node)
graph.add_node("card", card_node)
graph.add_node("general", general_node)

graph.add_edge(START, "classify")
graph.add_conditional_edges("classify", route)
graph.add_edge("balance", END)
graph.add_edge("card", END)
graph.add_edge("general", END)

app = graph.compile()
```
### 3. Attach LangSmith tracing to graph execution

When you invoke the compiled graph, LangSmith captures the run automatically if tracing is enabled. That gives you node-level visibility into classification mistakes and bad routing decisions.

```python
result = app.invoke({"message": "What is my balance?", "intent": "", "response": ""})
print(result)
```

For more structured tracking in startup environments, wrap the invocation in a named trace so your runs are easier to filter in LangSmith.

```python
from langsmith import traceable

@traceable(name="retail-banking-agent-run")
def run_banking_agent(message: str):
    initial_state = {"message": message, "intent": "", "response": ""}
    return app.invoke(initial_state)

output = run_banking_agent("I lost my debit card")
print(output["response"])
```
### 4. Add LLM-powered classification with LangChain components

If you want better intent detection than keyword matching, use an LLM node inside LangGraph. This still traces cleanly in LangSmith because the underlying model calls are captured.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Classify the customer request into one of: balance_check, card_support, general_support."),
    ("user", "{message}"),
])

def llm_classify(state: BankingState) -> BankingState:
    chain = prompt | llm
    result = chain.invoke({"message": state["message"]})
    intent_text = result.content.strip().lower()
    if "balance_check" in intent_text:
        intent = "balance_check"
    elif "card_support" in intent_text:
        intent = "card_support"
    else:
        intent = "general_support"
    return {**state, "intent": intent}
```

Replace the earlier classifier node with this one if you need production-grade routing for customer support and servicing flows.
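LLM labels can drift, so production routing usually validates the model's output before it drives a branch. A minimal, framework-free sketch of that idea (`safe_classify` and `keyword_fallback` are hypothetical helpers, not LangChain APIs):

```python
from typing import Callable

# The only labels the router is allowed to see.
VALID_INTENTS = {"balance_check", "card_support", "general_support"}

def keyword_fallback(message: str) -> str:
    # Same deterministic rules as the keyword classifier from step 2.
    msg = message.lower()
    if "balance" in msg:
        return "balance_check"
    if "card" in msg or "lost" in msg:
        return "card_support"
    return "general_support"

def safe_classify(message: str, llm_classifier: Callable[[str], str]) -> str:
    # Prefer the LLM label, but never let an invalid label, or an
    # upstream API failure, reach the routing edge.
    try:
        intent = llm_classifier(message).strip().lower()
    except Exception:
        return keyword_fallback(message)
    return intent if intent in VALID_INTENTS else keyword_fallback(message)
```

Wiring `llm_classify` through a wrapper like this keeps the graph's branches closed-world, which is exactly what you want for sensitive banking actions.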
### 5. Inspect traces and tighten guardrails

Once traffic flows through the graph, use LangSmith to inspect failed runs and reroute edge cases. In banking systems, this is where you catch missing branches before they hit production support queues.

```python
from langsmith import Client

client = Client()
runs = client.list_runs(project_name="retail-banking-agent", limit=5)
for run in runs:
    print(run.id, run.name, run.status)
```
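Once runs are fetched locally, you can triage them without the dashboard. A small sketch, assuming the run objects expose a `.status` attribute as in the loop above (`summarize_runs` is a hypothetical helper):

```python
from collections import Counter

def summarize_runs(runs) -> Counter:
    # Tally statuses across fetched runs so failures stand out at a glance.
    return Counter(run.status for run in runs)
```

A sudden jump in non-success statuses after a prompt or model change is usually the first signal that a routing branch has regressed.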
## Testing the Integration

Run a simple smoke test against both the graph and tracing path.

```python
test_cases = [
    {"message": "Show me my balance"},
    {"message": "I lost my card"},
]

for case in test_cases:
    result = app.invoke({"message": case["message"], "intent": "", "response": ""})
    print(f"Input: {case['message']}")
    print(f"Intent: {result['intent']}")
    print(f"Response: {result['response']}")
    print("---")
```

Expected output:

```
Input: Show me my balance
Intent: balance_check
Response: Your available balance is $4,250.18.
---
Input: I lost my card
Intent: card_support
Response: Your debit card has been frozen. A replacement will be issued.
---
```
If LangSmith is configured correctly, each invocation also appears as a traced run under your project name with node-level execution details.
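The print-based smoke test can also be made CI-friendly by asserting on the routed intent instead of eyeballing output. A sketch (`run_smoke_tests` is a hypothetical helper; in practice you would pass it the compiled graph's `app.invoke`):

```python
def run_smoke_tests(invoke) -> None:
    # Assert that each message routes to the expected intent;
    # raises AssertionError on the first regression.
    cases = {
        "Show me my balance": "balance_check",
        "I lost my card": "card_support",
        "How do I open an account?": "general_support",
    }
    for message, expected in cases.items():
        result = invoke({"message": message, "intent": "", "response": ""})
        assert result["intent"] == expected, (
            f"{message!r} routed to {result['intent']!r}, expected {expected!r}"
        )
```

Running `run_smoke_tests(app.invoke)` in CI catches routing regressions before they reach customers, and each test invocation still lands in LangSmith as a traced run.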
## Real-World Use Cases

**Retail banking support agents**

- Route customers into safe branches for balance checks, card freezes, fee explanations, and transfer status.
- Use LangSmith traces to debug bad classifications and reduce escalations.

**Compliance-aware onboarding assistants**

- Build multi-step workflows for KYC collection, document checks, and risk flagging.
- Trace every step so compliance teams can review exactly what happened on each path.

**Startup internal ops agents**

- Handle account servicing requests across product support and back-office automation.
- Use LangGraph for deterministic flow control and LangSmith for monitoring regressions after prompt or model changes.
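For the onboarding use case, the same deterministic-routing pattern applies: a plain Python function decides the branch, so every decision is auditable. A hypothetical sketch (field names and thresholds are illustrative, not taken from any real KYC system):

```python
from typing import Literal

def kyc_route(state: dict) -> Literal["auto_approve", "manual_review", "reject"]:
    # Deterministic compliance routing: documents must be verified and
    # the risk score must clear illustrative thresholds. Anything
    # ambiguous falls through to human review rather than approval.
    verified = state.get("document_verified", False)
    risk = state.get("risk_score", 1.0)
    if verified and risk < 0.3:
        return "auto_approve"
    if risk < 0.8:
        return "manual_review"
    return "reject"
```

Plugged into `add_conditional_edges` the same way as `route` earlier, this gives compliance teams a branch decision they can read in one screen, with the full trace in LangSmith behind it.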
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit