# How to Integrate LangGraph for Banking with LangSmith for AI Agents
Combining LangGraph for banking with LangSmith gives you two things most AI agent systems lack: deterministic control flow and real observability. In practice, that means you can build banking workflows that route, validate, and escalate safely while still tracing every model call, tool invocation, and failure path in LangSmith.
## Prerequisites
- Python 3.10+
- A LangGraph-based banking workflow already defined, or at least a basic graph skeleton
- A LangSmith account and API key
- Installed packages: `langgraph`, `langchain-core`, `langsmith`, and your model provider SDK (for example `openai` or `anthropic`)
- Environment variables configured:
  - `LANGCHAIN_TRACING_V2=true`
  - `LANGCHAIN_API_KEY=...`
  - `LANGCHAIN_PROJECT=banking-agents`
  - `OPENAI_API_KEY=...` (or your provider's equivalent)
## Integration Steps
### Step 1: Install the packages and enable tracing

LangSmith hooks into LangChain-compatible runs through environment variables. For a banking agent system, set tracing first so every node execution is captured from the start. (Install `langchain-openai` too if you plan to use `ChatOpenAI` later.)

```bash
pip install langgraph langchain-core langsmith langchain-openai openai

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="lsv2_..."
export LANGCHAIN_PROJECT="banking-agents"
export OPENAI_API_KEY="sk-..."
```
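Because tracing is driven entirely by environment variables, a silently unset key means you simply get no traces. A small startup check catches that early; this is a minimal sketch (the `missing_tracing_vars` helper and `REQUIRED_VARS` list are illustrative, not part of LangSmith):

```python
import os

# Fail fast if LangSmith tracing isn't configured before the agent starts.
REQUIRED_VARS = ["LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"]


def missing_tracing_vars(env=os.environ) -> list:
    """Return the names of any required tracing variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]


# Example with a partial environment:
print(missing_tracing_vars({"LANGCHAIN_TRACING_V2": "true"}))
# -> ['LANGCHAIN_API_KEY', 'LANGCHAIN_PROJECT']
```

Call this once at process startup and log or raise on a non-empty result, so a misconfigured deployment never runs untraced.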
### Step 2: Build a LangGraph workflow with explicit banking states

Keep the graph state narrow. For banking agents, I usually track intent, risk flags, account metadata, and the final decision. That makes traces easier to inspect later in LangSmith.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END
from langchain_core.messages import HumanMessage, AIMessage


class BankingState(TypedDict):
    messages: list
    intent: str
    risk_flag: bool
    decision: str


def classify_intent(state: BankingState) -> BankingState:
    user_text = state["messages"][-1].content.lower()
    if "transfer" in user_text:
        state["intent"] = "money_transfer"
    elif "dispute" in user_text:
        state["intent"] = "card_dispute"
    else:
        state["intent"] = "general_support"
    return state


def risk_check(state: BankingState) -> BankingState:
    text = state["messages"][-1].content.lower()
    state["risk_flag"] = any(
        term in text for term in ["urgent", "override", "bypass", "manual approval"]
    )
    return state


def decide_route(state: BankingState) -> str:
    if state["risk_flag"]:
        return "escalate"
    return "handle"


def handle_request(state: BankingState) -> BankingState:
    state["decision"] = f"Handled intent={state['intent']}"
    return state


def escalate_request(state: BankingState) -> BankingState:
    state["decision"] = "Escalated to human review"
    return state


graph = StateGraph(BankingState)
graph.add_node("classify_intent", classify_intent)
graph.add_node("risk_check", risk_check)
graph.add_node("handle_request", handle_request)
graph.add_node("escalate_request", escalate_request)

graph.set_entry_point("classify_intent")
graph.add_edge("classify_intent", "risk_check")
graph.add_conditional_edges("risk_check", decide_route, {
    "handle": "handle_request",
    "escalate": "escalate_request",
})
graph.add_edge("handle_request", END)
graph.add_edge("escalate_request", END)

app = graph.compile()
```
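Because the routing nodes above are plain functions over strings, you can sanity-check their keyword logic without LangGraph installed at all. Here is a minimal standalone sketch of the same rules, with a stand-in `Msg` class replacing `HumanMessage` (both names here are illustrative re-statements, not the graph code itself):

```python
from dataclasses import dataclass


@dataclass
class Msg:
    """Stand-in for langchain_core's HumanMessage, for dependency-free testing."""
    content: str


RISK_TERMS = ["urgent", "override", "bypass", "manual approval"]


def classify_intent(text: str) -> str:
    """Same keyword rules as the graph's classify_intent node."""
    text = text.lower()
    if "transfer" in text:
        return "money_transfer"
    if "dispute" in text:
        return "card_dispute"
    return "general_support"


def is_risky(text: str) -> bool:
    """Same term matching as the graph's risk_check node."""
    return any(term in text.lower() for term in RISK_TERMS)


msg = Msg("I need an URGENT transfer today")
print(classify_intent(msg.content))  # -> money_transfer
print(is_risky(msg.content))         # -> True
```

Keeping node logic this separable is what makes the traces readable: every state transition you see in LangSmith maps to a function you can test in isolation.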
### Step 3: Attach LangSmith tracing to the run

If your nodes call LLMs through LangChain-compatible clients, LangSmith traces them automatically once tracing is enabled. For direct visibility into custom nodes, pass a run name, tags, and metadata through the invocation config.

```python
from langsmith import Client
from langchain_core.messages import HumanMessage

# Optional: a LangSmith client for querying traced runs later.
# Tracing itself is driven by the LANGCHAIN_* variables set in Step 1.
client = Client()

inputs = {
    "messages": [HumanMessage(content="I need to transfer $5,000 today")],
    "intent": "",
    "risk_flag": False,
    "decision": "",
}

result = app.invoke(
    inputs,
    config={
        "run_name": "banking-agent-flow",
        "tags": ["banking", "production"],
        "metadata": {"customer_tier": "retail"},
    },
)

print(result["decision"])
```
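If several agents share one project, inconsistent tags make the LangSmith UI hard to filter. One option is to centralize config construction in a small helper; `traced_config` below is a hypothetical helper of my own, not a LangSmith API:

```python
def traced_config(run_name: str, *, env: str = "production", **metadata) -> dict:
    """Build a LangGraph invocation `config` dict with consistent
    LangSmith tags and metadata for every banking-agent run.

    `env` becomes a tag so runs can be filtered by environment;
    any extra keyword arguments land in run metadata.
    """
    return {
        "run_name": run_name,
        "tags": ["banking", env],
        "metadata": dict(metadata),
    }


cfg = traced_config("banking-agent-flow", customer_tier="retail")
print(cfg["tags"])  # -> ['banking', 'production']
```

You would then call `app.invoke(inputs, config=traced_config("banking-agent-flow", customer_tier="retail"))`, and every run arrives in LangSmith with the same filterable shape.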
### Step 4: Add an LLM node and trace it end-to-end

This is where the integration becomes useful. The graph controls the workflow; LangSmith records the model call inside the workflow so you can inspect prompts, outputs, latency, and failures.

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def draft_response(state: BankingState) -> BankingState:
    prompt = f"""
    You are a banking assistant.
    Intent: {state['intent']}
    Risk flag: {state['risk_flag']}
    Decision so far: {state['decision']}
    Write a short customer-facing response.
    """
    response = llm.invoke([HumanMessage(content=prompt)])
    state["messages"].append(AIMessage(content=response.content))
    return state


graph2 = StateGraph(BankingState)
graph2.add_node("classify_intent", classify_intent)
graph2.add_node("risk_check", risk_check)
graph2.add_node("draft_response", draft_response)

graph2.set_entry_point("classify_intent")
graph2.add_edge("classify_intent", "risk_check")
graph2.add_edge("risk_check", "draft_response")
graph2.add_edge("draft_response", END)

app2 = graph2.compile()
```
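One design tweak worth considering: pull the prompt template out of the node into a plain function so you can unit-test it and correlate prompt changes with what you see in LangSmith traces. `build_banking_prompt` below is a hypothetical helper I'm introducing for illustration, not part of the graph above:

```python
def build_banking_prompt(intent: str, risk_flag: bool, decision: str) -> str:
    """Render the customer-facing drafting prompt from graph state.

    Keeping the template outside the node makes it testable on its own
    and easy to diff when a prompt version change shows up in traces.
    """
    lines = [
        "You are a banking assistant.",
        f"Intent: {intent}",
        f"Risk flag: {risk_flag}",
        f"Decision so far: {decision}",
        "Write a short customer-facing response.",
    ]
    return "\n".join(lines)


print(build_banking_prompt("money_transfer", False, "Handled intent=money_transfer"))
```

Inside `draft_response`, the f-string would then become a single call: `llm.invoke([HumanMessage(content=build_banking_prompt(state["intent"], state["risk_flag"], state["decision"]))])`.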
---
## Keep learning
- [The complete AI Agents Roadmap](/blog/ai-agents-roadmap-2026) — my full 8-step breakdown
- [Free: The AI Agent Starter Kit](/starter-kit) — PDF checklist + starter code
- [Work with me](/contact) — I build AI for banks and insurance companies
*By Cyprian Aarons, AI Consultant at [Topiax](https://topiax.xyz).*