How to Integrate LangGraph with LangSmith for Fintech AI Agents
Combining LangGraph with LangSmith gives you a clean way to build regulated fintech AI agents that are both stateful and observable. In practice, that means you can route customer requests through controlled graph logic while tracing every decision, tool call, and model output for audit and debugging.
Prerequisites
- Python 3.10+
- `langgraph` installed
- `langsmith` installed
- A valid LangSmith API key
- Access to your model provider key, such as OpenAI or Anthropic
- A fintech use case defined, such as:
  - payment dispute handling
  - KYC document triage
  - loan application pre-screening
Install the packages:
```bash
pip install langgraph langsmith langchain-openai python-dotenv
```
Set your environment variables:
```bash
export LANGSMITH_API_KEY="ls__your_key"
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="fintech-agent"
export OPENAI_API_KEY="sk-your_key"
```
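Since `python-dotenv` is in the install list, you can keep these values in a `.env` file and fail fast at startup if anything is missing. A minimal sanity-check sketch (the `check_tracing_env` helper is mine, not part of either library):

```python
import os

# Variables the rest of this guide assumes are set.
REQUIRED_VARS = [
    "LANGSMITH_API_KEY",
    "LANGSMITH_TRACING",
    "LANGSMITH_PROJECT",
    "OPENAI_API_KEY",
]

def check_tracing_env(env) -> list:
    """Return the names of required variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# At startup: load .env first (e.g. dotenv.load_dotenv()), then verify.
missing = check_tracing_env(os.environ)
if missing:
    print(f"Missing environment variables: {missing}")
```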
Integration Steps
Step 1: Build a LangGraph workflow for the agent
Start with a simple state machine. In fintech, this is better than a single prompt because you can enforce routing rules, review steps, and fallback behavior.
```python
from typing import TypedDict, Annotated
from operator import add

from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    messages: Annotated[list, add]
    risk_flag: str
    decision: str

def classify_request(state: AgentState):
    last_message = state["messages"][-1].lower()
    if "chargeback" in last_message or "fraud" in last_message:
        return {"risk_flag": "high", "decision": "route_to_review"}
    return {"risk_flag": "low", "decision": "auto_process"}

graph = StateGraph(AgentState)
graph.add_node("classify_request", classify_request)
graph.add_edge(START, "classify_request")
graph.add_edge("classify_request", END)
app = graph.compile()
```
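Because the routing rule is plain Python, you can also unit-test it without compiling the graph. A small sketch (the standalone `route_decision` function is mine, mirroring the `classify_request` node above):

```python
def route_decision(message: str) -> dict:
    """Mirror of the classify_request rule, usable in plain unit tests."""
    text = message.lower()
    if "chargeback" in text or "fraud" in text:
        return {"risk_flag": "high", "decision": "route_to_review"}
    return {"risk_flag": "low", "decision": "auto_process"}

print(route_decision("I want to report FRAUD on my card"))
# -> {'risk_flag': 'high', 'decision': 'route_to_review'}
```

Keeping the rule testable in isolation matters in fintech: compliance reviewers can verify routing behavior from a plain test suite, without any model or network access.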
Step 2: Add an LLM node and trace it with LangSmith
LangSmith traces are automatic when `LANGSMITH_TRACING=true`, but you should still structure your code so each node is explicit. That gives you clean spans per step in the graph.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def draft_response(state: AgentState):
    prompt = state["messages"][-1]
    result = llm.invoke(
        [
            {"role": "system", "content": "You are a fintech support agent. Be precise and compliant."},
            {"role": "user", "content": prompt},
        ]
    )
    return {"messages": [result.content], "decision": "drafted"}

# A separate single-node graph for illustration; in production you would
# add both classify_request and draft_response as nodes in one graph.
graph = StateGraph(AgentState)
graph.add_node("draft_response", draft_response)
graph.add_edge(START, "draft_response")
graph.add_edge("draft_response", END)
app = graph.compile()
```
Step 3: Attach metadata so LangSmith can segment fintech runs
Use tags and metadata to separate production traffic by product line, region, or risk tier. This matters when compliance teams need to inspect only one slice of traffic.
```python
config = {
    "tags": ["fintech", "kyc", "production"],
    "metadata": {
        "team": "risk_ops",
        "region": "eu-west-1",
        "customer_tier": "enterprise",
    },
}

result = app.invoke(
    {"messages": ["A customer reports a suspicious card charge"], "risk_flag": "", "decision": ""},
    config=config,
)
print(result)
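If you serve multiple product lines, a small helper keeps tags and metadata consistent across call sites, so compliance filters in LangSmith stay reliable. A sketch under my own naming assumptions (`build_run_config` is a hypothetical helper, not a library function):

```python
def build_run_config(use_case: str, region: str, tier: str, env: str = "production") -> dict:
    """Assemble a config dict with consistent tags and metadata for every run."""
    return {
        "tags": ["fintech", use_case, env],
        "metadata": {
            "team": "risk_ops",
            "region": region,
            "customer_tier": tier,
        },
    }

# Usage: pass the result as config= to app.invoke(...)
config = build_run_config("kyc", region="eu-west-1", tier="enterprise")
```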
Step 4: Create custom LangSmith traces for non-LangGraph logic
If your agent calls internal services like account lookup or sanctions screening, wrap those operations in LangSmith spans. This gives you visibility outside the graph nodes.
```python
from langsmith import traceable

@traceable(name="sanctions_screening")
def sanctions_screening(customer_name: str) -> dict:
    # Replace with your real API call
    if customer_name.lower() in ["john doe"]:
        return {"match": True, "score": 0.98}
    return {"match": False, "score": 0.02}

@traceable(name="kyc_pipeline")
def run_kyc(customer_name: str):
    screening = sanctions_screening(customer_name)
    if screening["match"]:
        return {"status": "manual_review", "reason": "sanctions_hit"}
    return {"status": "approved"}

print(run_kyc("John Doe"))
```
Step 5: Wire the graph output into a traced end-to-end agent flow
This is the pattern you want in production: graph for control flow, LangSmith for observability.
```python
import uuid

from langsmith import Client

client = Client()

def fintech_agent(user_text: str):
    state = {
        "messages": [user_text],
        "risk_flag": "",
        "decision": "",
    }
    # Pre-assign a run ID so feedback can be attached to this exact run.
    run_id = uuid.uuid4()
    run_result = app.invoke(
        state,
        config={
            "run_id": run_id,
            "tags": ["fintech-agent"],
            "metadata": {"use_case": "dispute_triage"},
        },
    )
    client.create_feedback(
        run_id=run_id,
        key="agent_decision",
        score=1,
        comment=f"Decision was {run_result.get('decision')}",
    )
    return run_result

print(fintech_agent("Please help me dispute a card charge from yesterday"))
```
Testing the Integration
Run a simple smoke test that exercises the classification graph from Step 1 and confirms LangSmith tracing is active.
```python
test_input = {
    "messages": ["I need help with a possible fraud transaction"],
    "risk_flag": "",
    "decision": "",
}

output = app.invoke(
    test_input,
    config={
        "tags": ["smoke-test"],
        "metadata": {"env": "dev"},
    },
)
print(output)
```
Expected output:
```python
{
    'messages': ['I need help with a possible fraud transaction'],
    'risk_flag': 'high',
    'decision': 'route_to_review'
}
```
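For CI, you can assert on the routing fields instead of eyeballing the dict. A minimal sketch (the `validate_smoke_output` helper is mine; `sample` stands in for the graph output above):

```python
EXPECTED_FIELDS = {"messages", "risk_flag", "decision"}

def validate_smoke_output(output: dict) -> bool:
    """Check the graph output has the expected keys and a valid routing decision."""
    if not EXPECTED_FIELDS <= output.keys():
        return False
    return output["decision"] in {"route_to_review", "auto_process"}

sample = {
    "messages": ["I need help with a possible fraud transaction"],
    "risk_flag": "high",
    "decision": "route_to_review",
}
print(validate_smoke_output(sample))  # -> True
```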
In LangSmith, you should see:
- one trace for the graph run
- child spans for each node invocation
- metadata and tags attached to the run
- model calls captured automatically if tracing is enabled
Real-World Use Cases
- **Fraud triage agents**: Route suspicious transactions through review branches while logging every classification step in LangSmith.
- **KYC onboarding assistants**: Use LangGraph to orchestrate document checks, PEP/sanctions screening, and human approval gates with full traceability.
- **Loan support copilots**: Build an agent that answers applicant questions, checks eligibility rules, and escalates edge cases with auditable traces for compliance teams.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist + starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.