How to Integrate LangGraph for fintech with LangSmith for startups
Combining LangGraph with LangSmith gives you a practical setup for building and observing regulated fintech AI agents. LangGraph provides graph-based control flow for workflows like KYC checks, fraud triage, and payment escalation, while LangSmith adds tracing, debugging, and evals across every step.
For startups, that matters because agent systems fail in the seams: bad routing, hidden tool errors, and prompt regressions. This integration lets you ship an agent that is both deterministic enough for finance workflows and observable enough to debug in production.
Prerequisites
- Python 3.10+
- A LangGraph project installed: `langgraph`, `langchain-core`
- LangSmith installed: `langsmith`
- An LLM provider configured, such as OpenAI or Anthropic
- Environment variables set:
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - `LANGSMITH_PROJECT=fintech-agent-dev`
  - Your model key, for example `OPENAI_API_KEY`
- Basic familiarity with:
  - stateful graphs
  - tool calling
  - Python async or sync execution
Integration Steps
1. Install the packages and enable tracing
LangSmith tracing is mostly environment-driven. Set it before you run your graph so every node execution is captured.
```python
import os

os.environ["LANGSMITH_API_KEY"] = "lsv2-your-key"
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "fintech-agent-dev"

# Optional but useful when debugging locally
os.environ["LANGCHAIN_VERBOSE"] = "true"
```
Then install the dependencies:
```bash
pip install langgraph langchain-core langsmith langchain-openai
```

Note that `langchain-openai` (which pulls in the `openai` SDK) is required for the `ChatOpenAI` import used later in this guide.
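Before the first run, it can be worth a quick sanity check that the tracing variables are actually visible to the Python process. This is just a convenience sketch; the fallback values below are this guide's placeholders, not required names:

```python
import os

# Sanity check: confirm tracing variables are set before any graph runs.
# setdefault keeps any values already exported in your shell.
required = {
    "LANGSMITH_API_KEY": "lsv2-your-key",
    "LANGSMITH_TRACING": "true",
    "LANGSMITH_PROJECT": "fintech-agent-dev",
}
for key, placeholder in required.items():
    value = os.environ.setdefault(key, placeholder)
    print(f"{key}={value}")
```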
2. Define a graph state for a fintech workflow
For startup fintech use cases, keep state explicit. That makes retries, audits, and traces much easier to reason about.
```python
from typing import TypedDict, Annotated
from operator import add

class FintechState(TypedDict):
    customer_id: str
    transaction_id: str
    amount: float
    risk_score: float
    decision: str
    notes: Annotated[list[str], add]
```
This state will move through fraud scoring, policy checks, and final approval.
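The `Annotated[list[str], add]` annotation tells LangGraph to merge each node's returned `notes` into the existing list rather than overwrite it. Conceptually, the reducer is just `operator.add`, which for lists is concatenation:

```python
from operator import add

# LangGraph applies the channel's reducer to combine the current value
# with each node's partial update. For Annotated[list[str], add],
# that is plain list concatenation:
existing = ["Risk scored at 0.9 for amount 25000"]
update = ["Decision set to manual_review"]

merged = add(existing, update)
print(merged)
```

This is why the nodes below can each return a one-element `notes` list and the final state still contains the full audit trail.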
3. Build the LangGraph workflow with traced nodes
LangGraph nodes are just Python callables. When tracing is enabled, LangSmith captures each node run automatically as long as you invoke the graph in a traced environment.
```python
from langgraph.graph import StateGraph, END

def score_risk(state: FintechState):
    amount = state["amount"]
    risk_score = 0.9 if amount > 10000 else 0.2
    return {
        "risk_score": risk_score,
        "notes": [f"Risk scored at {risk_score} for amount {amount}"]
    }

def make_decision(state: FintechState):
    if state["risk_score"] >= 0.8:
        decision = "manual_review"
    else:
        decision = "approve"
    return {
        "decision": decision,
        "notes": [f"Decision set to {decision}"]
    }

graph = StateGraph(FintechState)
graph.add_node("score_risk", score_risk)
graph.add_node("make_decision", make_decision)
graph.set_entry_point("score_risk")
graph.add_edge("score_risk", "make_decision")
graph.add_edge("make_decision", END)
app = graph.compile()
4. Add a LangSmith-traced LLM call inside a node
If your fintech agent uses an LLM for explanation generation or policy summarization, wrap that call with LangChain/LangSmith-compatible primitives. The trace will show the model call inside the graph run.
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a fintech risk analyst."),
    ("human", "Explain why transaction {transaction_id} was flagged with risk score {risk_score}.")
])

def explain_decision(state: FintechState):
    chain = prompt | llm
    response = chain.invoke({
        "transaction_id": state["transaction_id"],
        "risk_score": state["risk_score"]
    })
    return {
        "notes": [response.content]
    }

graph2 = StateGraph(FintechState)
graph2.add_node("score_risk", score_risk)
graph2.add_node("explain_decision", explain_decision)
graph2.add_node("make_decision", make_decision)
graph2.set_entry_point("score_risk")
graph2.add_edge("score_risk", "explain_decision")
graph2.add_edge("explain_decision", "make_decision")
graph2.add_edge("make_decision", END)
app2 = graph2.compile()
```
5. Run the graph with a LangSmith project attached
Invoke the compiled app normally. With tracing enabled, LangSmith records the full execution tree under your project name.
```python
result = app2.invoke({
    "customer_id": "cust_123",
    "transaction_id": "txn_987",
    "amount": 25000,
    "risk_score": 0.0,
    "decision": "",
    "notes": []
})
print(result)
```
If you want explicit run metadata in LangSmith, pass tags and metadata through config:
```python
result = app2.invoke(
    {
        "customer_id": "cust_123",
        "transaction_id": "txn_987",
        "amount": 25000,
        "risk_score": 0.0,
        "decision": "",
        "notes": []
    },
    config={
        "tags": ["fintech", "fraud-triage"],
        "metadata": {
            "tenant": "startup-alpha",
            "workflow": "payment_review"
        }
    }
)
```
Testing the Integration
Use a small transaction first and confirm both the graph output and the trace in LangSmith.
```python
test_result = app2.invoke({
    "customer_id": "cust_test",
    "transaction_id": "txn_test_001",
    "amount": 15000,
    "risk_score": 0.0,
    "decision": "",
    "notes": []
})

print("Decision:", test_result["decision"])
print("Risk score:", test_result["risk_score"])
print("Notes:", test_result["notes"])
```
Expected output (the second note is generated by the LLM, so its exact wording will vary between runs):

```text
Decision: manual_review
Risk score: 0.9
Notes: ['Risk scored at 0.9 for amount 15000', '<model-generated explanation of the flag>', 'Decision set to manual_review']
```
In LangSmith, you should see:
- one top-level trace for the graph invocation
- child runs for each node
- the LLM call nested under its node
- tags and metadata attached to the run
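Because the nodes are plain callables, the risk threshold itself can be unit-tested without compiling the graph or touching the network. The functions below mirror `score_risk` and `make_decision` from earlier so the snippet stands alone:

```python
def score_risk(state: dict) -> dict:
    amount = state["amount"]
    risk_score = 0.9 if amount > 10000 else 0.2
    return {"risk_score": risk_score, "notes": [f"Risk scored at {risk_score} for amount {amount}"]}

def make_decision(state: dict) -> dict:
    decision = "manual_review" if state["risk_score"] >= 0.8 else "approve"
    return {"decision": decision, "notes": [f"Decision set to {decision}"]}

# Boundary checks: amounts at or below 10000 are low risk and auto-approved.
assert score_risk({"amount": 10000})["risk_score"] == 0.2
assert score_risk({"amount": 10001})["risk_score"] == 0.9
assert make_decision({"risk_score": 0.2})["decision"] == "approve"
assert make_decision({"risk_score": 0.9})["decision"] == "manual_review"
print("decision boundary tests passed")
```

Fast tests like these catch threshold regressions locally; LangSmith evals then cover the LLM-dependent parts of the workflow.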
Real-World Use Cases
- Fraud triage agents that route suspicious payments to manual review while logging every decision path for audit.
- KYC onboarding assistants that collect missing documents, summarize risk signals, and trace every branch of the workflow.
- Claims or disputes agents that classify cases, draft responses, and keep an inspection trail for compliance teams.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit