How to Integrate LangGraph for Fintech with LangSmith for Production AI
Combining LangGraph for fintech with LangSmith gives you the piece most teams miss: deterministic agent orchestration plus production-grade observability. In practice, that means you can build regulated workflows like KYC review, fraud triage, claims intake, or loan pre-screening, then trace every decision, prompt, tool call, and failure in LangSmith.
Prerequisites
- Python 3.10+
- A LangGraph project installed: `langgraph`, `langchain-core`, and any model provider package you use, like `langchain-openai`
- A LangSmith account and project created
- A LangSmith API key exported as an environment variable
- Basic familiarity with `StateGraph` (nodes, edges, and conditional routing) and `@traceable` or LangChain/LangGraph tracing hooks

Install the packages:

```shell
pip install langgraph langchain-core langchain-openai langsmith
```

Set your environment variables:

```shell
export LANGSMITH_API_KEY="lsv2_..."
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="fintech-agent-prod"
export OPENAI_API_KEY="sk-..."
```
Integration Steps
**Step 1: Create a LangGraph workflow for your fintech use case**
Start with a simple graph that routes a customer request through risk checks before producing a decision. The important part is that each node is explicit, because that makes tracing and debugging useful later.
```python
from typing import TypedDict

from langgraph.graph import StateGraph, END


class FintechState(TypedDict):
    customer_id: str
    amount: float
    risk_score: int
    decision: str


def assess_risk(state: FintechState) -> FintechState:
    # Simple threshold rule: large amounts are flagged as high risk.
    amount = state["amount"]
    state["risk_score"] = 90 if amount > 10000 else 20
    return state


def approve_or_reject(state: FintechState) -> FintechState:
    state["decision"] = "reject" if state["risk_score"] > 70 else "approve"
    return state


graph = StateGraph(FintechState)
graph.add_node("assess_risk", assess_risk)
graph.add_node("approve_or_reject", approve_or_reject)
graph.set_entry_point("assess_risk")
graph.add_edge("assess_risk", "approve_or_reject")
graph.add_edge("approve_or_reject", END)
app = graph.compile()
```
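Before adding tracing, you can sanity-check the node logic directly, since each node is a plain function over the state dict. A minimal sketch (the two node functions are repeated here so the snippet runs standalone, without LangGraph):

```python
def assess_risk(state: dict) -> dict:
    # Same threshold rule as the graph node above.
    state["risk_score"] = 90 if state["amount"] > 10000 else 20
    return state

def approve_or_reject(state: dict) -> dict:
    state["decision"] = "reject" if state["risk_score"] > 70 else "approve"
    return state

# Run the nodes in the same order the graph edges define.
state = {"customer_id": "cust_123", "amount": 15000.0, "risk_score": 0, "decision": ""}
for node in (assess_risk, approve_or_reject):
    state = node(state)

print(state["risk_score"], state["decision"])  # 90 reject
```

This mirrors what `app.invoke` does along the `assess_risk -> approve_or_reject` path, which is why explicit nodes pay off later: each one is unit-testable on its own.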
**Step 2: Enable LangSmith tracing globally**
LangGraph will emit traces when LangSmith tracing is enabled through environment variables. For production systems, this is the cleanest path because you do not want tracing logic scattered through business code.
```python
import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "fintech-agent-prod"
os.environ["LANGSMITH_API_KEY"] = "lsv2_your_key_here"
```
If you want explicit control in code, wrap the graph execution with LangSmith’s tracing context:
```python
from langsmith import traceable


@traceable(name="fintech_workflow")
def run_workflow(customer_id: str, amount: float):
    return app.invoke({
        "customer_id": customer_id,
        "amount": amount,
        "risk_score": 0,
        "decision": ""
    })
```
**Step 3: Attach an LLM node and keep it traced end-to-end**
Most real fintech flows use an LLM for classification, explanation generation, or document extraction. Use a chat model inside a LangGraph node so both the graph step and model call show up in LangSmith.
```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def explain_decision(state: FintechState) -> FintechState:
    prompt = [
        SystemMessage(content="You are a compliance assistant for fintech operations."),
        HumanMessage(content=f"Explain why this application was {state['decision']} "
                             f"with risk score {state['risk_score']}.")
    ]
    response = llm.invoke(prompt)
    state["decision"] = f"{state['decision']} | {response.content}"
    return state


graph = StateGraph(FintechState)
graph.add_node("assess_risk", assess_risk)
graph.add_node("approve_or_reject", approve_or_reject)
graph.add_node("explain_decision", explain_decision)
graph.set_entry_point("assess_risk")
graph.add_edge("assess_risk", "approve_or_reject")
graph.add_edge("approve_or_reject", "explain_decision")
graph.add_edge("explain_decision", END)
app = graph.compile()
```
**Step 4: Pass metadata for tenant, case ID, and compliance context**
This is where production systems differ from demos. Add metadata so every trace can be filtered by tenant, product line, case ID, or regulatory workflow.
```python
result = app.invoke(
    {
        "customer_id": "cust_123",
        "amount": 15000,
        "risk_score": 0,
        "decision": ""
    },
    config={
        "metadata": {
            "tenant": "banking-eu",
            "case_id": "kyc-88921",
            "workflow": "loan_pre_screen"
        },
        "tags": ["fintech", "production", "kyc"]
    }
)
print(result)
```
LangSmith will store these fields on the run. That makes it much easier to debug why one customer got rejected while another passed under the same rules.
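If many call sites invoke the graph, it helps to build this config in one place so tenant and case metadata stay consistent across traces. A small sketch (`build_run_config` is a hypothetical helper for this article, not a LangSmith API):

```python
def build_run_config(tenant: str, case_id: str, workflow: str,
                     extra_tags: tuple = ()) -> dict:
    """Hypothetical helper: one place to standardize trace metadata and tags."""
    return {
        "metadata": {"tenant": tenant, "case_id": case_id, "workflow": workflow},
        "tags": ["fintech", "production", workflow, *extra_tags],
    }

config = build_run_config("banking-eu", "kyc-88921", "loan_pre_screen", ("kyc",))
print(config["tags"])  # ['fintech', 'production', 'loan_pre_screen', 'kyc']
```

The resulting dict can be passed directly as `config=` to `app.invoke`, so every run in a workflow carries the same filterable fields.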
**Step 5: Add custom spans for business-critical events**
For fintech agents, the graph trace alone is not enough. You usually want to mark events like “sanctions check started,” “manual review required,” or “policy override applied.”
```python
from langsmith import traceable


@traceable(name="sanctions_screening")
def sanctions_check(customer_id: str) -> bool:
    # Replace with a real screening API call.
    return customer_id != "blocked_user"


def route_with_compliance(state: FintechState) -> FintechState:
    passed = sanctions_check(state["customer_id"])
    if not passed:
        state["decision"] = "reject | sanctions hit"
    return state
```
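To actually short-circuit the graph on a sanctions hit, you would pair a node like this with conditional routing. Here is a sketch of the routing callback that LangGraph's `add_conditional_edges` would call after the compliance node; the branch names (`"reject_end"`, `"assess_risk"`) are illustrative, not from the original graph:

```python
def compliance_router(state: dict) -> str:
    # Conditional-edge callback: inspect state and return the next node's name.
    if "sanctions hit" in state.get("decision", ""):
        return "reject_end"   # illustrative terminal branch for sanctions hits
    return "assess_risk"      # otherwise continue the normal risk flow

print(compliance_router({"decision": "reject | sanctions hit"}))  # reject_end
print(compliance_router({"decision": ""}))                        # assess_risk
```

Because `sanctions_check` is decorated with `@traceable`, the screening call shows up as its own span inside the graph run either way.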
Testing the Integration
Run the workflow and confirm two things:
- the Python output is correct
- the run appears in your LangSmith project with nested spans for the graph and model calls
```python
if __name__ == "__main__":
    output = app.invoke(
        {
            "customer_id": "cust_123",
            "amount": 15000,
            "risk_score": 0,
            "decision": ""
        },
        config={
            "metadata": {"case_id": "test-001"},
            "tags": ["smoke-test"]
        }
    )
    print(output)
```
Expected output:
```python
{
    'customer_id': 'cust_123',
    'amount': 15000,
    'risk_score': 90,
    'decision': 'reject | ...'
}
```
In LangSmith, you should see:
- one top-level run for the workflow
- child runs for each LangGraph node
- LLM spans for `ChatOpenAI.invoke`
- metadata like `case_id=test-001` and the tag `smoke-test`
Real-World Use Cases
- **Fraud triage**: Route transactions through risk scoring, transaction enrichment, and LLM-generated analyst summaries. Use LangSmith traces to inspect false positives and tune prompts or thresholds.
- **KYC / KYB document review**: Extract entity details from uploaded documents. Trace extraction quality per document type and compare prompt versions in LangSmith.
- **Claims or loan decisioning**: Build a multi-step agent that checks policy rules, validates evidence, drafts explanations, and escalates edge cases. Use traces to prove why a decision was made when audit teams ask later.
The pattern is simple: keep business logic in LangGraph nodes and keep observability in LangSmith. That gives you a production AI system you can debug under pressure instead of guessing from logs after the fact.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit