How to Integrate LangGraph for fintech with LangSmith for multi-agent systems
Combining LangGraph for fintech with LangSmith gives you something most agent demos miss: control plus observability. In regulated workflows like loan underwriting, fraud review, or KYC triage, you need multi-agent orchestration that is deterministic enough to trust and traceable enough to debug.
LangGraph handles the stateful agent workflow. LangSmith gives you traces, evaluations, and dataset-backed debugging so you can see exactly why a decision was made and where the graph failed.
Prerequisites
- Python 3.10+
- `langgraph`
- `langchain`
- `langsmith`
- An API key for your model provider, such as OpenAI or Anthropic
- A LangSmith account and project created
- Environment variables configured:
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - `LANGSMITH_PROJECT=<your-project-name>`
  - your model provider key, for example `OPENAI_API_KEY`

Install the packages:

```shell
pip install langgraph langchain langsmith langchain-openai
```
Integration Steps
Step 1: Set up LangSmith tracing before building the graph

LangSmith works best when tracing is enabled from the start. This ensures every node execution, tool call, and LLM response in your LangGraph workflow is captured.

```python
import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "fintech-multi-agent"
os.environ["LANGSMITH_API_KEY"] = "lsv2_..."
os.environ["OPENAI_API_KEY"] = "sk-..."
```
Step 2: Create a LangGraph state model for your fintech workflow

Use a typed state object to keep agent handoffs explicit. For fintech systems, this usually includes customer data, risk score, decision status, and an audit trail.

```python
from typing import TypedDict, Annotated
from operator import add

class FintechState(TypedDict):
    customer_id: str
    application_text: str
    risk_score: float
    decision: str
    notes: Annotated[list[str], add]
```
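The `Annotated[list[str], add]` channel means each node returns only a partial state update, and LangGraph merges `notes` by concatenation instead of overwriting. The merge behaves like plain `operator.add` on lists, which you can see without running a graph at all:

```python
from operator import add

# Each node returns a partial state update; for the `notes` channel the
# `add` reducer concatenates lists rather than replacing the previous value.
existing_notes = ["Risk analysis: income unstable"]
node_update = ["Decision routed to manual review"]

merged = add(existing_notes, node_update)
print(merged)
# ['Risk analysis: income unstable', 'Decision routed to manual review']
```

Channels without a reducer (like `decision`) are simply overwritten by the most recent node's value.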
Step 3: Define multi-agent nodes with LangChain models

Each node is a focused agent step. In this example, one node extracts risk signals and another makes the final decision. Because tracing is enabled, LangSmith captures each invocation automatically.

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def risk_agent(state: FintechState):
    prompt = f"""
    You are a fintech risk analyst. Review this application and return:
    - a numeric risk score between 0 and 1
    - short notes explaining the score

    Application: {state["application_text"]}
    """
    result = llm.invoke(prompt)
    # The score is hardcoded so this demo is reproducible; a production
    # system would parse it from result.content or use structured output.
    return {
        "risk_score": 0.72,
        "notes": [f"Risk analysis: {result.content}"]
    }

def decision_agent(state: FintechState):
    # Deterministic routing rule: scores at or above 0.7 go to a human reviewer.
    if state["risk_score"] >= 0.7:
        return {"decision": "manual_review", "notes": ["Decision routed to manual review"]}
    return {"decision": "approve", "notes": ["Decision approved automatically"]}
```
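The demo hardcodes the risk score for reproducibility. If you want the score to come from the model's reply instead, a defensive parser is one option. This is a sketch under the assumption that the model includes a number between 0 and 1 somewhere in its text; `parse_risk_score` is our own helper, not a LangChain API:

```python
import re

def parse_risk_score(text: str, fallback: float = 0.5) -> float:
    """Return the first number between 0 and 1 found in the model's reply,
    or a conservative fallback when no usable number appears."""
    for match in re.findall(r"\d*\.\d+|\d+", text):
        value = float(match)
        if 0.0 <= value <= 1.0:
            return value
    return fallback

print(parse_risk_score("Risk score: 0.72. Notes: unstable income."))  # 0.72
print(parse_risk_score("No usable number here"))                      # 0.5
```

For anything production-grade, structured output (a JSON schema or tool call) is more robust than regex scraping, since free text can contain other in-range numbers.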
Step 4: Wire the agents into a LangGraph workflow

Build the graph with explicit edges so the execution path is predictable. This matters in fintech because you want reproducible routing for compliance review.

```python
from langgraph.graph import StateGraph, START, END

graph_builder = StateGraph(FintechState)
graph_builder.add_node("risk_agent", risk_agent)
graph_builder.add_node("decision_agent", decision_agent)
graph_builder.add_edge(START, "risk_agent")
graph_builder.add_edge("risk_agent", "decision_agent")
graph_builder.add_edge("decision_agent", END)

app = graph_builder.compile()
```
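The graph above uses fixed edges, but fintech routing often needs branching, which LangGraph supports through `add_conditional_edges`: it takes a callback that inspects the state and returns the name of the next node. Here is a stdlib-only sketch of such a callback; the three agent node names are hypothetical, not part of the graph built above:

```python
def route_by_risk(state: dict) -> str:
    # Returns the name of the next node, exactly as a conditional-edge
    # callback would. Thresholds here are illustrative, not prescriptive.
    if state["risk_score"] >= 0.9:
        return "reject_agent"
    if state["risk_score"] >= 0.7:
        return "manual_review_agent"
    return "approve_agent"

print(route_by_risk({"risk_score": 0.72}))  # manual_review_agent
```

You would attach it with something like `graph_builder.add_conditional_edges("risk_agent", route_by_risk)`, after adding the three target nodes. Keeping the thresholds in one pure function also makes the routing rule easy to unit test and to show to compliance reviewers.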
Step 5: Run the graph with LangSmith tracing attached

When you invoke the compiled app, LangSmith records the full run tree. That gives you per-node latency, outputs, errors, and replayable traces in the UI.

```python
input_state = {
    "customer_id": "cust_123",
    "application_text": "Customer requests a $15k unsecured loan. Income unstable over last 6 months.",
    "risk_score": 0.0,
    "decision": "",
    "notes": []
}

output = app.invoke(input_state)
print(output)
```
Testing the Integration
Use a known input and confirm that both the graph result and LangSmith trace appear as expected.
```python
test_input = {
    "customer_id": "cust_999",
    "application_text": "Applicant has stable salary history and low outstanding debt.",
    "risk_score": 0.0,
    "decision": "",
    "notes": []
}

result = app.invoke(test_input)
print("Decision:", result["decision"])
print("Risk score:", result["risk_score"])
print("Notes:", result["notes"])
```
Expected output (because the demo hardcodes a risk score of 0.72, which is at the 0.7 threshold or above, the run routes to manual review):

```
Decision: manual_review
Risk score: 0.72
Notes: ['Risk analysis: ...', 'Decision routed to manual review']
```
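The routing threshold itself is deterministic, so it can be unit tested without any model calls or tracing. This sketch copies the rule out of `decision_agent` into a standalone helper (`decide` is our own name) so the boundary behavior is pinned down:

```python
# Self-contained copy of the routing rule from decision_agent, so the
# threshold can be tested in isolation from the LLM and the graph.
def decide(risk_score: float) -> str:
    return "manual_review" if risk_score >= 0.7 else "approve"

assert decide(0.72) == "manual_review"
assert decide(0.69) == "approve"
assert decide(0.7) == "manual_review"  # boundary case routes to a human
print("threshold tests passed")
```

Tests like these are cheap insurance in regulated workflows, where the exact boundary of an automated decision is often the thing an auditor asks about.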
In LangSmith, you should see:

- one trace for the full graph run
- child spans for `risk_agent` and `decision_agent`
- inputs and outputs for each node
- latency per step
If you do not see traces:

- verify `LANGSMITH_TRACING=true`
- verify that `LANGSMITH_API_KEY` is valid
- confirm your project name matches what exists in LangSmith
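Beyond checking the dashboard, you can catch these configuration problems before ever invoking the graph. Here is a small stdlib-only sketch; the `preflight` helper is our own, not a LangSmith API:

```python
import os

def preflight(env: dict) -> list[str]:
    """Return a list of tracing-config problems; empty means everything is set."""
    problems = []
    if env.get("LANGSMITH_TRACING", "").lower() != "true":
        problems.append("LANGSMITH_TRACING is not 'true'")
    for name in ("LANGSMITH_API_KEY", "OPENAI_API_KEY"):
        if not env.get(name):
            problems.append(f"{name} is not set")
    return problems

# Run against the real environment with: preflight(dict(os.environ))
print(preflight({
    "LANGSMITH_TRACING": "true",
    "LANGSMITH_API_KEY": "lsv2_...",
    "OPENAI_API_KEY": "sk-...",
}))  # []
```

A missing or misspelled variable fails silently in tracing, so surfacing it as an explicit error at startup saves a confusing empty dashboard later.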
Real-World Use Cases
- Loan origination workflows: one agent extracts financial signals, another scores risk, and a third routes borderline cases to manual review.
- Fraud triage systems: a graph can combine transaction analysis, identity checks, and policy reasoning while LangSmith logs every branch taken.
- KYC/AML review assistants: use multiple agents to summarize documents, detect missing fields, flag sanctions hits, and produce auditable case notes.
The pattern here is simple: use LangGraph for orchestration and state control, then use LangSmith to make every agent decision observable. That combination is what turns multi-agent prototypes into systems you can actually run in production finance environments.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit