How to Integrate LangGraph with LangSmith for Startup Lending Workflows
Combining LangGraph for lending with LangSmith gives you a practical setup for building loan decisioning agents that are observable, testable, and easier to debug in production. LangGraph handles the stateful workflow for intake, verification, risk checks, and decisioning, while LangSmith gives you traces, evaluations, and prompt-level visibility when something goes wrong.
Prerequisites
- Python 3.10+
- A LangChain/LangGraph-compatible environment
- Access to a lending workflow built with `langgraph`
- A LangSmith account and API key
- Environment variables set:
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - `LANGSMITH_PROJECT=<your-project-name>`
- Installed packages:
  - `langgraph`
  - `langchain-core`
  - `langsmith`
Integration Steps
1. Install the SDKs and configure environment variables

Start by installing the packages and enabling tracing.

```bash
pip install langgraph langchain-core langsmith

export LANGSMITH_API_KEY="lsv2_..."
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="startup-lending-agent"
```

If you deploy in Docker or Kubernetes, set those values in your secret store and runtime environment instead of shell exports.
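Either way, it helps to fail fast when tracing is misconfigured. The snippet below is a small sketch of a startup check, not part of either SDK; the variable names match the exports above.

```python
import os

# Sketch of a fail-fast check at service startup (not part of the SDKs):
# verify the LangSmith settings injected by your platform before serving traffic.
REQUIRED_VARS = ("LANGSMITH_API_KEY", "LANGSMITH_TRACING", "LANGSMITH_PROJECT")

missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing LangSmith configuration: {', '.join(missing)}")
```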
2. Build a lending graph with stateful nodes

In lending flows, you usually want separate nodes for intake, document checks, affordability scoring, and final decisioning. LangGraph is a good fit because it keeps state across steps without forcing you into a single prompt chain.

```python
from typing import TypedDict, Optional

from langgraph.graph import StateGraph, START, END


class LendingState(TypedDict):
    applicant_name: str
    income: float
    debt: float
    credit_score: int
    risk_band: Optional[str]
    decision: Optional[str]


def intake_node(state: LendingState):
    return state


def risk_node(state: LendingState):
    dti = state["debt"] / max(state["income"], 1)
    if state["credit_score"] >= 740 and dti < 0.35:
        band = "low"
    elif state["credit_score"] >= 680 and dti < 0.45:
        band = "medium"
    else:
        band = "high"
    return {"risk_band": band}


def decision_node(state: LendingState):
    if state["risk_band"] == "low":
        return {"decision": "approve"}
    if state["risk_band"] == "medium":
        return {"decision": "manual_review"}
    return {"decision": "decline"}


graph = StateGraph(LendingState)
graph.add_node("intake", intake_node)
graph.add_node("risk", risk_node)
graph.add_node("decision", decision_node)

graph.add_edge(START, "intake")
graph.add_edge("intake", "risk")
graph.add_edge("risk", "decision")
graph.add_edge("decision", END)

app = graph.compile()
```
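The graph above is deliberately linear. If you want the graph itself to branch, for example sending medium-risk applications to a dedicated review node rather than straight to decisioning, LangGraph's conditional edges cover that. The sketch below is illustrative: `manual_review_node` and the routing map are assumptions, not part of the workflow above.

```python
# Illustrative variant of the graph above using conditional edges.
# manual_review_node and the routing map are assumptions for this sketch.
def manual_review_node(state: LendingState):
    return {"decision": "manual_review"}


def route_by_risk(state: LendingState) -> str:
    return state["risk_band"] or "high"


routed = StateGraph(LendingState)
routed.add_node("intake", intake_node)
routed.add_node("risk", risk_node)
routed.add_node("decision", decision_node)
routed.add_node("manual_review", manual_review_node)

routed.add_edge(START, "intake")
routed.add_edge("intake", "risk")
routed.add_conditional_edges(
    "risk",
    route_by_risk,
    {"low": "decision", "high": "decision", "medium": "manual_review"},
)
routed.add_edge("decision", END)
routed.add_edge("manual_review", END)

routed_app = routed.compile()
```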
3. Enable LangSmith tracing for the graph execution

LangSmith works best when tracing is enabled at runtime. For LangGraph-based systems built on top of LangChain primitives, the simplest path is to set the environment variables above and run the compiled graph normally.

```python
import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "startup-lending-agent"
os.environ["LANGSMITH_API_KEY"] = "lsv2_your_key_here"

result = app.invoke(
    {
        "applicant_name": "Maya Patel",
        "income": 120000,
        "debt": 28000,
        "credit_score": 712,
        "risk_band": None,
        "decision": None,
    }
)
print(result)
```
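Each node shows up in the trace automatically because the compiled graph runs on LangChain runnables. Work that happens outside the graph, such as a bureau or document API call, is not traced by default; the `@traceable` decorator from the `langsmith` package can wrap it. The `fetch_credit_report` helper below is hypothetical and only illustrates the pattern.

```python
from langsmith import traceable


# Hypothetical helper, not part of the graph above; shown only to illustrate
# tracing work that happens outside LangGraph nodes.
@traceable(name="fetch_credit_report")
def fetch_credit_report(applicant_name: str) -> dict:
    # Replace with your real credit bureau or document-service call.
    return {"applicant_name": applicant_name, "credit_score": 712}


report = fetch_credit_report("Maya Patel")
```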
4. Add explicit trace metadata for startup-grade observability

For production debugging, add metadata like applicant segment, product type, or experiment ID. This makes it much easier to filter traces in LangSmith when one flow starts underperforming.

```python
result = app.invoke(
    {
        "applicant_name": "Maya Patel",
        "income": 120000,
        "debt": 28000,
        "credit_score": 712,
        "risk_band": None,
        "decision": None,
    },
    config={
        "run_name": "lending-underwrite-v1",
        "tags": ["startup", "personal-loan", "underwriting"],
        "metadata": {
            "product": "personal_loan",
            "region": "us",
            "experiment_id": "exp-042",
        },
    },
)
```
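Tags and metadata also make traces queryable outside the UI. As a rough sketch, assuming the LangSmith run-filter syntax (`has(tags, ...)`) and the project name used above, you could pull recent underwriting runs like this:

```python
from langsmith import Client

client = Client()

# Sketch: list runs tagged "underwriting" in the project configured above.
# Adjust the filter to whatever tags/metadata you attach in invoke(..., config=...).
runs = client.list_runs(
    project_name="startup-lending-agent",
    filter='has(tags, "underwriting")',
)
for run in runs:
    print(run.name, run.run_type, run.start_time)
```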
5. Log custom evaluations to compare policy changes

Once traces are flowing into LangSmith, use evaluations to compare changes in your lending policy logic. That matters when product wants faster approvals but risk wants stricter thresholds.

```python
from langsmith import Client

client = Client()

client.create_dataset(
    dataset_name="lending-policy-cases",
    description="Representative lending applications for regression testing",
)

client.create_example(
    inputs={
        "applicant_name": "Maya Patel",
        "income": 120000,
        "debt": 28000,
        "credit_score": 712,
    },
    outputs={"expected_decision": "manual_review"},
    dataset_name="lending-policy-cases",
)
```
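With the dataset in place, one way to compare a policy change is to replay the dataset through the compiled graph and score whether each decision matches the expected one. This is a minimal sketch assuming a recent langsmith SDK that exposes `evaluate(...)`; the target wrapper and evaluator below are illustrative, not part of the code above.

```python
from langsmith import evaluate


def run_lending_graph(inputs: dict) -> dict:
    # Fill in the state fields the graph expects but the dataset omits.
    state = {**inputs, "risk_band": None, "decision": None}
    return app.invoke(state)


def decision_matches(run, example) -> dict:
    predicted = (run.outputs or {}).get("decision")
    expected = (example.outputs or {}).get("expected_decision")
    return {"key": "decision_match", "score": int(predicted == expected)}


evaluate(
    run_lending_graph,
    data="lending-policy-cases",
    evaluators=[decision_matches],
    experiment_prefix="policy-v2",
)
```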
Testing the Integration
Run a sample application through the compiled graph and confirm that both the workflow result and trace appear in LangSmith.
```python
test_input = {
    "applicant_name": "Jordan Lee",
    "income": 95000,
    "debt": 15000,
    # Credit score high enough for approval given the low DTI: this deterministic
    # case should land in the "low" risk band and return "approve".
    # No external API calls are needed, so it works as a CI smoke test.
    "credit_score": 760,
    "risk_band": None,
    "decision": None,
}

result = app.invoke(
    test_input,
    config={"run_name": "smoke-test-lending-flow"},
)
print(result)
```
Expected output:

```python
{
    'applicant_name': 'Jordan Lee',
    'income': 95000,
    'debt': 15000,
    'credit_score': 760,
    'risk_band': 'low',
    'decision': 'approve'
}
```
In LangSmith, you should see a trace named `smoke-test-lending-flow` with each node execution recorded in order.
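Because the case is deterministic, it also drops straight into CI. Below is a minimal pytest sketch, assuming the compiled graph is importable from your own module (the `lending_graph` module name is illustrative).

```python
# test_lending_flow.py -- pytest sketch; "lending_graph" is an assumed module
# name for wherever you define and compile the graph (app).
from lending_graph import app


def test_low_risk_application_is_approved():
    result = app.invoke(
        {
            "applicant_name": "Jordan Lee",
            "income": 95000,
            "debt": 15000,
            "credit_score": 760,
            "risk_band": None,
            "decision": None,
        },
        config={"run_name": "smoke-test-lending-flow"},
    )
    assert result["risk_band"] == "low"
    assert result["decision"] == "approve"
```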
Real-World Use Cases
- Loan prequalification agent: collects applicant data, runs affordability checks, and returns approve/manual review/decline with full traceability.
- Policy regression testing: uses LangSmith datasets to replay historical applications after changing underwriting rules in LangGraph.
- Human-in-the-loop review: routes medium-risk cases into an ops queue while preserving every step of the decision path for audit and QA.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.