# How to Integrate LangGraph for Lending with LangSmith for AI Agents
Combining LangGraph for lending with LangSmith gives you a clean way to build loan decisioning agents that are both stateful and observable. You get the graph-based control flow you need for underwriting, document checks, and exception handling, plus trace-level visibility into every model call, tool call, and branch.
For lending workflows, that matters because auditability is not optional. You need to know why an application was routed to manual review, which retrieval step failed, and what the agent saw before it made a recommendation.
## Prerequisites
- Python 3.10+
- A LangGraph lending project set up with your graph definition
- A LangSmith account and API key
- Installed packages: `langgraph`, `langchain`, `langsmith`, and your model provider SDK, such as `openai` or `anthropic`
- Environment variables configured:
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - `LANGSMITH_PROJECT=loan-agent-prod`
- A lending workflow already defined with nodes like:
  - intake
  - document extraction
  - affordability check
  - risk scoring
  - decision
## Integration Steps
### 1. Install the dependencies

Start by installing the core packages and your LLM provider. Note that the OpenAI chat model used below lives in the `langchain-openai` package, so install it alongside `langchain`.

```shell
pip install langgraph langchain langchain-openai langsmith openai
```

Set your environment variables before running the app.

```shell
export LANGSMITH_API_KEY="lsv2_..."
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="loan-agent-prod"
```
### 2. Create a traced LLM client

LangSmith traces LangChain and LangGraph runs automatically when tracing is enabled. If you use a chat model inside graph nodes, construct it as usual and LangSmith will capture the calls.

```python
import os

from langchain_openai import ChatOpenAI

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "loan-agent-prod"

llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,
    api_key=os.environ["OPENAI_API_KEY"],
)
```

If you want explicit run metadata for lending audits, pass tags and metadata into your node calls later.
### 3. Build a LangGraph lending workflow with traceable nodes

Define your state and graph. Each node can call the LLM or plain business logic, and LangSmith will trace the execution path.

```python
from typing import List, TypedDict

from langgraph.graph import END, START, StateGraph


class LoanState(TypedDict):
    applicant_name: str
    income: float
    debt: float
    documents: List[str]
    risk_score: float
    decision: str


def intake_node(state: LoanState):
    return state


def risk_node(state: LoanState):
    dti = state["debt"] / state["income"]
    score = max(0.0, 1.0 - dti)
    return {"risk_score": score}


def decision_node(state: LoanState):
    if state["risk_score"] >= 0.7:
        return {"decision": "approve"}
    if state["risk_score"] >= 0.5:
        return {"decision": "manual_review"}
    return {"decision": "decline"}


builder = StateGraph(LoanState)
builder.add_node("intake", intake_node)
builder.add_node("risk", risk_node)
builder.add_node("decision", decision_node)
builder.add_edge(START, "intake")
builder.add_edge("intake", "risk")
builder.add_edge("risk", "decision")
builder.add_edge("decision", END)

graph = builder.compile()
```
### 4. Invoke the graph with LangSmith tracing metadata

This is where the integration becomes useful in production. You run the lending workflow as usual, and every execution now appears in LangSmith with your tags and metadata attached.

```python
result = graph.invoke(
    {
        "applicant_name": "Amina Yusuf",
        "income": 85000,
        "debt": 22000,
        "documents": ["id.pdf", "payslip.pdf"],
        "risk_score": 0.0,
        "decision": "",
    },
    config={
        "tags": ["lending", "pre_approval"],
        "metadata": {
            "customer_segment": "retail",
            "product": "personal_loan",
            "region": "ke",
        },
    },
)
print(result)
```
### 5. Add explicit LangSmith runs for custom business steps

For non-LLM steps like policy checks or document validation, use the `trace` context manager from the LangSmith SDK so those operations show up in traces too. (Note: `trace` is a top-level import from `langsmith`, not a method on `Client`.)

```python
from langsmith import trace

with trace(
    name="loan_policy_check",
    run_type="tool",
    inputs={"income": 85000, "debt": 22000},
) as run:
    dti = 22000 / 85000
    approved_for_review = dti < 0.45
    run.end(outputs={"dti": dti, "approved_for_review": approved_for_review})
## Testing the Integration
Run a simple end-to-end invocation and confirm you get both a final loan decision and a trace in LangSmith.
```python
test_input = {
    "applicant_name": "John Mwangi",
    "income": 120000,
    "debt": 30000,
    "documents": ["national_id.pdf", "bank_statement.pdf"],
    "risk_score": 0.0,
    "decision": "",
}

output = graph.invoke(
    test_input,
    config={
        "tags": ["integration-test"],
        "metadata": {"test_case_id": "loan_graph_001"},
    },
)

print(output["decision"])
print(output["risk_score"])
```
Expected output:

```
approve
0.75
```
In LangSmith, you should see:

- one top-level graph run
- child runs for each node
- metadata for product, region, and test case
- timing for each step
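Alongside the end-to-end run, the pure node functions can be unit-tested offline with no network calls and no tracing. A minimal pytest-style sketch (same functions and thresholds as the graph defined earlier; the direct calls at the end just make it runnable as a script):

```python
def risk_node(state: dict) -> dict:
    dti = state["debt"] / state["income"]
    return {"risk_score": max(0.0, 1.0 - dti)}


def decision_node(state: dict) -> dict:
    if state["risk_score"] >= 0.7:
        return {"decision": "approve"}
    if state["risk_score"] >= 0.5:
        return {"decision": "manual_review"}
    return {"decision": "decline"}


def test_high_income_low_debt_approves():
    # DTI = 30000 / 120000 = 0.25, so score = 0.75 and the loan is approved.
    score = risk_node({"income": 120000, "debt": 30000})["risk_score"]
    assert score == 0.75
    assert decision_node({"risk_score": score})["decision"] == "approve"


def test_borderline_score_goes_to_manual_review():
    assert decision_node({"risk_score": 0.55})["decision"] == "manual_review"


test_high_income_low_debt_approves()
test_borderline_score_goes_to_manual_review()
```

Keeping the policy math in pure functions like this is what makes both the unit tests and the LangSmith traces easy to reason about.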
## Real-World Use Cases
- **Loan pre-qualification agents.** Collect applicant data, validate documents, calculate affordability, and route borderline cases to manual review.
- **Credit policy enforcement.** Encode underwriting rules in graph nodes while keeping every exception path visible in LangSmith traces.
- **Agent debugging for regulated workflows.** Inspect why an agent declined an application by reviewing node-level inputs, outputs, and branching decisions in LangSmith.
If you are building lending agents that need both control flow and observability, this pairing is practical. Use LangGraph to make the workflow deterministic where it matters, then use LangSmith to prove what happened at every step.
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit