# How to Integrate LangGraph for lending with LangSmith for production AI
Integrating LangGraph for lending with LangSmith gives you a production-grade way to build, trace, and debug lending agents that make repeatable decisions. You get workflow control for loan intake, eligibility checks, document review, and exception handling, plus observability for every node execution, prompt, tool call, and failure path.
For lending systems, that matters because you need auditability. If an application is declined or routed for manual review, you need to know exactly which step made that decision and why.
## Prerequisites

- Python 3.10+
- `langgraph`, `langchain`, `langsmith`
- An OpenAI-compatible model key or another chat model provider
- A LangSmith account and API key
- Environment variables configured:
  - `LANGCHAIN_TRACING_V2=true`
  - `LANGCHAIN_API_KEY=...`
  - `LANGCHAIN_PROJECT=...`
  - `OPENAI_API_KEY=...`

Install the packages:

```shell
pip install langgraph langchain langsmith langchain-openai
```
## Integration Steps

### 1. Set up LangSmith tracing first

LangSmith tracing is controlled through environment variables. If you do not set them before running your graph, you will not get traces in the dashboard.

```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "lsv2_********************************"
os.environ["LANGCHAIN_PROJECT"] = "lending-agent-prod"
os.environ["OPENAI_API_KEY"] = "sk-********************************"
```
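Hardcoding keys like this is fine for a demo, but in production you would normally read them from the environment or a secrets manager and fail fast at startup if anything is missing. A minimal sketch (the `check_tracing_env` helper is illustrative, not part of LangSmith):

```python
import os

# Variable names from the setup above; adjust to your deployment.
REQUIRED_VARS = (
    "LANGCHAIN_TRACING_V2",
    "LANGCHAIN_API_KEY",
    "LANGCHAIN_PROJECT",
    "OPENAI_API_KEY",
)


def check_tracing_env() -> list[str]:
    """Return the names of required tracing variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]


# At service startup, fail loudly instead of silently losing traces:
#
#     missing = check_tracing_env()
#     if missing:
#         raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```

Failing at startup is preferable to discovering mid-incident that a misconfigured worker was never sending traces.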
### 2. Build a simple lending graph with LangGraph

This example uses a state machine for basic loan triage: collect applicant data, score the application, then route to approve or review.

```python
from typing import TypedDict, Literal

from langgraph.graph import StateGraph, START, END


class LendingState(TypedDict):
    income: int
    debt: int
    credit_score: int
    decision: str


def score_application(state: LendingState) -> LendingState:
    # Debt-to-income ratio; guard against zero income.
    dti = state["debt"] / max(state["income"], 1)
    if state["credit_score"] >= 700 and dti < 0.35:
        state["decision"] = "approve"
    elif state["credit_score"] >= 640 and dti < 0.5:
        state["decision"] = "manual_review"
    else:
        state["decision"] = "decline"
    return state


def route(state: LendingState) -> Literal["approve", "manual_review", "decline"]:
    return state["decision"]


def approve(state: LendingState) -> LendingState:
    return state


def manual_review(state: LendingState) -> LendingState:
    return state


def decline(state: LendingState) -> LendingState:
    return state


graph = StateGraph(LendingState)
graph.add_node("score_application", score_application)
graph.add_node("approve", approve)
graph.add_node("manual_review", manual_review)
graph.add_node("decline", decline)

graph.add_edge(START, "score_application")
graph.add_conditional_edges("score_application", route)
graph.add_edge("approve", END)
graph.add_edge("manual_review", END)
graph.add_edge("decline", END)

app = graph.compile()
```
### 3. Attach LangSmith tracing to the graph run

LangGraph runs are traced automatically when LangSmith is enabled through environment variables. To make the trace easier to inspect in production, pass metadata and tags at invocation time.

```python
result = app.invoke(
    {
        "income": 120000,
        "debt": 28000,
        "credit_score": 712,
        "decision": "",
    },
    config={
        "tags": ["lending", "production", "triage"],
        "metadata": {
            "customer_type": "retail",
            "region": "us-east-1",
            "workflow": "loan_precheck",
        },
    },
)
print(result)
```
### 4. Add a traced LLM node for explanation generation

In production lending flows, the decision alone is not enough. You usually need a short explanation that can be reviewed by operations or shown to an underwriter. Use a LangChain chat model inside the graph so LangSmith captures prompt/response traces too.

```python
from langchain_core.messages import SystemMessage, HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def explain_decision(state: LendingState) -> dict:
    prompt = [
        SystemMessage(content="You are a lending operations assistant."),
        HumanMessage(
            content=f"""
Applicant profile:
Income: {state['income']}
Debt: {state['debt']}
Credit score: {state['credit_score']}
Decision: {state['decision']}

Write one concise operational explanation.
"""
        ),
    ]
    response = llm.invoke(prompt)
    # "explanation" must be declared on LendingState for this key to
    # be merged into the graph state.
    return {"explanation": response.content}
```

Add an `explanation: str` field to `LendingState`, then wire this node into your graph before END so both the deterministic rule path and the LLM explanation are traced in LangSmith.
### 5. Run with structured callbacks and inspect traces in LangSmith

If you want explicit control over run grouping in larger systems, use `RunnableConfig`-style metadata on calls into your graph or downstream chains. This keeps each loan case searchable in LangSmith by application ID or customer segment.

```python
from langchain_core.runnables import RunnableConfig

config = RunnableConfig(
    tags=["loan-origination"],
    metadata={
        "application_id": "app_10291",
        "product": "personal_loan",
        "channel": "web",
    },
)

result = app.invoke(
    {
        "income": 95000,
        "debt": 18000,
        "credit_score": 685,
        "decision": "",
    },
    config=config,
)
print(result)
```
## Testing the Integration

Use one known-good application and confirm two things:

- The graph returns the expected decision
- A trace appears in your LangSmith project with the same tags and metadata

```python
test_input = {
    "income": 150000,
    "debt": 20000,
    "credit_score": 740,
    "decision": "",
}

output = app.invoke(
    test_input,
    config={
        "tags": ["smoke-test", "lending"],
        "metadata": {"application_id": "test-001"},
    },
)
print(output)
```

Expected output:

```python
{
    'income': 150000,
    'debt': 20000,
    'credit_score': 740,
    'decision': 'approve'
}
```

In LangSmith, you should see:

- One trace for the graph run
- Node-level spans for each step
- Tags like `smoke-test` and `lending`
- Metadata including `application_id=test-001`
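Beyond the single smoke test, the triage rules themselves are plain Python, so you can cover all three branches without invoking the graph, the LLM, or LangSmith at all. A sketch using the same thresholds as `score_application` above:

```python
def score_application(state: dict) -> dict:
    """Same triage rules as the graph node, standalone for unit testing."""
    dti = state["debt"] / max(state["income"], 1)
    if state["credit_score"] >= 700 and dti < 0.35:
        state["decision"] = "approve"
    elif state["credit_score"] >= 640 and dti < 0.5:
        state["decision"] = "manual_review"
    else:
        state["decision"] = "decline"
    return state


cases = [
    # (income, debt, credit_score, expected decision)
    (150000, 20000, 740, "approve"),       # strong score, DTI ~0.13
    (80000, 32000, 660, "manual_review"),  # mid score, DTI 0.40
    (50000, 30000, 600, "decline"),        # low score, DTI 0.60
]

for income, debt, score, expected in cases:
    out = score_application(
        {"income": income, "debt": debt, "credit_score": score, "decision": ""}
    )
    assert out["decision"] == expected, (income, debt, score, out["decision"])
print("all triage branches covered")
```

Cheap branch-level tests like these catch threshold regressions immediately, while the traced smoke test confirms the end-to-end wiring.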
## Real-World Use Cases

- Loan prequalification agents that screen applicants, generate explanations, and escalate borderline cases to human underwriters.
- Document-processing workflows that extract income statements, verify consistency across sources, and log every step for compliance review.
- Collections assistants that decide whether to send reminders, offer restructuring options, or route accounts to agents based on policy rules and model outputs.
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit: architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit