How to Integrate LangGraph with LangSmith for Multi-Agent Lending Systems
Integrating LangGraph for lending with LangSmith gives you a clean way to build regulated, multi-agent lending workflows and actually observe what those agents are doing. The practical win is simple: you can orchestrate loan intake, credit checks, policy review, and exception handling in LangGraph, then trace every step, prompt, tool call, and failure in LangSmith.
For lending systems, that matters because the workflow is rarely linear. You need multiple agents with different responsibilities, deterministic handoffs, and enough observability to debug why a loan was approved, rejected, or escalated.
Prerequisites
- Python 3.10+
- `langgraph`
- `langchain`
- `langsmith`
- An API key for your model provider
- A LangSmith account and project created
- Environment variables configured:
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - `LANGSMITH_PROJECT=<your-project-name>`
  - `OPENAI_API_KEY` (or your preferred model key)

Install the packages:

```shell
pip install langgraph langchain langsmith langchain-openai
```
Integration Steps
**1. Set up LangSmith tracing before building the graph**

LangSmith works best when tracing is enabled at process startup. That way every node execution in your lending workflow gets captured without extra plumbing.

```python
import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "lending-multi-agent"
# LANGSMITH_API_KEY should already be exported in your shell;
# avoid hardcoding secrets in source files.
```
**2. Create the lending agents as LangGraph nodes**

In a lending flow, keep each responsibility isolated. A common pattern is one node for intake normalization, one for affordability assessment, and one for policy/compliance review that produces the final decision.

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

class LendingState(TypedDict):
    applicant_name: str
    income: float
    requested_amount: float
    debt_to_income: float
    risk_flag: str
    decision: str

def normalize_input(state: LendingState) -> LendingState:
    # Placeholder: normalize currencies, field names, and formats here.
    return state

def assess_affordability(state: LendingState) -> LendingState:
    dti = state["requested_amount"] / max(state["income"], 1)
    state["debt_to_income"] = round(dti, 2)
    state["risk_flag"] = "high" if dti > 0.4 else "medium"
    return state

def compliance_review(state: LendingState) -> LendingState:
    prompt = f"""
    Review this lending case for policy concerns:
    applicant={state['applicant_name']}
    income={state['income']}
    requested_amount={state['requested_amount']}
    dti={state['debt_to_income']}
    risk_flag={state['risk_flag']}
    Return only APPROVE or ESCALATE.
    """
    result = llm.invoke(prompt).content.strip().upper()
    # Fail closed: anything other than a clean verdict gets escalated.
    state["decision"] = result if result in {"APPROVE", "ESCALATE"} else "ESCALATE"
    return state
```
**3. Wire the agents into a LangGraph workflow**

This is where multi-agent orchestration becomes explicit. You define the path through the graph instead of burying logic inside one large prompt.

```python
workflow = StateGraph(LendingState)
workflow.add_node("normalize_input", normalize_input)
workflow.add_node("assess_affordability", assess_affordability)
workflow.add_node("compliance_review", compliance_review)

workflow.set_entry_point("normalize_input")
workflow.add_edge("normalize_input", "assess_affordability")
workflow.add_edge("assess_affordability", "compliance_review")
workflow.add_edge("compliance_review", END)

app = workflow.compile()
```
**4. Run the graph with tracing enabled in LangSmith**

Once tracing is on, every invocation shows up in LangSmith as a run tree. That gives you node-level visibility into how the lending decision was produced.

```python
input_state = {
    "applicant_name": "Amina Ndlovu",
    "income": 5000.0,
    "requested_amount": 1800.0,
    "debt_to_income": 0.0,
    "risk_flag": "",
    "decision": "",
}

result = app.invoke(input_state)
print(result)
```
**5. Add custom LangSmith metadata for auditability**

For regulated lending flows, trace metadata is not optional. Tagging runs with case IDs, product type, or branch code makes investigation much easier when compliance asks why a specific application was escalated.

```python
import uuid

from langsmith import Client

client = Client()

# Supply the run id yourself: recent SDK versions queue run creation
# and may not return the created run object.
run_id = uuid.uuid4()
client.create_run(
    id=run_id,
    name="lending-case-review",
    run_type="chain",
    inputs={"case_id": "LN-10422"},
    project_name="lending-multi-agent",
    tags=["lending", "multi-agent", "review"],
)
print(run_id)
```
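When the metadata belongs to a specific graph invocation rather than a manually created run, there is a simpler route: compiled LangGraph apps are LangChain runnables, so `invoke` accepts `tags` and `metadata` in its `config` argument and LangSmith attaches them to the resulting trace. A minimal sketch (the `case_id` and `product` fields are illustrative):

```python
case_id = "LN-10422"  # hypothetical case identifier

config = {
    "tags": ["lending", f"case:{case_id}"],
    "metadata": {"case_id": case_id, "product": "personal-loan"},
}

# Attach the audit context directly to the traced run:
# result = app.invoke(input_state, config=config)
print(config["tags"])
```

This keeps the audit context and the run itself in one trace instead of two parallel records.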
Testing the Integration
Use a small deterministic test case and verify both the output and the trace capture.

```python
test_case = {
    "applicant_name": "John Mensah",
    "income": 10000.0,
    "requested_amount": 2500.0,
    "debt_to_income": 0.0,
    "risk_flag": "",
    "decision": "",
}

result = app.invoke(test_case)
print("Decision:", result["decision"])
print("DTI:", result["debt_to_income"])
print("Risk:", result["risk_flag"])
```
Expected output (the decision line comes from the model, so confirm it in the trace rather than treating it as deterministic):

```
Decision: APPROVE
DTI: 0.25
Risk: medium
```
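The affordability node is pure Python, so it can also be checked without the model at all. A minimal standalone check (the function is reproduced here so the snippet runs on its own):

```python
def assess_affordability(state: dict) -> dict:
    # Same logic as the graph node: DTI ratio plus a simple risk band.
    dti = state["requested_amount"] / max(state["income"], 1)
    state["debt_to_income"] = round(dti, 2)
    state["risk_flag"] = "high" if dti > 0.4 else "medium"
    return state

out = assess_affordability({"income": 10000.0, "requested_amount": 2500.0})
assert out["debt_to_income"] == 0.25
assert out["risk_flag"] == "medium"
print("affordability checks passed")
```

Keeping deterministic nodes testable in isolation means a failing end-to-end run can quickly be narrowed down to the model call.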
In LangSmith, you should see a trace containing:

- The graph invocation
- Each node execution in order
- The model call inside `compliance_review`
- Input/output payloads for debugging

If the trace does not appear:

- Confirm `LANGSMITH_TRACING=true`
- Confirm `LANGSMITH_API_KEY` is set correctly
- Confirm your project name matches `LANGSMITH_PROJECT`
- Check that your LLM provider key is valid
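A quick way to rule out the first three causes is to check the environment directly. A small helper sketch (not part of the LangSmith SDK):

```python
import os

REQUIRED_VARS = ("LANGSMITH_TRACING", "LANGSMITH_API_KEY", "LANGSMITH_PROJECT")

def missing_langsmith_vars(env=os.environ):
    """Return the required LangSmith variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

print(missing_langsmith_vars() or "all LangSmith variables set")
```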
Real-World Use Cases
**Loan origination workflows**

- Use separate agents for identity validation, income verification, affordability scoring, and policy review.
- Trace every step so credit ops can audit decisions later.

**Exception handling for borderline applications**

- Route high-risk cases to an escalation agent that asks for additional documents or hands off to a human underwriter.
- Use LangSmith traces to inspect where the workflow branched.

**Portfolio monitoring assistants**

- Run multi-agent checks across active loans to detect payment risk signals, covenant breaches, or refinance opportunities.
- Keep observability on each agent's reasoning path so false positives are easier to diagnose.
The pattern here is straightforward: use LangGraph to control how lending agents collaborate, and use LangSmith to prove what happened at runtime. That combination gives you orchestration plus auditability, which is what production lending systems need.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, with a PDF checklist and starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.