How to Integrate LangGraph with LangSmith for RAG in Lending
Combining LangGraph with LangSmith gives you two things most RAG systems in regulated lending need: workflow control and observability. LangGraph handles the loan-specific state machine, branching, and human review steps, while LangSmith gives you trace-level visibility into retrieval quality, prompt behavior, and model outputs.
Prerequisites
- Python 3.10+
- langgraph
- langchain
- langsmith
- An LLM provider key, such as OpenAI or Anthropic
- A LangSmith account and API key
- A vector store or retriever for your lending documents
- A local .env file or secrets manager for credentials
Install the packages:
pip install langgraph langchain langsmith langchain-openai python-dotenv
Set environment variables:
export LANGSMITH_API_KEY="lsv2_..."
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="lending-rag"
export OPENAI_API_KEY="sk-..."
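If you keep credentials in a local .env file instead of exporting them in the shell, load them at startup with python-dotenv (installed above). A minimal sketch:
from dotenv import load_dotenv

# Reads LANGSMITH_API_KEY, LANGSMITH_TRACING, LANGSMITH_PROJECT, and
# OPENAI_API_KEY from a .env file in the working directory.
load_dotenv()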
Integration Steps
1. Create a lending RAG retriever
Start with a retriever that can pull policy docs, underwriting rules, product terms, and compliance notes. In lending, retrieval quality matters more than fancy prompting because wrong context becomes a business risk.
from langchain_openai import OpenAIEmbeddings
from langchain_core.documents import Document
from langchain_community.vectorstores import FAISS
docs = [
    Document(page_content="Debt-to-income ratio must be below 43% for standard personal loans."),
    Document(page_content="Manual review is required if credit score is below 620."),
    Document(page_content="Income verification is mandatory for loan amounts above $50,000."),
]
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
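Before wiring the graph, it is worth confirming the retriever surfaces the right policy chunk. A quick check against the sample documents above:
# The income-verification policy should come back as the top hit.
hits = retriever.invoke("income verification threshold")
for doc in hits:
    print(doc.page_content)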
2. Build a LangGraph workflow for the lending agent
Use LangGraph to define the application flow: retrieve policy context, generate an answer, then route to review if confidence is low or the case is risky. This is where LangGraph fits better than a plain chain; a routing sketch follows the base graph below.
from typing import TypedDict, List
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
class LendingState(TypedDict):
    question: str
    context: List[str]
    answer: str

def retrieve(state: LendingState):
    docs = retriever.invoke(state["question"])
    return {"context": [d.page_content for d in docs]}

def generate(state: LendingState):
    prompt = (
        "Answer using only the retrieved lending policy context.\n\n"
        f"Context:\n{chr(10).join(state['context'])}\n\n"
        f"Question: {state['question']}"
    )
    response = llm.invoke([HumanMessage(content=prompt)])
    return {"answer": response.content}
graph = StateGraph(LendingState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.add_edge(START, "retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
app = graph.compile()
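Here is what the review branch mentioned above could look like with add_conditional_edges. The risk check and the needs_review node are illustrative placeholders, not part of the base example; swap in whatever risk signal your underwriting logic actually produces.
from typing import Literal

def route_after_generate(state: LendingState) -> Literal["needs_review", "done"]:
    # Placeholder risk rule: flag questions about sub-620 scores or missing
    # income verification for manual review.
    risky = any(term in state["question"].lower() for term in ("610", "no income"))
    return "needs_review" if risky else "done"

def needs_review(state: LendingState):
    # In production this node could pause the graph and enqueue the case
    # for a human underwriter.
    return {"answer": state["answer"] + "\n[Routed to manual review]"}

graph = StateGraph(LendingState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.add_node("needs_review", needs_review)
graph.add_edge(START, "retrieve")
graph.add_edge("retrieve", "generate")
graph.add_conditional_edges(
    "generate",
    route_after_generate,
    {"needs_review": "needs_review", "done": END},
)
graph.add_edge("needs_review", END)
app = graph.compile()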
3. Enable LangSmith tracing for the graph run
LangSmith picks up traces when tracing is enabled through environment variables. If you want explicit project control inside code, use langsmith.Client and ensure your runs land in the right workspace.
import os
from langsmith import Client
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "lending-rag"
client = Client()
result = app.invoke(
    {"question": "Can I approve a $60k loan with no income verification?"},
    config={"run_name": "lending-rag-check"},
)
print(result["answer"])
4. Add metadata so you can debug lending decisions in LangSmith
In production, trace names alone are not enough. Attach metadata like product type, risk tier, channel, and decision stage so you can filter traces later when compliance asks why a case was routed to manual review.
result = app.invoke(
    {"question": "Does this applicant qualify with a 610 credit score?"},
    config={
        "run_name": "credit-policy-rag",
        "tags": ["lending", "rag", "underwriting"],
        "metadata": {
            "product": "personal_loan",
            "risk_tier": "standard",
            "region": "us",
        },
    },
)
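When compliance later asks for every underwriting trace, tagged runs can be pulled back out with the client from step 3. The filter string below follows LangSmith's documented query syntax; adjust the tag and project to your setup:
# Fetch recent runs tagged for underwriting in the lending-rag project.
runs = client.list_runs(
    project_name="lending-rag",
    filter='has(tags, "underwriting")',
)
for run in runs:
    print(run.name, run.start_time)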
5. Add evaluation-ready outputs for LangSmith
If you want to measure retrieval and answer quality over time, structure outputs consistently. That makes it easier to build LangSmith datasets and run automated evals against policy questions.
# Extend the state schema so sources is a real channel and shows up in the
# final output, then rebuild and recompile the graph with the updated node.
class LendingState(TypedDict):
    question: str
    context: List[str]
    answer: str
    sources: List[str]

def generate(state: LendingState):
    prompt = (
        "Answer using only the retrieved lending policy context.\n\n"
        f"Context:\n{chr(10).join(state['context'])}\n\n"
        f"Question: {state['question']}"
    )
    response = llm.invoke([HumanMessage(content=prompt)])
    return {
        "answer": response.content,
        "sources": state["context"],
    }
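With stable answer and sources keys, you can snapshot known-good Q&A pairs into a LangSmith dataset and score the graph against them. A minimal sketch, assuming the current langsmith evaluate API; the dataset name and the exact-match evaluator are illustrative:
from langsmith import Client, evaluate

client = Client()
dataset = client.create_dataset(dataset_name="lending-policy-qa")
client.create_example(
    dataset_id=dataset.id,
    inputs={"question": "Is income verification required for loans above $50,000?"},
    outputs={"answer": "Income verification is mandatory for loan amounts above $50,000."},
)

def answer_matches(outputs: dict, reference_outputs: dict) -> bool:
    # Crude containment check; swap in an LLM-as-judge evaluator for real use.
    return reference_outputs["answer"] in outputs["answer"]

evaluate(
    lambda inputs: app.invoke(inputs),
    data="lending-policy-qa",
    evaluators=[answer_matches],
)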
Testing the Integration
Run a single question through the graph and confirm you get both an answer and a trace in LangSmith.
test_input = {
    "question": "Is income verification required for loans above $50,000?"
}
output = app.invoke(
    test_input,
    config={
        "run_name": "integration-test",
        "tags": ["test", "lending-rag"],
    },
)
print("Answer:", output["answer"])
print("Sources:", output["sources"])
Expected output:
Answer: Income verification is mandatory for loan amounts above $50,000.
Sources: ['Income verification is mandatory for loan amounts above $50,000.', 'Debt-to-income ratio must be below 43% for standard personal loans.']
If tracing is configured correctly, you should also see the run in LangSmith under your lending-rag project with separate spans for retrieval and generation.
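You can also verify the trace programmatically instead of checking the UI, reusing the client from step 3:
# Grab the most recent run in the project and confirm it succeeded.
latest = next(client.list_runs(project_name="lending-rag", limit=1))
print(latest.name, latest.status)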
Real-World Use Cases
- Loan pre-screening assistant
  - Answer borrower eligibility questions from policy docs.
  - Route edge cases to manual underwriting based on graph branches.
- Compliance-aware document Q&A
  - Let ops teams query lending policies without searching PDFs.
  - Use LangSmith traces to audit which sources drove each answer.
- Underwriting support agent
  - Pull relevant credit policy snippets before generating recommendations.
  - Track hallucinations and retrieval misses with LangSmith evaluations over time.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit