How to Integrate LangGraph for lending with Redis for RAG

By Cyprian Aarons · Updated 2026-04-21

Tags: langgraph-for-lending, redis, rag

Combining LangGraph for lending with Redis for RAG gives you a clean way to build loan workflows that can retrieve policy, product, and customer context on demand. LangGraph handles the stateful decision flow for lending, while Redis gives you low-latency retrieval over embeddings and metadata so the agent can answer with grounded context instead of guessing.

Prerequisites

  • Python 3.10+
  • A Redis Stack instance with RediSearch enabled
  • A LangGraph-based lending workflow already defined, or at least a graph skeleton
  • OpenAI or another embedding provider for vector generation
  • langgraph, redis, and your LLM/embedding SDK installed

Install the core packages:

pip install langgraph redis openai

Set your environment variables:

export REDIS_URL="redis://localhost:6379"
export OPENAI_API_KEY="your-key"

Integration Steps

  1. Create a Redis client and index your lending knowledge base

Use Redis as the retrieval layer for policy docs, underwriting rules, and product FAQs. This example stores embeddings as JSON strings in Redis hashes and ranks them with a linear scan, which keeps the walkthrough simple; at production scale you would index them as RediSearch vector fields and let the engine do the KNN search.

import json
import os

from openai import OpenAI
from redis import Redis

redis_client = Redis.from_url(os.environ["REDIS_URL"], decode_responses=True)
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
    )
    return resp.data[0].embedding

docs = [
    {"id": "loan_policy_001", "text": "Debt-to-income ratio must be below 43% for standard personal loans."},
    {"id": "loan_policy_002", "text": "Self-employed applicants require 24 months of bank statements."},
]

for doc in docs:
    redis_client.hset(
        f"doc:{doc['id']}",
        mapping={
            "text": doc["text"],
            # Store the vector as JSON so the retriever can parse it back reliably
            "embedding": json.dumps(embed(doc["text"])),
            "type": "policy",
        },
    )
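Before moving on, it is worth sanity-checking the encoding choice: whatever string you write into the embedding field must parse back into a numeric list, because the retriever in the next step depends on that. Here is a minimal, Redis-free round-trip check using a dummy vector standing in for a real embedding:

```python
import json

# Dummy vector standing in for a real embedding (embed() returns list[float])
vector = [0.12, -0.03, 0.97]

# Round-trip through the string format stored in the Redis hash
encoded = json.dumps(vector)
decoded = json.loads(encoded)

assert decoded == vector
print(type(decoded), len(decoded))  # → <class 'list'> 3
```

If this round-trip fails for your storage format, retrieval will silently return nothing useful.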

  2. Build a Redis-backed retriever for RAG

For production, you want retrieval to be explicit inside your graph state. Query Redis first, then pass only the top matches into the LLM node.

import json
import numpy as np

def cosine_similarity(a, b):
    a = np.array(a)
    b = np.array(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_context(query: str, limit: int = 3):
    q_emb = embed(query)
    results = []

    for key in redis_client.scan_iter("doc:*"):
        doc = redis_client.hgetall(key)
        if not doc:
            continue
        doc_emb = json.loads(doc["embedding"])
        score = cosine_similarity(q_emb, doc_emb)
        results.append((score, doc["text"]))

    results.sort(reverse=True, key=lambda x: x[0])
    return [text for _, text in results[:limit]]
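To see the ranking behave without a Redis instance or an embedding provider, you can run the same cosine-plus-sort logic on hand-made vectors. This is an illustrative sketch only; the 2-D vectors and labels are invented for the example:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.array(a), np.array(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 2-D "embeddings": the query points mostly along the x axis
query = [1.0, 0.1]
corpus = {
    "dti_policy": [0.9, 0.2],     # close to the query direction
    "self_employed": [0.2, 0.9],  # mostly orthogonal to it
}

# Same score-then-sort pattern the retriever uses
scored = sorted(
    ((cosine_similarity(query, emb), name) for name, emb in corpus.items()),
    reverse=True,
)
print([name for _, name in scored])  # → ['dti_policy', 'self_employed']
```

The document whose vector points in roughly the same direction as the query wins, which is exactly what retrieve_context does over the real embeddings.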

  3. Wire retrieval into a LangGraph lending workflow

LangGraph’s StateGraph lets you keep application state across underwriting steps. Add a retrieval node that enriches the loan application before decisioning.

from typing import TypedDict, List
from langgraph.graph import StateGraph, START, END

class LendingState(TypedDict):
    applicant_name: str
    query: str
    retrieved_context: List[str]
    decision: str

def rag_node(state: LendingState):
    context = retrieve_context(state["query"])
    return {"retrieved_context": context}

def decision_node(state: LendingState):
    context_blob = "\n".join(state.get("retrieved_context", []))
    if "43%" in context_blob:
        return {"decision": "needs_manual_review"}
    return {"decision": "approve"}

graph = StateGraph(LendingState)
graph.add_node("rag", rag_node)
graph.add_node("decision", decision_node)

graph.add_edge(START, "rag")
graph.add_edge("rag", "decision")
graph.add_edge("decision", END)

app = graph.compile()

  4. Add a lending request path that passes borrower questions into RAG

This is where the integration becomes useful. The borrower question or underwriting note becomes the retrieval query, and the graph uses retrieved policy snippets to guide the next action.

result = app.invoke(
    {
        "applicant_name": "Amina Patel",
        "query": "What is the minimum requirement for self-employed borrowers?",
        "retrieved_context": [],
        "decision": "",
    }
)

print(result["retrieved_context"])
print(result["decision"])

  5. Persist graph outputs back into Redis for auditability

In lending, you need traceability. Store decisions, retrieved evidence, and timestamps in Redis so compliance teams can inspect what the agent used.

from datetime import datetime, timezone

def store_audit_record(application_id: str, state: dict):
    redis_client.hset(
        f"audit:{application_id}",
        mapping={
            "applicant_name": state["applicant_name"],
            "query": state["query"],
            "retrieved_context": json.dumps(state.get("retrieved_context", [])),
            "decision": state["decision"],
            # timezone-aware timestamp (datetime.utcnow() is deprecated in Python 3.12+)
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    )

final_state = app.invoke(
    {
        "applicant_name": "Amina Patel",
        "query": "What is the minimum requirement for self-employed borrowers?",
        "retrieved_context": [],
        "decision": "",
    }
)

store_audit_record("app_10001", final_state)

Testing the Integration

Run this quick verification to confirm Redis retrieval is feeding LangGraph correctly.

test_state = app.invoke(
    {
        "applicant_name": "John Doe",
        "query": "What documents are needed for self-employed applicants?",
        "retrieved_context": [],
        "decision": "",
    }
)

print("Retrieved:", test_state["retrieved_context"])
print("Decision:", test_state["decision"])

Expected output (with only two documents indexed and limit=3, both come back, ordered by similarity):

Retrieved: ['Self-employed applicants require 24 months of bank statements.', 'Debt-to-income ratio must be below 43% for standard personal loans.']
Decision: needs_manual_review

If you get an empty context list, check these first:

  • Your Redis documents were written successfully
  • The embedding format is valid JSON-compatible numeric arrays
  • Your query text actually matches indexed policy language closely enough
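The second check can be automated with a small helper. validate_doc below is a hypothetical helper, not part of the integration code above; it takes the dict returned by redis_client.hgetall(key) and reports what, if anything, is wrong with it:

```python
import json

def validate_doc(doc: dict) -> list[str]:
    """Return a list of problems with a stored document (empty list = looks fine)."""
    problems = []
    if "text" not in doc or not doc["text"].strip():
        problems.append("missing or empty 'text' field")
    try:
        emb = json.loads(doc.get("embedding", ""))
        if not isinstance(emb, list) or not all(isinstance(x, (int, float)) for x in emb):
            problems.append("'embedding' is not a numeric array")
    except (json.JSONDecodeError, TypeError):
        problems.append("'embedding' is not valid JSON")
    return problems

print(validate_doc({"text": "DTI must be below 43%.", "embedding": "[0.1, 0.2]"}))  # → []
print(validate_doc({"text": "bad doc", "embedding": "not json"}))
```

Run it over every key from redis_client.scan_iter("doc:*") to audit the whole corpus in one pass.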

Real-World Use Cases

  • Loan pre-screening assistant that answers borrower questions using current policy docs stored in Redis while LangGraph routes cases to approve, reject, or review.
  • Underwriting copilot that retrieves internal lending rules and attaches them to each application before generating a recommendation.
  • Compliance audit trail where every agent decision stores retrieved evidence in Redis for later review by risk and legal teams.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
