How to Build a Customer Support Agent Using LangGraph in Python for Lending

By Cyprian Aarons · Updated 2026-04-21
Tags: customer-support, langgraph, python, lending

A customer support agent for lending handles borrower questions about applications, repayment, due dates, payoff quotes, hardship options, and document requests. It matters because lending support is not just FAQ automation; every answer can affect compliance, customer trust, and the borrower’s financial outcome.

Architecture

  • Intent router
    • Classifies the user’s request: application status, payment schedule, payoff quote, hardship, complaint, or escalation.
  • Policy layer
    • Blocks unsupported advice, enforces regulated-response rules, and routes sensitive topics to a human.
  • Loan context fetcher
    • Pulls borrower-specific data from core lending systems: account status, balance, delinquency state, payment history.
  • Response composer
    • Generates a grounded answer using only approved policy text and fetched loan data.
  • Audit logger
    • Stores the user question, routing decision, retrieved facts, and final response for compliance review.
  • Human handoff
    • Escalates when the request is ambiguous, high-risk, or requires exception handling.

Implementation

1) Define the graph state and helper functions

Use a typed state so each node knows what it can read and write. For lending support, keep the state narrow: user message, intent, retrieved loan facts, response text, and escalation flag.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class SupportState(TypedDict):
    messages: Annotated[list, add_messages]
    intent: str
    loan_data: dict
    response: str
    escalate: bool

def classify_intent(state: SupportState) -> SupportState:
    text = state["messages"][-1].content.lower()
    if "payoff" in text:
        intent = "payoff_quote"
    elif "late" in text or "delinquent" in text:
        intent = "delinquency"
    elif "hardship" in text or "forbearance" in text:
        intent = "hardship"
    elif "application" in text or "status" in text:
        intent = "application_status"
    else:
        intent = "general_support"
    return {**state, "intent": intent}
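Keyword branching like this gets unwieldy as intents grow. One maintainable alternative is a data-driven rule table; the sketch below is a hypothetical refactor of the same router (names like `INTENT_RULES` and `route_intent` are illustrative, not LangGraph APIs), with ordered rules where the first match wins.

```python
# Hypothetical data-driven variant of the keyword router above:
# ordered rules, first match wins, so more specific intents come first.
INTENT_RULES = [
    (("payoff",), "payoff_quote"),
    (("late", "delinquent"), "delinquency"),
    (("hardship", "forbearance"), "hardship"),
    (("application", "status"), "application_status"),
]

def route_intent(text: str) -> str:
    lowered = text.lower()
    for keywords, intent in INTENT_RULES:
        if any(keyword in lowered for keyword in keywords):
            return intent
    return "general_support"
```

Adding a new intent then becomes a one-line change to the rule table instead of another elif branch, and the table itself can be reviewed independently of the node code.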

2) Add retrieval and policy checks

In production you would call your lending platform here. The key pattern is that the graph fetches facts before generating any response. That keeps the model grounded and reduces hallucinated balances or dates.

def fetch_loan_data(state: SupportState) -> SupportState:
    # Replace with real API calls to your LOS/core banking system
    mock_loan = {
        "account_id": "LN-10422",
        "status": "current",
        "next_due_date": "2026-05-01",
        "outstanding_balance": 18450.72,
        "payoff_quote_valid_until": "2026-04-30",
        "hardship_eligible": True,
    }
    return {**state, "loan_data": mock_loan}

def policy_gate(state: SupportState) -> SupportState:
    text = state["messages"][-1].content.lower()
    risky_terms = ["legal advice", "dispute reporting", "credit score guarantee"]
    escalate = any(term in text for term in risky_terms)
    return {**state, "escalate": escalate}

3) Generate the customer response

Keep this node deterministic where possible. For lending support agents, template-based responses are safer than free-form generation for common tasks like payoff quotes or due dates.

def compose_response(state: SupportState) -> SupportState:
    if state["escalate"]:
        response = (
            "I’m routing this to a specialist because this request needs manual review."
        )
        return {**state, "response": response}

    loan = state["loan_data"]
    intent = state["intent"]

    if intent == "payoff_quote":
        response = (
            f"Your current outstanding balance is ${loan['outstanding_balance']:.2f}. "
            f"If you need a payoff quote valid through {loan['payoff_quote_valid_until']}, "
            f"I can connect you to the servicing team."
        )
    elif intent == "delinquency":
        response = (
            f"Your account is currently {loan['status']}. "
            f"Your next due date is {loan['next_due_date']}."
        )
    elif intent == "hardship":
        response = (
            "Your account may be eligible for hardship review. "
            "A specialist can confirm options based on your file."
        )
    else:
        response = (
            f"I found your account ending in {loan['account_id'][-4:]}. "
            f"What do you need help with?"
        )

    return {**state, "response": response}
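The branching above can also be expressed as a template registry, which makes the "templates are safer than free-form generation" point concrete: approved wording lives in data, not code, so compliance can review and version it separately. This is a sketch, not part of the graph above; the keys mirror the intents from step 1, and `render_response` is a hypothetical helper.

```python
# Hypothetical approved-wording registry: one template per intent,
# filled only with fields fetched from the loan system.
TEMPLATES = {
    "payoff_quote": (
        "Your current outstanding balance is ${outstanding_balance:.2f}. "
        "A payoff quote is valid through {payoff_quote_valid_until}."
    ),
    "delinquency": (
        "Your account is currently {status}. "
        "Your next due date is {next_due_date}."
    ),
    "general_support": "How can I help with your loan today?",
}

def render_response(intent: str, loan: dict) -> str:
    # Fall back to the generic template for unrecognized intents.
    template = TEMPLATES.get(intent, TEMPLATES["general_support"])
    return template.format(**loan)
```

With a registry like this, compose_response shrinks to the escalation check plus a lookup, and a wording change becomes a reviewable data change.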

4) Wire the LangGraph workflow

This is the actual LangGraph pattern: create a StateGraph, add nodes and edges, then compile it into an executable app. The conditional edge sends high-risk requests to human handoff.

def route_after_policy(state: SupportState) -> str:
    return "handoff" if state["escalate"] else "compose"

def handoff(state: SupportState) -> SupportState:
    return {
        **state,
        "response": (
            "I’ve escalated this request to a licensed specialist for review."
        ),
    }

graph = StateGraph(SupportState)

graph.add_node("classify_intent", classify_intent)
graph.add_node("fetch_loan_data", fetch_loan_data)
graph.add_node("policy_gate", policy_gate)
graph.add_node("compose", compose_response)
graph.add_node("handoff", handoff)

graph.add_edge(START, "classify_intent")
graph.add_edge("classify_intent", "fetch_loan_data")
graph.add_edge("fetch_loan_data", "policy_gate")
graph.add_conditional_edges("policy_gate", route_after_policy)
graph.add_edge("compose", END)
graph.add_edge("handoff", END)

app = graph.compile()

Run it with a message list:

from langchain_core.messages import HumanMessage

result = app.invoke(
    {
        "messages": [HumanMessage(content="Can you give me my payoff quote?")],
        "intent": "",
        "loan_data": {},
        "response": "",
        "escalate": False,
    }
)

print(result["response"])

Production Considerations

  • Audit every decision path
    • Log the raw user message, detected intent, retrieved loan fields used in the answer, escalation reason, and final output. In lending support, auditability matters as much as correctness.
  • Keep data residency explicit
    • If borrower data must stay in-region, make sure your retrieval layer and model endpoints are deployed accordingly. Don’t send servicing PII to a third-party region by accident.
  • Add hard guardrails for regulated topics
    • Hardcode escalation for disputes about credit reporting accuracy, legal threats, bankruptcy questions, and complaints about discrimination. Those should not be answered by free-form generation.
  • Version your policy prompts and templates
    • Treat customer-facing wording like code. A small wording change can create compliance risk if it changes how hardship eligibility or payment relief is described.
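To make "audit every decision path" concrete, here is a minimal sketch of an audit record builder. The function name and schema are assumptions for illustration; your compliance team will dictate the actual fields and retention store.

```python
import json
from datetime import datetime, timezone

def build_audit_record(user_message: str, intent: str,
                       loan_fields_used: list, escalated: bool,
                       response: str) -> str:
    """Serialize one decision path as a JSON line for compliance review."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_message": user_message,
        "detected_intent": intent,
        "loan_fields_used": sorted(loan_fields_used),
        "escalated": escalated,
        "final_response": response,
    })
```

Calling this at the end of both the compose and handoff nodes, and appending the result to an append-only log, gives reviewers the full path from question to answer.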

Common Pitfalls

  • Using the LLM before fetching loan facts
    • This leads to hallucinated balances and due dates. Fetch from source systems first; generate second.
  • Letting one generic prompt handle every lending scenario
    • Payment status questions are low risk. Hardship requests and credit reporting disputes are not. Split them into separate paths with different policies.
  • Skipping escalation metadata
    • If you hand off to an agent without intent labels and retrieved facts attached to the case record, humans start over. Pass structured context into your CRM or ticketing system.
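The last pitfall is avoidable by handing your CRM or ticketing system a structured payload instead of a bare transcript. A sketch of what that might look like, built from the SupportState fields above (the ticket field names are assumptions; map them to whatever your ticketing API expects):

```python
def build_handoff_ticket(state: dict, escalation_reason: str) -> dict:
    """Package agent context so the human specialist doesn't start over."""
    loan = state.get("loan_data", {})
    return {
        "subject": f"[{state.get('intent', 'unknown')}] escalated lending support request",
        "account_id": loan.get("account_id", "unknown"),
        "detected_intent": state.get("intent", "unknown"),
        "escalation_reason": escalation_reason,
        # Only facts the agent actually retrieved, so the human sees
        # what the customer was (or would have been) told.
        "retrieved_facts": {
            key: loan[key]
            for key in ("status", "next_due_date", "hardship_eligible")
            if key in loan
        },
        "draft_response": state.get("response", ""),
    }
```

Attaching this payload to the case record means the specialist picks up where the agent left off instead of re-asking the borrower for everything.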

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

