How to Build a Customer Support Agent Using LangGraph in TypeScript for Lending
A lending support agent handles the repetitive, high-friction questions that flood servicing teams: payment dates, payoff quotes, application status, document requests, hardship options, and escalation to a human when the case crosses a compliance line. It matters because lending is not generic customer support; every answer can affect disclosures, fair lending exposure, and auditability.
Architecture
- Chat entrypoint
  - Receives the borrower message plus account context.
  - Normalizes identity state before any account-specific answer is generated.
- Policy router
  - Classifies the request into safe buckets:
    - status lookup
    - payment question
    - payoff request
    - hardship / forbearance
    - complaint / dispute
    - regulated advice or adverse action
  - Routes high-risk cases to human review.
- Retrieval layer
  - Pulls from approved sources only:
    - loan servicing system
    - knowledge base
    - policy documents
    - disclosure templates
  - Never lets the model invent loan terms.
- Response composer
  - Uses an LLM to draft the reply from retrieved facts.
  - Applies tone rules and required disclosures.
- Audit logger
  - Stores inputs, route decisions, retrieved evidence, and final output.
  - Needed for compliance review and dispute handling.
- Human handoff node
  - Escalates cases requiring identity verification, legal review, or exceptions.
  - Preserves conversation state for the agent who takes over.
Implementation
1) Define the graph state and supporting types
For lending support, your state needs more than chat history. You need routing metadata, retrieval results, and an audit trail that can survive a regulator asking “why did the bot say this?”
```typescript
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { BaseMessage, HumanMessage, AIMessage } from "@langchain/core/messages";

type Route = "status" | "payment" | "payoff" | "hardship" | "complaint" | "handoff";

const AgentState = Annotation.Root({
  // Full conversation history; every node appends, nothing overwrites.
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
  // Latest routing decision; overwritten on each classification.
  route: Annotation<Route | null>({
    reducer: (_, next) => next,
    default: () => null,
  }),
  customerId: Annotation<string | null>({
    reducer: (_, next) => next,
    default: () => null,
  }),
  // Approved facts retrieved from servicing systems; merged, never invented.
  facts: Annotation<Record<string, any>>({
    reducer: (left, right) => ({ ...left, ...right }),
    default: () => ({}),
  }),
  // Append-only audit trail for compliance review.
  audit: Annotation<any[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});
```
2) Add a router that blocks risky lending cases
This is where you keep the model honest. If the user asks for hardship eligibility or complains about adverse action, route out of automation unless your policy explicitly allows a templated response.
```typescript
const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function classifyRoute(state: typeof AgentState.State) {
  const last = state.messages[state.messages.length - 1];
  const text = typeof last?.content === "string" ? last.content.toLowerCase() : "";

  // Default to handoff: anything we cannot confidently classify goes to a human.
  let route: Route = "handoff";
  if (text.includes("payoff")) route = "payoff";
  else if (text.includes("payment") || text.includes("due date")) route = "payment";
  else if (text.includes("status") || text.includes("application")) route = "status";
  else if (text.includes("hardship") || text.includes("forbearance")) route = "hardship";
  else if (text.includes("complaint") || text.includes("dispute")) route = "complaint";

  return {
    route,
    audit: [
      {
        step: "classifyRoute",
        input: text,
        decision: route,
        timestamp: new Date().toISOString(),
      },
    ],
  };
}
```
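A quick way to sanity-check this keyword routing is to exercise the same rules standalone. The `routeFor` helper below restates the classifier's keyword logic outside the graph for testing purposes; it is illustrative, not part of LangGraph.

```typescript
// Standalone restatement of the classifier's keyword rules.
// Order matters: it mirrors the if/else chain in classifyRoute.
function routeFor(message: string):
  "status" | "payment" | "payoff" | "hardship" | "complaint" | "handoff" {
  const text = message.toLowerCase();
  if (text.includes("payoff")) return "payoff";
  if (text.includes("payment") || text.includes("due date")) return "payment";
  if (text.includes("status") || text.includes("application")) return "status";
  if (text.includes("hardship") || text.includes("forbearance")) return "hardship";
  if (text.includes("complaint") || text.includes("dispute")) return "complaint";
  return "handoff"; // unknown intents always escalate
}

console.log(routeFor("When is my next payment due?")); // "payment"
console.log(routeFor("I want to file a complaint"));   // "complaint"
console.log(routeFor("Can you give me legal advice?")); // "handoff"
```

Keyword matching is a deliberately conservative starting point; if you later swap in an LLM classifier, keep the handoff default for anything it cannot label with confidence.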
3) Retrieve approved facts and compose the answer
The key pattern is simple: fetch facts first, then ask the model to answer only from those facts. In lending support, that keeps you out of hallucinated balances and invented policy language.
```typescript
async function retrieveFacts(state: typeof AgentState.State) {
  const customerId = state.customerId;
  if (!customerId) {
    // No verified identity means no account data: escalate instead of guessing.
    return {
      route: "handoff" as const,
      audit: [{ step: "retrieveFacts", reason: "missing_customer_id" }],
    };
  }

  // Replace with real calls to servicing systems / KB / policy store.
  const facts = {
    loanStatus: "Current",
    nextPaymentDate: "2026-05-01",
    payoffQuoteExpiresAt: "2026-04-30",
    hardshipPolicyRef: "POL-HRD-014",
    dataResidencyRegionRequired: "us-east-1",
  };

  return {
    facts,
    audit: [{ step: "retrieveFacts", customerId }],
  };
}
```
```typescript
async function composeAnswer(state: typeof AgentState.State) {
  const system = `
You are a lending customer support agent.
Use only provided facts.
Do not guess balances, APRs, fees, or legal eligibility.
If asked about hardship/forbearance/complaints/legal issues, recommend human review.
Include required disclosures when discussing payoff quotes or payment timing.
`;

  const promptFacts = JSON.stringify(state.facts);
  const response = await llm.invoke([
    { role: "system", content: system },
    ...state.messages,
    { role: "system", content: `Approved facts:\n${promptFacts}` },
  ]);

  return {
    messages: [new AIMessage(response.content as string)],
    audit: [
      {
        step: "composeAnswer",
        model: "gpt-4o-mini",
        usedFactsKeys: Object.keys(state.facts),
        timestamp: new Date().toISOString(),
      },
    ],
  };
}
```
4) Wire routing with StateGraph, then compile and run
This is the actual LangGraph pattern you want in production. The graph decides whether to answer directly or escalate based on the classification result.
```typescript
const graph = new StateGraph(AgentState)
  .addNode("classifyRoute", classifyRoute)
  .addNode("retrieveFacts", retrieveFacts)
  .addNode("composeAnswer", composeAnswer)
  .addEdge(START, "classifyRoute")
  .addConditionalEdges("classifyRoute", (state) =>
    state.route === "handoff" ? END : "retrieveFacts"
  )
  .addConditionalEdges("retrieveFacts", (state) =>
    state.route === "hardship" || state.route === "complaint" ? END : "composeAnswer"
  )
  .addEdge("composeAnswer", END);

const app = graph.compile();

async function main() {
  const result = await app.invoke({
    messages: [new HumanMessage("What is my next payment date?")],
    customerId: "cust_12345",
  });
  console.log(result.messages[result.messages.length - 1]);
}

main().catch(console.error);
```
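Note that in the graph above the escalation branches simply end at END; in a real deployment you would add a node that packages state for the human agent who takes over. A minimal sketch of that handoff payload, with a hypothetical `HandoffPacket` shape and `toHandoffPacket` helper (neither is part of LangGraph):

```typescript
// Hypothetical shape for the state handed to a human agent.
interface HandoffPacket {
  customerId: string | null;
  reason: string;
  transcript: { role: string; content: string }[];
  audit: unknown[];
  createdAt: string;
}

// Bundle the graph state so the human agent sees the full
// conversation and every automated decision that preceded it.
function toHandoffPacket(
  state: {
    customerId: string | null;
    messages: { role: string; content: string }[];
    audit: unknown[];
  },
  reason: string
): HandoffPacket {
  return {
    customerId: state.customerId,
    reason,
    transcript: state.messages,
    audit: state.audit,
    createdAt: new Date().toISOString(),
  };
}
```

The important design choice is that the packet carries the audit trail, not just the chat transcript, so the human agent can see why the bot escalated.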
Production Considerations
- Audit everything
  - Persist route decisions, retrieved record IDs, prompt version, model version, and final output.
  - In lending disputes you need replayable traces.
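One way to make the trail replayable is to persist each audit entry as an append-only JSON line. The `AuditEntry` shape and `serializeAudit` helper below are illustrative assumptions, not part of LangGraph:

```typescript
// Illustrative append-only audit record; field names are assumptions.
interface AuditEntry {
  step: string;
  timestamp: string;
  promptVersion?: string;
  modelVersion?: string;
  [key: string]: unknown;
}

// Serialize entries as JSON Lines so each graph step becomes one
// immutable, independently parseable record.
function serializeAudit(entries: AuditEntry[]): string {
  return entries.map((e) => JSON.stringify(e)).join("\n");
}

const trail = serializeAudit([
  { step: "classifyRoute", timestamp: "2026-04-01T12:00:00Z", decision: "payment" },
  { step: "retrieveFacts", timestamp: "2026-04-01T12:00:01Z", customerId: "cust_12345" },
]);
// Each line can later be replayed to show exactly what the bot saw.
```

JSON Lines keeps writes append-only and lets compliance tooling parse one step at a time without loading the whole trace.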
- Enforce data residency
  - Keep borrower PII and loan data in approved regions only.
  - If your bank requires US-only processing for servicing data, don’t send it to an unrestricted endpoint.
- Add guardrails around regulated topics
  - Hardship options, collections language, adverse action explanations, and legal interpretations should trigger handoff unless pre-approved templates exist.
  - Don’t let the model improvise compliance wording.
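A simple last line of defense is to scan the drafted reply for regulated phrases before it is sent. The term list and `needsComplianceReview` helper below are hypothetical placeholders; in production the list would come from a compliance-approved policy store:

```typescript
// Hypothetical list of regulated phrases; in practice this comes
// from a compliance-approved policy store, not a hardcoded array.
const REGULATED_TERMS = [
  "forbearance",
  "adverse action",
  "debt collection",
  "guaranteed approval",
];

// Flag a drafted reply for human review if it touches regulated wording.
function needsComplianceReview(draft: string): boolean {
  const text = draft.toLowerCase();
  return REGULATED_TERMS.some((term) => text.includes(term));
}

console.log(needsComplianceReview("Your next payment is due May 1."));          // false
console.log(needsComplianceReview("You may qualify for forbearance options.")); // true
```

This check belongs after `composeAnswer` and before delivery, so even a well-prompted model cannot ship improvised compliance wording.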
- Monitor answer quality by intent
| Intent | Risk | Metric |
|---|---|---|
| Payoff quote | Financial harm | % answers with approved quote source |
| Payment status | Low-medium | First-contact resolution |
| Hardship request | Compliance risk | Handoff rate |
| Complaint/dispute | Legal risk | Escalation SLA |
Common Pitfalls
- Letting the LLM answer before retrieval
  - This causes invented balances and wrong dates.
  - Fix it by making retrieval a mandatory node before response generation.
- Treating all intents as safe
  - Lending has regulated categories that should never be fully automated without controls.
  - Fix it with explicit routing for hardship, complaints, disputes, and legal questions.
- Skipping traceability
  - If you can’t show what data produced an answer, you’ll struggle in audits.
  - Fix it by storing every graph step with timestamps and source references.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.