How to Build a Policy Q&A Agent Using LangGraph in TypeScript for Lending

By Cyprian Aarons · Updated 2026-04-21

A policy Q&A agent for lending answers questions like “Can we approve this borrower under our current policy?” or “What documents are required for a self-employed applicant?” It matters because lending teams need fast, consistent answers that stay aligned with underwriting policy, compliance rules, and audit requirements.

Architecture

  • User interface
    • Web app, internal portal, or CRM plugin where loan officers ask policy questions.
  • Policy retrieval layer
    • Pulls from approved sources only: underwriting guides, credit policy docs, exception matrices, and regulatory playbooks.
  • LangGraph orchestration
    • Routes the request through classification, retrieval, answer generation, and validation nodes.
  • LLM answer generator
    • Produces a concise answer grounded in retrieved policy text.
  • Compliance guardrail
    • Blocks unsupported advice, flags regulated topics, and forces escalation when confidence is low.
  • Audit logging
    • Stores question, retrieved passages, answer, model version, and decision trace for review.
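The audit logging layer can start as a single append-only record per answered question. A minimal sketch of what one entry might capture — the field names here are illustrative, not a fixed schema:

```typescript
// One audit entry per answered question. Field names are illustrative.
type AuditRecord = {
  timestamp: string;           // ISO-8601 time the answer was produced
  question: string;            // the loan officer's original question
  retrievedChunkIds: string[]; // IDs of the policy passages the answer was grounded in
  answer: string;              // final answer text returned to the user
  modelVersion: string;        // which model produced the draft
  needsReview: boolean;        // whether the compliance gate escalated
};

function buildAuditRecord(
  question: string,
  chunkIds: string[],
  answer: string,
  needsReview: boolean,
  modelVersion = "gpt-4o-mini"
): AuditRecord {
  return {
    timestamp: new Date().toISOString(),
    question,
    retrievedChunkIds: chunkIds,
    answer,
    modelVersion,
    needsReview,
  };
}
```

Writing one of these per invocation gives reviewers the full decision trace without digging through application logs.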

Implementation

1) Define the graph state and node contracts

For lending, keep the state explicit. You want to track the user question, retrieved policy snippets, the drafted answer, and whether the result needs human review.

import { StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

type PolicyChunk = {
  id: string;
  source: string;
  text: string;
};

type AgentState = {
  question: string;
  chunks: PolicyChunk[];
  answer?: string;
  needsReview?: boolean;
};

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const AnswerSchema = z.object({
  answer: z.string(),
  needsReview: z.boolean(),
});

2) Add retrieval and drafting nodes

This example uses a simple in-memory retriever pattern. In production you would replace it with your vector store or document search service, but the LangGraph shape stays the same.

async function retrievePolicies(state: AgentState): Promise<Partial<AgentState>> {
  const q = state.question.toLowerCase();

  const corpus: PolicyChunk[] = [
    {
      id: "uw-001",
      source: "Underwriting Guide v7",
      text: "Self-employed borrowers require two years of personal tax returns unless exception approved by credit committee.",
    },
    {
      id: "comp-014",
      source: "Compliance Memo Q3",
      text: "Any adverse action explanation must be based on permissible credit reasons only.",
    },
    {
      id: "docs-008",
      source: "Document Checklist",
      text: "Bank statements covering the most recent two months are required for income verification.",
    },
  ];

  const chunks = corpus.filter((c) => {
    if (q.includes("self-employed")) return c.id === "uw-001" || c.id === "docs-008";
    if (q.includes("adverse action")) return c.id === "comp-014";
    return true; // fall back to the full corpus for other questions
  });

  return { chunks };
}
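To make that swap to a real search service painless later, one option is to hide retrieval behind a small interface so the node only depends on the contract, not the backend. A sketch, assuming only the PolicyChunk shape already defined (repeated here so the example stands alone):

```typescript
// PolicyChunk as defined earlier, repeated so this sketch is self-contained.
type PolicyChunk = { id: string; source: string; text: string };

// Anything that can turn a question into ranked policy chunks.
interface PolicyRetriever {
  retrieve(question: string, topK: number): Promise<PolicyChunk[]>;
}

// Simple keyword-overlap scorer; a vector-store-backed implementation
// would satisfy the same interface without changing the graph node.
class KeywordRetriever implements PolicyRetriever {
  constructor(private corpus: PolicyChunk[]) {}

  async retrieve(question: string, topK: number): Promise<PolicyChunk[]> {
    const terms = question.toLowerCase().split(/\W+/).filter(Boolean);
    return this.corpus
      .map((chunk) => ({
        chunk,
        // Score = number of question terms appearing in the chunk text.
        score: terms.filter((t) => chunk.text.toLowerCase().includes(t)).length,
      }))
      .filter((s) => s.score > 0)
      .sort((a, b) => b.score - a.score)
      .slice(0, topK)
      .map((s) => s.chunk);
  }
}
```

The retrievePolicies node would then just call `retriever.retrieve(state.question, 3)` and return the chunks.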

async function draftAnswer(state: AgentState): Promise<Partial<AgentState>> {
  const context = state.chunks.map((c) => `[${c.source}] ${c.text}`).join("\n");

  const prompt = `
You are a lending policy assistant.
Answer only from the provided policy context.
If the question requires legal interpretation or is not covered by policy, set needsReview=true.

Question:
${state.question}

Policy Context:
${context}
`;

  const result = await llm.withStructuredOutput(AnswerSchema).invoke(prompt);

  return {
    answer: result.answer,
    needsReview: result.needsReview,
  };
}

3) Add a compliance gate before returning an answer

This is where lending-specific control matters. If the model says it’s uncertain, or if the question touches regulated outcomes like adverse action or fair lending exceptions, route to review.

async function complianceGate(state: AgentState): Promise<Partial<AgentState>> {
  const sensitiveTopics = ["adverse action", "fair lending", "exception", "override", "denial"];
  const isSensitive = sensitiveTopics.some((t) =>
    state.question.toLowerCase().includes(t)
  );

  // Sensitive topics always escalate, even when the model was confident.
  if (isSensitive || state.needsReview) {
    return { needsReview: true };
  }

  return {};
}
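One way to keep the gate easy to unit test is to pull the keyword check into a pure helper the node calls — a sketch, with the same topic list as above:

```typescript
// The sensitivity check from complianceGate, extracted into a pure
// function so it can be tested without constructing graph state.
const SENSITIVE_TOPICS = ["adverse action", "fair lending", "exception", "override", "denial"];

function isSensitiveQuestion(question: string): boolean {
  const q = question.toLowerCase();
  return SENSITIVE_TOPICS.some((t) => q.includes(t));
}
```

Substring matching is deliberately conservative here: false positives cost a human review, while false negatives cost a compliance miss.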

4) Wire the graph and execute it

Use StateGraph to connect retrieval → drafting → compliance gate. This is the actual LangGraph pattern you want in production because it keeps each step testable and auditable.

const graph = new StateGraph<AgentState>({
  // Each channel holds the latest value written by a node.
  channels: {
    question: null,
    chunks: null,
    answer: null,
    needsReview: null,
  },
})
  .addNode("retrievePolicies", retrievePolicies)
  .addNode("draftAnswer", draftAnswer)
  .addNode("complianceGate", complianceGate);

graph.addEdge(START, "retrievePolicies");
graph.addEdge("retrievePolicies", "draftAnswer");
graph.addEdge("draftAnswer", "complianceGate");
graph.addEdge("complianceGate", END);

const app = graph.compile();

async function main() {
  const result = await app.invoke({
    question: "What do we need for a self-employed borrower?",
    chunks: [],
    needsReview: false,
  });

  console.log("Answer:", result.answer);
  console.log("Needs review:", result.needsReview);
}

main().catch(console.error);

If the answer comes back with needsReview set, route it to your underwriting or compliance queue instead of showing it to the loan officer as final guidance.
---

## Keep learning

- [The complete AI Agents Roadmap](/blog/ai-agents-roadmap-2026) — my full 8-step breakdown
- [Free: The AI Agent Starter Kit](/starter-kit) — PDF checklist + starter code
- [Work with me](/contact) — I build AI for banks and insurance companies

*By Cyprian Aarons, AI Consultant at [Topiax](https://topiax.xyz).*
