How to Build an Underwriting Agent Using LangChain in TypeScript for Lending

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, langchain, typescript, lending

An underwriting agent for lending takes applicant data, pulls in policy and document context, and produces a structured credit recommendation with reasons. It matters because lending decisions need to be fast, consistent, and auditable, not just “smart”; if the agent cannot explain its output or respect policy constraints, it is not usable in production.

Architecture

  • Input normalization layer
    • Converts application payloads, bank statements, payroll data, and KYC fields into a consistent internal schema.
  • Document retrieval layer
    • Uses VectorStoreRetriever or a policy search tool to fetch underwriting rules, product guidelines, and exceptions.
  • Decisioning chain
    • Calls an LLM through LangChain to classify risk, extract missing fields, and draft a recommendation.
  • Rules engine
    • Applies hard constraints outside the model: DTI caps, minimum income thresholds, residency checks, blacklist rules.
  • Audit trail store
    • Persists prompts, retrieved documents, model outputs, and final decisions for compliance review.
  • Human review handoff
    • Routes borderline or incomplete cases to an underwriter instead of auto-declining.
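
Under stated assumptions (the stage names, stubbed retrieval, and stubbed decisioning below are all illustrative, not part of the real agent), the layers above can be wired as a simple async pipeline:

```typescript
// Hypothetical internal shapes; the real schemas are defined in the Implementation section.
type NormalizedApplicant = { applicantId: string; monthlyIncome: number };
type Decision = { decision: "approve" | "decline" | "refer"; reasons: string[] };

// Input normalization layer: raw payload -> consistent internal schema.
function normalize(raw: Record<string, unknown>): NormalizedApplicant {
  return {
    applicantId: String(raw["applicant_id"] ?? ""),
    monthlyIncome: Number(raw["monthly_income"] ?? 0),
  };
}

// Document retrieval and decisioning are stubbed here; in the real agent they
// call the vector store and the LangChain chain respectively.
async function retrievePolicy(_a: NormalizedApplicant): Promise<string> {
  return "policy-v1: minimum income 500";
}

async function decide(a: NormalizedApplicant, policy: string): Promise<Decision> {
  if (a.monthlyIncome < 500) {
    return { decision: "refer", reasons: ["income_below_policy_minimum"] };
  }
  return { decision: "approve", reasons: [policy] };
}

export async function runPipeline(raw: Record<string, unknown>): Promise<Decision> {
  const applicant = normalize(raw);
  const policy = await retrievePolicy(applicant);
  return decide(applicant, policy);
}
```

The value of the layering is that each stage can be swapped or tested independently: the decisioning stub above can be replaced by the LangChain call built later without touching normalization or retrieval.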

Implementation

1) Define the underwriting schema

Keep the output structured. For lending, free-text answers are useless unless they can be validated against policy and stored in an audit log.

import { z } from "zod";

export const UnderwritingDecisionSchema = z.object({
  decision: z.enum(["approve", "decline", "refer"]),
  riskGrade: z.enum(["A", "B", "C", "D"]),
  reasons: z.array(z.string()).min(1),
  missingFields: z.array(z.string()),
  policyFlags: z.array(z.string()),
});

export type UnderwritingDecision = z.infer<typeof UnderwritingDecisionSchema>;

export const ApplicantSchema = z.object({
  applicantId: z.string(),
  country: z.string(),
  monthlyIncome: z.number(),
  monthlyDebtPayments: z.number(),
  requestedAmount: z.number(),
  employmentStatus: z.enum(["employed", "self_employed", "unemployed"]),
});

2) Build the LangChain prompt and model call

Use ChatOpenAI with structured output so the model returns something you can validate. This is the right pattern for lending because you need deterministic downstream handling.

import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { PromptTemplate } from "@langchain/core/prompts";
import { UnderwritingDecisionSchema } from "./schema.js";

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

const underwritingPrompt = PromptTemplate.fromTemplate(`
You are a lending underwriting assistant.

Applicant:
{applicant}

Policy context:
{policy}

Return a decision that follows the schema exactly.
Use only the supplied policy context.
If critical information is missing, choose "refer".
`);

export async function underwrite(applicant: unknown, policyText: string) {
  const prompt = await underwritingPrompt.format({
    applicant: JSON.stringify(applicant),
    policy: policyText,
  });

  const response = await model.withStructuredOutput(UnderwritingDecisionSchema).invoke([
    new HumanMessage(prompt),
  ]);

  return response;
}
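
Structured output removes most parsing failures, but the model call itself can still fail transiently. A thin retry helper, plain TypeScript with illustrative defaults, is one way to wrap underwrite() without touching the chain:

```typescript
// Retry an async call with exponential backoff. Attempt count and base delay
// are illustrative defaults, not recommendations.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off before the next attempt: base, 2x base, 4x base, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Usage would look like `withRetry(() => underwrite(applicant, policyText))`; anything still failing after the final attempt propagates to the caller, which for lending should route to human review rather than silently retrying forever.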

3) Add hard rules before finalizing the decision

Do not let the LLM override lending policy. The model should recommend; your rules layer should enforce.

import { ApplicantSchema } from "./schema.js";
import { underwrite } from "./underwrite.js";

function applyHardRules(applicant: {
  country: string;
  monthlyIncome: number;
  monthlyDebtPayments: number;
}) {
  const dti = applicant.monthlyDebtPayments / Math.max(applicant.monthlyIncome, 1);

  if (applicant.country !== "KE" && applicant.country !== "UG") {
    return { decisionOverride: "refer" as const, flag: "unsupported_residency" };
  }

  if (dti > 0.45) {
    return { decisionOverride: "decline" as const, flag: "dti_above_threshold" };
  }

  return null;
}

export async function runUnderwriting(rawApplicant: unknown, policyText: string) {
  const applicant = ApplicantSchema.parse(rawApplicant);
  const ruleResult = applyHardRules(applicant);

  if (ruleResult) {
    return {
      decision: ruleResult.decisionOverride,
      riskGrade: "D" as const,
      reasons: [ruleResult.flag],
      missingFields: [],
      policyFlags: [ruleResult.flag],
    };
  }

  const llmDecision = await underwrite(applicant, policyText);
  return llmDecision;
}

4) Store every decision for auditability

In lending, you need to reconstruct why a decision was made months later. Persist input data hashes, prompt text, retrieved policy version, and final output.

type AuditRecord = {
  applicantId: string;
  policyVersion: string;
  inputHash: string;
  promptText: string;
  createdAt: string;
};

export async function writeAuditLog(
  record: AuditRecord,
  decision: unknown
): Promise<void> {
  // Placeholder: replace with an append-only store (audit table, WORM bucket).
  console.log(JSON.stringify({ record, decision }, null, 2));
}
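
The inputHash field deserves a concrete definition. One sketch, using Node's built-in crypto module, is to hash a key-sorted JSON serialization so the same applicant data always yields the same digest regardless of property order:

```typescript
import { createHash } from "node:crypto";

// Deterministic hash of a flat payload: sort keys before serializing so
// property order cannot change the digest. Note this canonicalization is
// shallow; nested objects would need a full canonical-JSON library.
export function hashInput(payload: Record<string, unknown>): string {
  const canonical = JSON.stringify(
    Object.fromEntries(
      Object.entries(payload).sort(([a], [b]) => a.localeCompare(b))
    )
  );
  return createHash("sha256").update(canonical).digest("hex");
}
```

Storing the digest rather than raw PII also keeps the audit log itself out of scope for some data-retention constraints, though that is a call for your compliance team.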

Production Considerations

  • Enforce data residency

    Keep applicant PII in-region. If your lending stack serves multiple jurisdictions, pin model inference and vector stores to approved regions only.

  • Separate recommendations from approvals

    The agent should never directly book disbursements or finalize legal acceptance. Use it to recommend approve, decline, or refer, then pass approved cases through deterministic workflow steps.

  • Monitor drift by segment

    Track approval rates, refer rates, false declines, and delinquency outcomes by product type, geography, income band, and employment status. A good underwriting agent can still fail badly on one borrower segment.

  • Log full trace context

    Capture prompt versioning, retrieved policy snippets, model name, temperature, and rule overrides. This is what compliance teams will ask for during reviews.
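
The drift-monitoring point above can be sketched with an in-memory counter keyed by segment; a production system would push these counts to a metrics store, but the shape of the tracking is the same:

```typescript
// Segment keys are illustrative, e.g. "KE:employed" for geography + employment.
type Outcome = "approve" | "decline" | "refer";

const counts = new Map<string, Record<Outcome, number>>();

// Record one decision outcome against a segment.
export function record(segment: string, outcome: Outcome): void {
  const row = counts.get(segment) ?? { approve: 0, decline: 0, refer: 0 };
  row[outcome] += 1;
  counts.set(segment, row);
}

// Approval rate for a segment; 0 when the segment has no observations yet.
export function approvalRate(segment: string): number {
  const row = counts.get(segment);
  if (!row) return 0;
  const total = row.approve + row.decline + row.refer;
  return total === 0 ? 0 : row.approve / total;
}
```

Comparing approvalRate across segments over time is what surfaces the failure mode described above: an agent that looks healthy in aggregate while quietly declining one borrower segment.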

Common Pitfalls

  1. Letting the LLM make final credit decisions

    • Fix it by keeping all threshold-based logic in code. Use LangChain for classification and extraction; use your rules engine for eligibility.
  2. Skipping structured outputs

    • If you accept raw text from the model, you will spend time parsing inconsistent responses and debugging silent failures. Use withStructuredOutput() plus Zod validation every time.
  3. Ignoring jurisdiction-specific constraints

    • Lending policies differ by country on KYC scope, adverse action notices, retention periods, and data transfer rules. Add residency checks and product-specific policies before any model call.
  4. No audit trail

    • If you cannot show which policy version and which prompt content influenced a decline or referral decision, backed by stored outputs, the agent is not production-ready for regulated lending.

By Cyprian Aarons, AI Consultant at Topiax.