How to Build an Underwriting Agent Using LangChain in TypeScript for Retail Banking

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, langchain, typescript, retail-banking

An underwriting agent in retail banking takes borrower data, runs it through policy rules, enriches it with internal and external signals, and returns a decision recommendation with reasons. It matters because loan ops teams need faster turnaround without losing control over compliance, auditability, and consistent credit policy enforcement.

Architecture

Build this agent as a small workflow, not a single prompt.

  • Input normalization layer

    • Converts application payloads into a stable schema.
    • Validates required fields like income, employment status, DTI, and requested amount.
  • Policy retrieval layer

    • Pulls underwriting rules from approved bank documents.
    • Keeps policy text versioned so every decision can be traced to the exact rule set.
  • Tooling layer

    • Exposes controlled tools for affordability checks, bureau lookups, KYC status, and product eligibility.
    • Each tool returns structured data, not free-form text.
  • Reasoning layer

    • Uses LangChain to combine application data, retrieved policy context, and tool outputs.
    • Produces a recommendation: approve, refer, or decline.
  • Audit logging layer

    • Stores the full input, model output, tool calls, and policy version used.
    • Supports post-decision review and regulatory evidence.
  • Guardrail layer

    • Blocks unsupported decisions outside policy thresholds.
    • Enforces human review for edge cases and incomplete data.
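The layers above compose into a linear pipeline where no stage can be skipped. A minimal sketch of the layer contracts in TypeScript; all interface and parameter names here are illustrative, not part of LangChain:

```typescript
// Illustrative layer contracts for the underwriting workflow.
// Wire each function to your own services in production.
interface NormalizedApplication {
  applicantId: string;
  annualIncome: number;
  monthlyDebtPayments: number;
  requestedAmount: number;
}

interface PolicyContext {
  policyVersion: string; // e.g. "2026-04-01"
  ruleText: string[];    // only the rules relevant to this case
}

interface DecisionRecord {
  decision: "approve" | "refer" | "decline";
  reasons: string[];
  policyVersion: string;
}

// The workflow is a simple composition: each layer consumes the
// previous layer's output, and audit logging is mandatory.
async function underwrite(
  raw: unknown,
  normalize: (raw: unknown) => NormalizedApplication,
  retrievePolicy: (app: NormalizedApplication) => Promise<PolicyContext>,
  decide: (app: NormalizedApplication, ctx: PolicyContext) => Promise<DecisionRecord>,
  audit: (record: DecisionRecord) => Promise<void>
): Promise<DecisionRecord> {
  const app = normalize(raw);
  const ctx = await retrievePolicy(app);
  const record = await decide(app, ctx);
  await audit(record); // audit before returning, not best-effort
  return record;
}
```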

Implementation

1) Define the underwriting schema and the model contract

Keep the agent output structured. In retail banking, you want deterministic fields that downstream systems can consume.

import { z } from "zod";

export const UnderwritingInputSchema = z.object({
  applicantId: z.string(),
  annualIncome: z.number().positive(),
  monthlyDebtPayments: z.number().nonnegative(),
  requestedAmount: z.number().positive(),
  employmentStatus: z.enum(["employed", "self_employed", "unemployed"]),
  country: z.string(),
});

export const UnderwritingDecisionSchema = z.object({
  decision: z.enum(["approve", "refer", "decline"]),
  riskGrade: z.enum(["A", "B", "C", "D"]),
  reasons: z.array(z.string()),
  policyReferences: z.array(z.string()),
});

export type UnderwritingInput = z.infer<typeof UnderwritingInputSchema>;
export type UnderwritingDecision = z.infer<typeof UnderwritingDecisionSchema>;

2) Add controlled tools for bank checks

Use tools only for bounded operations. Don’t let the model invent numbers or call arbitrary endpoints.

import { tool } from "@langchain/core/tools";
import { z } from "zod";

export const calculateDTI = tool(
  async ({ annualIncome, monthlyDebtPayments }) => {
    const monthlyIncome = annualIncome / 12;
    const dti = monthlyDebtPayments / monthlyIncome;
    return JSON.stringify({ dti });
  },
  {
    name: "calculate_dti",
    description: "Calculate debt-to-income ratio for a retail banking applicant",
    schema: z.object({
      annualIncome: z.number().positive(),
      monthlyDebtPayments: z.number().nonnegative(),
    }),
  }
);

export const checkPolicyThresholds = tool(
  async ({ dti }) => {
    const maxDTI = 0.45;
    return JSON.stringify({
      pass: dti <= maxDTI,
      maxDTI,
      ruleId: "UW-DTI-001",
    });
  },
  {
    name: "check_policy_thresholds",
    description: "Check whether applicant meets core underwriting thresholds",
    schema: z.object({
      dti: z.number().min(0).max(10),
    }),
  }
);
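Because the tool bodies are deterministic, you can unit test the math without running an agent at all. The same logic as plain functions (names here are illustrative, extracted from the tool bodies above):

```typescript
// Same arithmetic as the calculate_dti tool, extracted so it can be
// unit tested without an agent run or an API key.
export function computeDTI(
  annualIncome: number,
  monthlyDebtPayments: number
): number {
  const monthlyIncome = annualIncome / 12;
  return monthlyDebtPayments / monthlyIncome;
}

// Mirrors check_policy_thresholds: rule UW-DTI-001 caps DTI at 0.45.
export function passesDTIThreshold(dti: number, maxDTI = 0.45): boolean {
  return dti <= maxDTI;
}
```

For example, an applicant earning 60,000 annually with 1,500 in monthly debt payments has a monthly income of 5,000 and a DTI of 0.3, which passes the 0.45 threshold.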

3) Build the LangChain agent with createOpenAIToolsAgent

This pattern keeps reasoning in the LLM while restricting actions to your approved tools. Use ChatOpenAI, ChatPromptTemplate, AgentExecutor, and createOpenAIToolsAgent.

import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { UnderwritingInputSchema } from "./schemas";
import { calculateDTI, checkPolicyThresholds } from "./tools";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// createOpenAIToolsAgent requires an agent_scratchpad placeholder,
// which holds the intermediate tool calls and results.
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a retail banking underwriting assistant.
Return only decisions supported by policy.
If data is missing or ambiguous, choose refer.
Cite policy references in your output.`,
  ],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

async function runUnderwriting(inputRaw: unknown) {
  const input = UnderwritingInputSchema.parse(inputRaw);

  const agent = await createOpenAIToolsAgent({
    llm,
    tools: [calculateDTI, checkPolicyThresholds],
    prompt,
  });

  const executor = new AgentExecutor({
    agent,
    tools: [calculateDTI, checkPolicyThresholds],
  });

  const result = await executor.invoke({
    input: JSON.stringify(input),
  });

  return result.output;
}
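One guardrail worth wiring around the agent, independent of the model: force a refer whenever DTI sits close to the policy threshold, so borderline cases always reach a human. A minimal sketch; the 0.05 band is an illustrative choice, not bank policy:

```typescript
type Decision = "approve" | "refer" | "decline";

// Overrides the model's recommendation when DTI falls within a band
// around the UW-DTI-001 threshold, regardless of what the model said.
export function applyNearThresholdGuardrail(
  modelDecision: Decision,
  dti: number,
  maxDTI = 0.45,
  band = 0.05 // illustrative width; set per your credit policy
): Decision {
  if (Math.abs(dti - maxDTI) <= band) {
    return "refer";
  }
  return modelDecision;
}
```

Running this after the agent, on the deterministically computed DTI, means the human-review routing can never be talked out of by the model.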

4) Wrap the agent with validation and audit logging

The model should never be the source of truth. Validate its output and persist everything needed for review.

import { UnderwritingInputSchema, UnderwritingDecisionSchema } from "./schemas";

async function decide(inputRaw: unknown) {
  const input = UnderwritingInputSchema.parse(inputRaw);
  const outputText = await runUnderwriting(input);

  const parsed = JSON.parse(outputText);
  const decision = UnderwritingDecisionSchema.parse(parsed);

  await auditLog({
    // Take the applicant ID from the validated input, not the model
    // output: the decision schema does not carry it.
    applicantId: input.applicantId,
    requestPayload: input,
    responsePayload: decision,
    policyVersion: "2026-04-01",
    modelName: "gpt-4o-mini",
  });

  return decision;
}

async function auditLog(record: unknown) {
  console.log("AUDIT_EVENT", JSON.stringify(record));
}
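The fragile step above is `JSON.parse` on raw model output. A defensive variant fails closed: anything unparseable or structurally invalid downgrades to a refer instead of throwing and dropping the application. A sketch without the Zod schema, for brevity:

```typescript
type ParsedDecision = {
  decision: "approve" | "refer" | "decline";
  reasons: string[];
};

// If the model returns malformed JSON or an unexpected shape,
// fail closed: route to a human rather than throw.
export function parseDecisionOrRefer(outputText: string): ParsedDecision {
  try {
    const parsed = JSON.parse(outputText);
    if (
      parsed &&
      ["approve", "refer", "decline"].includes(parsed.decision) &&
      Array.isArray(parsed.reasons)
    ) {
      return { decision: parsed.decision, reasons: parsed.reasons };
    }
  } catch {
    // fall through to the refer default below
  }
  return { decision: "refer", reasons: ["model_output_unparseable"] };
}
```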

Production Considerations

  • Data residency

    • Keep PII inside approved regions and use a provider configuration that matches your bank’s residency requirements.
    • Redact account numbers, SSNs, and full bureau records before sending context to the model.
  • Compliance and explainability

    • Every recommendation needs a reason trail tied to policy IDs.
    • Store prompt version, tool outputs, model version, and final decision for audit teams and regulators.
  • Human-in-the-loop controls

    • Route borderline cases to manual review when DTI is near threshold or income verification is incomplete.
    • Hard-block auto-decline if the case requires adverse action notice language or protected-class-sensitive logic.
  • Monitoring

    • Track approval rates by segment, refer rates, tool failure rates, and output schema violations.
    • Alert on drift when the agent starts over-referring or producing unsupported reasons.
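Redaction before prompting can start as simple pattern masking. The sketch below covers two illustrative patterns (a US-format SSN and a bare 8–12 digit account number); it is not a complete PII filter, and production systems should use a vetted PII detection service:

```typescript
// Masks obvious PII patterns before any text is sent to the model.
// Illustrative regexes only; not a substitute for real PII detection.
export function redactPII(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")    // US SSN format
    .replace(/\b\d{8,12}\b/g, "[ACCOUNT_NUMBER]"); // bare account numbers
}
```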

Common Pitfalls

  1. Letting the LLM make numeric decisions directly

    • Avoid this by calculating ratios in tools or backend code.
    • The model should interpret results, not compute them from scratch.
  2. Skipping structured output validation

    • If you don’t validate with Zod or equivalent schemas, malformed decisions will leak into downstream systems.
    • Parse every response before persisting or executing it.
  3. Mixing policy text with raw customer PII

    • This creates unnecessary exposure and makes audits harder.
    • Retrieve only the minimum policy context needed for the current case and redact sensitive fields before prompting.
  4. No versioning on rules or prompts

    • A production underwriting system needs reproducibility.
    • Version prompts, policies, tools, and models together so you can explain why a loan was approved last month but referred today.
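Pitfall 4 is easiest to avoid by treating the version tuple as one immutable record that travels with every decision and audit event. A minimal sketch; the identifier values are illustrative:

```typescript
// One frozen record ties together everything needed to reproduce a
// decision. Persist it alongside every audit event.
export interface DecisionVersions {
  readonly promptVersion: string;
  readonly policyVersion: string;
  readonly toolsetVersion: string;
  readonly modelName: string;
}

export const CURRENT_VERSIONS: DecisionVersions = Object.freeze({
  promptVersion: "uw-prompt-003", // illustrative identifiers
  policyVersion: "2026-04-01",
  toolsetVersion: "uw-tools-012",
  modelName: "gpt-4o-mini",
});
```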

By Cyprian Aarons, AI Consultant at Topiax.