How to Build an Underwriting Agent Using LangChain in TypeScript for Fintech

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, langchain, typescript, fintech

An underwriting agent automates the first pass of credit or risk assessment: it gathers applicant data, checks policy rules, scores the application, and produces a decision recommendation with an audit trail. For fintech, that matters because you need faster approvals without losing control over compliance, explainability, and consistency.

Architecture

  • Input adapter

    • Normalizes applicant payloads from your API into a structured internal schema.
    • Handles KYC/KYB fields, income, liabilities, transaction summaries, and consent flags.
  • Policy retrieval layer

    • Pulls underwriting rules from a controlled source like a vector store or document store.
    • Keeps product terms, eligibility thresholds, and exclusions versioned.
  • LLM reasoning layer

    • Uses LangChain to summarize evidence and generate a recommendation.
    • Must be constrained to structured output, not free-form chat.
  • Decision engine

    • Applies deterministic checks for hard declines, manual review triggers, and score bands.
    • The model should assist, not replace, policy logic.
  • Audit and trace layer

    • Logs inputs, retrieved policy snippets, model outputs, and final decision.
    • Required for internal review, regulator requests, and dispute handling.
  • Data boundary controls

    • Enforces residency, PII redaction, and tenant isolation.
    • Prevents sensitive data from leaving approved regions or vendors.
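The input adapter above can start as a plain mapping function. The following sketch assumes a hypothetical raw payload shape (`RawApplication` and its snake_case fields are illustrative, not a real API contract); the point is that everything downstream sees one stable internal schema.

```typescript
// Hypothetical raw payload shape from an upstream API (assumed for illustration).
interface RawApplication {
  full_name: string;
  country_code: string;
  income_annual_cents: number;
  debt_monthly_cents: number;
  consent?: { data_processing?: boolean };
}

// Normalized internal schema used by the rest of the pipeline.
interface NormalizedApplicant {
  name: string;
  country: string;
  annualIncome: number;        // dollars
  monthlyDebtPayments: number; // dollars
  consentGiven: boolean;
}

function normalizeApplication(raw: RawApplication): NormalizedApplicant {
  return {
    name: raw.full_name.trim(),
    country: raw.country_code.toUpperCase(),
    annualIncome: raw.income_annual_cents / 100,
    monthlyDebtPayments: raw.debt_monthly_cents / 100,
    // Missing or partial consent is treated as consent not given.
    consentGiven: raw.consent?.data_processing === true,
  };
}
```

Defaulting absent consent to `false` keeps the adapter fail-closed, which matters once the precheck gate depends on that flag.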

Implementation

  1. Define the underwriting schema and model client

Use Zod to keep the output structured. In fintech workflows, this is non-negotiable because downstream systems need stable fields for decisions and audit.

import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";

const UnderwritingDecisionSchema = z.object({
  riskBand: z.enum(["LOW", "MEDIUM", "HIGH"]),
  decision: z.enum(["APPROVE", "MANUAL_REVIEW", "DECLINE"]),
  rationale: z.string(),
  missingInfo: z.array(z.string()).default([]),
  policyReferences: z.array(z.string()).default([]),
});

type UnderwritingDecision = z.infer<typeof UnderwritingDecisionSchema>;

const parser = StructuredOutputParser.fromZodSchema(UnderwritingDecisionSchema);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
  2. Load policy context and build the prompt

For production underwriting, the LLM should only reason over approved policy text. A retrieval step keeps your product rules current without hardcoding them into prompts.

import { PromptTemplate } from "@langchain/core/prompts";

const underwritingPrompt = new PromptTemplate({
  template: `
You are an underwriting assistant for a fintech lender.

Use only the policy context and applicant facts below.
If information is missing or contradictory, flag MANUAL_REVIEW.

Policy Context:
{policyContext}

Applicant Facts:
{applicantFacts}

Return your answer in this format:
{format_instructions}
`,
  inputVariables: ["policyContext", "applicantFacts"],
  partialVariables: {
    format_instructions: parser.getFormatInstructions(),
  },
});
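Before wiring in a vector store, the retrieval step can be a versioned in-memory lookup with the same contract. The version key and rule text below are illustrative assumptions; what carries over to production is that every policy version is addressable and the version identifier travels with the text the model sees.

```typescript
// Minimal versioned policy store (sketch). In production this would be backed
// by a document store or vector store with the same versioning guarantees.
const policyVersions = new Map<string, string[]>([
  [
    "personal-loan-v3",
    [
      "- Approve if income > $40k and DTI <= 35%",
      "- Manual review if income documentation is incomplete",
      "- Decline if sanctions screening is unresolved",
    ],
  ],
]);

function getPolicyContext(version: string): string {
  const rules = policyVersions.get(version);
  if (!rules) {
    throw new Error(`Unknown policy version: ${version}`);
  }
  // Returning the version alongside the rules lets the audit layer record
  // exactly which policy text the model reasoned over.
  return [`Policy version: ${version}`, ...rules].join("\n");
}
```

Throwing on an unknown version (rather than falling back to a default) keeps stale or mistyped versions from silently reaching the prompt.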
  3. Create deterministic pre-checks before the LLM call

This is where you enforce hard business rules. If an applicant violates a non-negotiable rule, do not ask the model to “decide around it.”

interface Applicant {
  name: string;
  country: string;
  annualIncome: number;
  monthlyDebtPayments: number;
  consentGiven: boolean;
}

function precheck(applicant: Applicant) {
  if (!applicant.consentGiven) {
    return { decision: "DECLINE" as const, reason: "Missing consent" };
  }

  const dti = applicant.monthlyDebtPayments / (applicant.annualIncome / 12);
  if (dti > 0.5) {
    return { decision: "MANUAL_REVIEW" as const, reason: "Debt-to-income above threshold" };
  }

  return { decision: "PASS" as const };
}
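To sanity-check the DTI math in the gate: an applicant earning $60,000 a year has $5,000 in monthly income, so $2,800 in monthly debt payments gives a DTI of 0.56 and trips the 0.5 threshold. A self-contained copy of just the ratio logic (figures are illustrative):

```typescript
// Self-contained sketch of the DTI gate used in precheck above.
function dtiGate(annualIncome: number, monthlyDebtPayments: number): "PASS" | "MANUAL_REVIEW" {
  const monthlyIncome = annualIncome / 12;
  const dti = monthlyDebtPayments / monthlyIncome;
  return dti > 0.5 ? "MANUAL_REVIEW" : "PASS";
}

// $60,000/year => $5,000/month; $2,800 debt => DTI 0.56 => manual review.
const flagged = dtiGate(60000, 2800);
// $2,000 debt => DTI 0.40 => passes the gate.
const passed = dtiGate(60000, 2000);
```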
  4. Run the agent workflow with LangChain

This pattern keeps the agent small and auditable. You can swap the policy source later without changing the control flow.

export async function underwriteApplicant(applicant: Applicant): Promise<UnderwritingDecision> {
  const gate = precheck(applicant);

  if (gate.decision !== "PASS") {
    return {
      riskBand: gate.decision === "DECLINE" ? "HIGH" : "MEDIUM",
      decision: gate.decision,
      rationale: gate.reason,
      missingInfo: [],
      policyReferences: ["precheck-rulebook-v1"],
    };
  }

  const policyContext = [
    "- Approve if income > $40k and DTI <= 35%",
    "- Manual review if income documentation is incomplete",
    "- Decline if sanctions screening is unresolved",
    "- Country must be in supported jurisdiction list",
  ].join("\n");

  const applicantFacts = JSON.stringify(
    {
      name: applicant.name,
      country: applicant.country,
      annualIncome: applicant.annualIncome,
      monthlyDebtPayments: applicant.monthlyDebtPayments,
      dtiRatio: Number((applicant.monthlyDebtPayments / (applicant.annualIncome / 12)).toFixed(2)),
    },
    null,
    2
  );

  const chain = underwritingPrompt.pipe(llm).pipe(parser);
  const result = await chain.invoke({ policyContext, applicantFacts });

  return result;
}
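The audit and trace layer from the architecture section can wrap this workflow with an append-only record per run. The field names below are assumptions for illustration, not a LangChain API; the real requirement is that each record captures enough to reconstruct the decision.

```typescript
// Append-only audit record for each underwriting run (sketch).
interface AuditRecord {
  applicantId: string;
  promptVersion: string;
  policyVersion: string;
  modelId: string;
  decision: string;
  rationale: string;
  timestamp: string;
}

const auditLog: AuditRecord[] = [];

function recordDecision(entry: Omit<AuditRecord, "timestamp">): AuditRecord {
  // Object.freeze gives cheap immutability at the application layer; real
  // deployments should also write to append-only storage.
  const record = Object.freeze({ ...entry, timestamp: new Date().toISOString() });
  auditLog.push(record);
  return record;
}
```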

Production Considerations

  • Deploy in-region

    • Keep inference endpoints in the same jurisdiction as customer data.
    • If your bank operates in multiple regions, route requests by tenant and residency rules.
  • Log everything needed for audit

    • Store prompt version, policy version, retrieved documents, final output, and human overrides.
    • Use immutable logs so compliance teams can reconstruct every decision path.
  • Add guardrails before generation

    • Redact SSNs, account numbers, passport IDs, and full card data before sending text to the model.
  • Monitor drift and override rates

Metric                      | Why it matters
Manual review rate          | Signals bad thresholds or weak policy coverage
Override rate               | Shows whether analysts disagree with model recommendations
Decline reason distribution | Helps catch broken prompts or stale policies
Latency p95                 | Impacts application completion rates
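The residency routing mentioned under "Deploy in-region" can be sketched as a lookup that fails closed rather than falling back across regions. The region names and endpoint URLs below are hypothetical.

```typescript
// Hypothetical region -> inference endpoint mapping.
const regionEndpoints: Record<string, string> = {
  "eu-west": "https://inference.eu-west.example.com",
  "us-east": "https://inference.us-east.example.com",
};

function resolveEndpoint(tenantRegion: string): string {
  const endpoint = regionEndpoints[tenantRegion];
  if (!endpoint) {
    // Fail closed: never silently route a tenant's data to another region.
    throw new Error(`No in-region endpoint for region: ${tenantRegion}`);
  }
  return endpoint;
}
```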

Common Pitfalls

  • Letting the model make final decisions

Avoid this by keeping approval logic in deterministic code. The LLM should explain and classify edge cases; it should not own credit policy.

  • Sending raw PII into prompts

Mask or tokenize sensitive fields before calling ChatOpenAI. For regulated fintech workloads, assume every prompt could be reviewed later by security or compliance.
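A minimal masking pass might look like the following; the regex patterns are illustrative, cover only common US-style formats, and are not a complete PII taxonomy.

```typescript
// Regex-based masking for common identifier shapes (sketch only).
function redactPII(text: string): string {
  return text
    // SSNs like 123-45-6789
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")
    // 13-16 digit card numbers, with or without separators
    .replace(/\b(?:\d[ -]?){13,16}\b/g, "[CARD]")
    // Long digit runs that look like account numbers
    .replace(/\b\d{8,12}\b/g, "[ACCOUNT]");
}
```

Run redaction on the serialized applicant facts before they enter the prompt template, so the masked form is also what lands in the audit log.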

  • Skipping versioning on policies and prompts

If you cannot reproduce a prior decision exactly, you do not have an auditable underwriting system. Version prompts, retrieval sources, thresholds, and model IDs together.
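One way to keep those versions moving together is a single frozen manifest stamped onto every decision. The identifiers below are illustrative, not real release names.

```typescript
// Pin every moving part of a decision together so it can be replayed later.
interface DecisionManifest {
  promptVersion: string;
  policyVersion: string;
  modelId: string;
  thresholds: { maxDti: number };
}

const currentManifest: Readonly<DecisionManifest> = Object.freeze({
  promptVersion: "uw-prompt-v2",
  policyVersion: "personal-loan-v3",
  modelId: "gpt-4o-mini",
  thresholds: { maxDti: 0.5 },
});

// Stamp each decision with the exact manifest that produced it.
function stampDecision<T extends object>(decision: T): T & { manifest: DecisionManifest } {
  return { ...decision, manifest: currentManifest };
}
```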


By Cyprian Aarons, AI Consultant at Topiax.
