How to Build an Underwriting Agent Using LangChain in TypeScript for Payments

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, langchain, typescript, payments

An underwriting agent for payments reviews merchant applications, transaction behavior, and risk signals, then returns a decision: approve, reject, or route to manual review. It matters because bad underwriting means chargebacks, fraud exposure, compliance failures, and wasted ops time.

Architecture

  • Application intake
    • Receives merchant profile data: business type, geography, volume estimates, ownership, MCC, and processing history.
  • Risk signal fetchers
    • Pulls internal and external data such as sanctions checks, fraud scores, chargeback history, KYB/KYC results, and device or IP reputation.
  • LangChain decision chain
    • Combines structured inputs with a prompt that produces a constrained underwriting recommendation.
  • Policy engine
    • Applies hard rules outside the model: prohibited industries, country restrictions, threshold limits, and compliance blocks.
  • Audit logger
    • Stores every input signal, model output, prompt version, and final decision for review and regulator traceability.
  • Human review queue
    • Captures borderline cases where the model confidence is low or policy checks require manual approval.
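
The human review queue is the one component not implemented in the steps below, so here is a minimal routing sketch. The riskScore threshold of 70 and the in-memory queue are illustrative assumptions, not part of any specific queueing system.

// Minimal sketch: route borderline decisions to a human review queue.
// The threshold of 70 and the in-memory array are illustrative assumptions.
type UnderwritingDecision = {
  decision: "approve" | "reject" | "manual_review";
  riskScore: number;
  reasons: string[];
};

const reviewQueue: Array<{ merchantName: string; result: UnderwritingDecision }> = [];

export function routeDecision(merchantName: string, result: UnderwritingDecision) {
  // Anything the model flags for manual review, or any borderline score, goes to humans.
  if (result.decision === "manual_review" || result.riskScore >= 70) {
    reviewQueue.push({ merchantName, result });
    return "queued_for_review";
  }
  return result.decision;
}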

Implementation

1. Define the underwriting input and output contracts

Keep the model input structured. For payments workflows, you want deterministic fields for compliance and auditability.

import { z } from "zod";

export const UnderwritingInputSchema = z.object({
  merchantName: z.string(),
  country: z.string(),
  industry: z.string(),
  monthlyVolumeUsd: z.number(),
  avgTicketUsd: z.number(),
  chargebackRate: z.number().optional(),
  kybStatus: z.enum(["passed", "failed", "pending"]),
  sanctionsHit: z.boolean(),
  notes: z.string().optional(),
});

export const UnderwritingDecisionSchema = z.object({
  decision: z.enum(["approve", "reject", "manual_review"]),
  riskScore: z.number().min(0).max(100),
  reasons: z.array(z.string()),
});

2. Build the LangChain chain with structured output

Use ChatOpenAI, ChatPromptTemplate, and RunnableSequence. The key pattern is to force a schema-backed response with withStructuredOutput.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { UnderwritingInputSchema, UnderwritingDecisionSchema } from "./schemas";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a payments underwriting agent.
Return only decisions that fit policy.
Reject if sanctionsHit is true or KYB failed.
Use manual_review for incomplete or ambiguous cases.`,
  ],
  [
    "human",
    `Merchant data:
{merchantJson}

Evaluate risk for payment processing underwriting.`,
  ],
]);

const underwritingChain = RunnableSequence.from([
  async (input: unknown) => {
    const parsed = UnderwritingInputSchema.parse(input);
    return {
      merchantJson: JSON.stringify(parsed),
      parsed,
    };
  },
  prompt,
  model.withStructuredOutput(UnderwritingDecisionSchema),
]);

export async function underwriteMerchant(input: unknown) {
  const result = await underwritingChain.invoke(input);
  return result;
}

This gives you two important properties:

  • Schema validation before the LLM runs
  • Structured output after the LLM runs

That combination is what keeps payment decisions auditable instead of free-form.
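
To sanity-check the chain end to end, invoke underwriteMerchant with a sample application. The merchant values below and the "./underwrite" import path are illustrative assumptions.

import { underwriteMerchant } from "./underwrite"; // path assumed; adjust to your layout

async function example() {
  // Sample application; every field value here is made up for illustration.
  const decision = await underwriteMerchant({
    merchantName: "Acme Coffee Ltd",
    country: "GB",
    industry: "food_and_beverage",
    monthlyVolumeUsd: 45000,
    avgTicketUsd: 12.5,
    chargebackRate: 0.004,
    kybStatus: "passed",
    sanctionsHit: false,
  });

  // decision conforms to UnderwritingDecisionSchema: { decision, riskScore, reasons }
  console.log(decision);
}

example().catch(console.error);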

3. Add hard policy gates before the model

Do not let the LLM override compliance blocks. For payments, rules like sanctions hits or unsupported geographies should short-circuit immediately.

type PolicyResult =
  | { blocked: true; decision: "reject"; reason: string }
  | { blocked: false };

function applyHardPolicy(input: {
  country: string;
  sanctionsHit?: boolean;
}): PolicyResult {
  if (input.sanctionsHit) {
    return { blocked: true, decision: "reject", reason: "Sanctions hit" };
  }

  const restrictedCountries = ["IR", "KP", "SY"];
  if (restrictedCountries.includes(input.country)) {
    return { blocked: true, decision: "reject", reason: "Restricted country" };
  }

  return { blocked: false };
}

export async function underwriteWithPolicy(input: unknown) {
  const parsed = UnderwritingInputSchema.parse(input);
  const policy = applyHardPolicy(parsed);

  if (policy.blocked) {
    return {
      decision: policy.decision,
      riskScore: 100,
      reasons: [policy.reason],
    };
  }

  return underwriteMerchant(parsed);
}
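
A quick way to verify the gate fires before the model is to pass a sanctions-hit application and confirm it comes back rejected deterministically. The merchant values here are illustrative.

// Smoke test: a sanctions hit should short-circuit to a rejection with no LLM call.
export async function policyGateSmokeTest() {
  const blocked = await underwriteWithPolicy({
    merchantName: "Example Imports LLC",
    country: "DE",
    industry: "electronics",
    monthlyVolumeUsd: 120000,
    avgTicketUsd: 300,
    kybStatus: "passed",
    sanctionsHit: true,
  });

  // Expected: { decision: "reject", riskScore: 100, reasons: ["Sanctions hit"] }
  console.log(blocked);
}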

4. Persist audit logs for every decision

For regulated payments systems, log inputs, outputs, prompt version hashes, and timestamps. If you cannot reproduce the decision later, you do not have an auditable system.

import crypto from "crypto";

function hashPayload(payload: unknown) {
  return crypto.createHash("sha256").update(JSON.stringify(payload)).digest("hex");
}

export async function auditedUnderwrite(input: unknown) {
  const requestHash = hashPayload(input);

  const result = await underwriteWithPolicy(input);

  const auditRecord = {
    requestHash,
    decision: result.decision,
    riskScore: result.riskScore,
    reasonsHash: hashPayload(result.reasons),
    timestampUtc: new Date().toISOString(),
    modelVersion: "gpt-4o-mini",
    // Also record the prompt version/hash here (see Common Pitfalls below).
  };

  console.log(JSON.stringify(auditRecord));
  return result;
}

Production Considerations

  • Deployment isolation
    • Keep underwriting services in a private network segment with strict egress controls. Payments data often contains PII and KYB details that should not leave approved regions.
  • Monitoring
    • Track approval rate, manual review rate, sanction-block rate, false positives on good merchants, and drift in risk scores by geography or MCC (a small metrics sketch follows this list).
  • Guardrails
    • Use hard-coded compliance rules outside the model for sanctions, prohibited industries, and residency constraints. The LLM should recommend; policy should decide where required.
  • Data residency
    • Store merchant records and audit logs in-region if your regulatory regime requires it. Make sure your embedding store or vector DB does not replicate sensitive data across jurisdictions.
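
The decision-mix part of that monitoring can be computed directly from the audit records produced in step 4. A minimal sketch; the AuditRecord shape and decisionMix name are illustrative, not part of the implementation above.

// Compute approval / manual-review / reject rates from stored audit records.
type AuditRecord = { decision: "approve" | "reject" | "manual_review" };

export function decisionMix(records: AuditRecord[]) {
  const total = records.length || 1; // avoid division by zero on an empty window
  const count = (d: AuditRecord["decision"]) =>
    records.filter((r) => r.decision === d).length;

  return {
    approvalRate: count("approve") / total,
    manualReviewRate: count("manual_review") / total,
    rejectRate: count("reject") / total,
  };
}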

Common Pitfalls

  • Letting the model make final compliance calls

    If sanctions screening or country restrictions are embedded only in prompts, one bad response can create a legal problem. Put those checks in deterministic code before any LLM call.

  • Using unstructured outputs

    Free-form text is hard to validate and impossible to reliably audit at scale. Always use withStructuredOutput with a Zod schema so downstream systems get predictable fields.

  • Ignoring prompt/version traceability

    If you do not record prompt text versions and model versions alongside each decision, you cannot explain why a merchant was approved or rejected later. Store hashes of prompts plus the exact model name in your audit trail.
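
A minimal sketch of that traceability, using the same SHA-256 approach as the audit logger: hash the exact prompt text once at startup and attach it, together with the model name, to every audit record. The SYSTEM_PROMPT text and PROMPT_VERSION label are placeholders, not values from the implementation above.

import crypto from "crypto";

// Placeholder prompt text and version label; substitute your real prompt template.
const SYSTEM_PROMPT = `You are a payments underwriting agent. ...`;
const PROMPT_VERSION = "2026-04-21";

const promptHash = crypto.createHash("sha256").update(SYSTEM_PROMPT).digest("hex");

// Attach this metadata to each audit record alongside requestHash and the decision.
export const promptMetadata = {
  promptVersion: PROMPT_VERSION,
  promptHash,
  modelVersion: "gpt-4o-mini",
};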


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
