How to Build an Underwriting Agent Using LangGraph in TypeScript for Payments

By Cyprian Aarons
Updated 2026-04-21

An underwriting agent for payments decides whether a merchant, transaction flow, or payout request should be approved, held for review, or rejected. In practice, it reduces manual review load while keeping compliance, fraud, and risk controls in the loop.

For payments teams, this matters because bad underwriting decisions create chargebacks, compliance exposure, and processor losses. A good agent does not “auto-approve everything”; it applies policy, records evidence, and routes edge cases to humans.

Architecture

  • Input normalization layer
    • Converts raw merchant application data, KYB fields, transaction metadata, and device/IP signals into a stable state shape.
  • Risk scoring node
    • Computes a structured risk assessment from rules plus model output.
    • Keeps the score explainable for audit and operations.
  • Policy gate node
    • Applies hard business rules:
      • sanctioned geography
      • prohibited MCCs
      • missing beneficial ownership
      • velocity anomalies
  • Decision node
    • Produces one of three outcomes:
      • approve
      • review
      • reject
  • Audit logging node
    • Persists the full decision trace, inputs used, policy hits, and final outcome.
  • Human review handoff
    • Routes borderline cases to an ops queue with the exact reason the agent paused.

Implementation

1) Define the state and decision contract

Keep the graph state explicit. Underwriting systems fail when state is vague and every node invents its own schema.

import { Annotation } from "@langchain/langgraph";

export type UnderwritingDecision = "approve" | "review" | "reject";

export type UnderwritingState = {
  applicantId: string;
  merchantName: string;
  country: string;
  mcc: string;
  monthlyVolumeUsd: number;
  chargebackRate?: number;
  kybComplete: boolean;
  sanctionsHit: boolean;
  riskScore?: number;
  decision?: UnderwritingDecision;
  reasons?: string[];
};

export const State = Annotation.Root({
  applicantId: Annotation<string>(),
  merchantName: Annotation<string>(),
  country: Annotation<string>(),
  mcc: Annotation<string>(),
  monthlyVolumeUsd: Annotation<number>(),
  chargebackRate: Annotation<number | undefined>(),
  kybComplete: Annotation<boolean>(),
  sanctionsHit: Annotation<boolean>(),
  riskScore: Annotation<number | undefined>(),
  decision: Annotation<UnderwritingDecision | undefined>(),
  reasons: Annotation<string[] | undefined>(),
});

2) Build deterministic nodes first

Use deterministic checks before any model call. For payments underwriting, hard rules should always win over probabilistic output.

import { StateGraph, START, END } from "@langchain/langgraph";

const scoreRisk = async (state: typeof State.State): Promise<Partial<typeof State.State>> => {
  let score = 0;

  if (!state.kybComplete) score += 40;
  if (state.sanctionsHit) score += 100;
  if (state.country !== "US") score += 10;
  if ((state.chargebackRate ?? 0) > 0.03) score += 25;
  if (state.monthlyVolumeUsd > 100000) score += Math.min(20, state.monthlyVolumeUsd / 10000);

  return { riskScore: Math.min(100, score) };
};

const applyPolicy = async (state: typeof State.State): Promise<Partial<typeof State.State>> => {
  const reasons: string[] = [];

  if (!state.kybComplete) reasons.push("KYB incomplete");
  if (state.sanctionsHit) reasons.push("Sanctions screening hit");
  if ((state.chargebackRate ?? 0) > 0.03) reasons.push("Chargeback rate above threshold");

  let decision: UnderwritingDecision = "approve";
  if (state.sanctionsHit || !state.kybComplete) decision = "reject";
  else if ((state.riskScore ?? 0) >= 40) decision = "review";

  if (decision === "approve" && reasons.length === 0) reasons.push("No blocking policy violations");

  return { decision, reasons };
};
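Before wiring the graph, it helps to sanity-check the deterministic policy logic in isolation. A minimal standalone sketch (redeclaring the types so it runs on its own, outside the graph):

```typescript
type Decision = "approve" | "review" | "reject";

type PolicyInput = {
  kybComplete: boolean;
  sanctionsHit: boolean;
  chargebackRate?: number;
  riskScore?: number;
};

// Same rule ordering as the applyPolicy node: hard rejects first,
// then the risk-score threshold, then the default approve.
function decide(input: PolicyInput): { decision: Decision; reasons: string[] } {
  const reasons: string[] = [];
  if (!input.kybComplete) reasons.push("KYB incomplete");
  if (input.sanctionsHit) reasons.push("Sanctions screening hit");
  if ((input.chargebackRate ?? 0) > 0.03) reasons.push("Chargeback rate above threshold");

  let decision: Decision = "approve";
  if (input.sanctionsHit || !input.kybComplete) decision = "reject";
  else if ((input.riskScore ?? 0) >= 40) decision = "review";

  return { decision, reasons };
}

// A sanctions hit always rejects, regardless of how low the risk score is.
console.log(decide({ kybComplete: true, sanctionsHit: true, riskScore: 10 }).decision);
```

Because the function is pure, these checks can live in ordinary unit tests and run on every rule change.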

3) Add a routing function for human review

LangGraph’s conditional edges are the right fit here. You want a clean branch between auto-decision and manual ops review.

const routeAfterPolicy = (state: typeof State.State): "humanReview" | typeof END => {
	if (state.decision === "review") return "humanReview";
	return END;
};

const humanReview = async (
	state: typeof State.State
): Promise<Partial<typeof State.State>> => {
	// Replace with your queue write:
	// e.g. Kafka topic, SQS message, or internal case management API.
	console.log("Routing to human review:", {
		applicantId: state.applicantId,
		reasons: state.reasons,
		riskScore: state.riskScore,
	});

	return {};
};

4) Compile and run the graph

This is the actual LangGraph pattern you deploy in TypeScript services.

const graph = new StateGraph(State)
	.addNode("scoreRisk", scoreRisk)
	.addNode("applyPolicy", applyPolicy)
	.addNode("humanReview", humanReview)
	.addEdge(START, "scoreRisk")
	.addEdge("scoreRisk", "applyPolicy")
	.addConditionalEdges("applyPolicy", routeAfterPolicy)
	.addEdge("humanReview", END);

export const underwritingApp = graph.compile();

async function main() {
	const result = await underwritingApp.invoke({
		applicantId: "app_123",
		merchantName: "Northwind Payments",
		country: "US",
		mcc: "6012",
		monthlyVolumeUsd: 50000,
		chargebackRate: 0.012,
		kybComplete: true,
		sanctionsHit: false,
	});

	console.log(result);
}

main();

Production Considerations

  • Keep hard controls outside model output

    Sanctions hits, prohibited MCCs, KYB completeness, and residency constraints should be deterministic checks. The model can explain or summarize risk; it should not override policy.
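As a sketch of what "outside model output" means in practice, a prohibited-MCC check can be a plain set lookup that runs before any model call. The MCC list below is illustrative only, not a real policy:

```typescript
// Illustrative prohibited MCC set; substitute your processor's actual policy list.
const PROHIBITED_MCCS = new Set(["7995", "5967", "5993"]);

// Returns the list of hard-control hits; any hit means the model never decides.
function hitsHardControl(mcc: string, sanctionsHit: boolean, kybComplete: boolean): string[] {
  const hits: string[] = [];
  if (PROHIBITED_MCCS.has(mcc)) hits.push(`Prohibited MCC ${mcc}`);
  if (sanctionsHit) hits.push("Sanctions screening hit");
  if (!kybComplete) hits.push("KYB incomplete");
  return hits;
}
```

In the graph, a non-empty result from a check like this would short-circuit straight to reject, bypassing any model node.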

  • Log every decision path

    Persist input hashes, policy flags, final outcome, timestamp, graph version, and reviewer ID if escalated. Payments teams need this for audit trails and dispute investigations.
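A hypothetical shape for that audit record, assuming Node's built-in crypto module for the input hash (field names are illustrative; adapt them to your own store):

```typescript
import { createHash } from "node:crypto";

// Hypothetical audit record; hash the normalized inputs rather than storing raw PII inline.
type DecisionRecord = {
  applicantId: string;
  inputHash: string;
  policyFlags: string[];
  outcome: "approve" | "review" | "reject";
  graphVersion: string;
  decidedAt: string;
  reviewerId?: string;
};

function buildRecord(
  applicantId: string,
  inputs: unknown,
  flags: string[],
  outcome: DecisionRecord["outcome"],
  graphVersion: string
): DecisionRecord {
  // A stable stringify matters in real systems; JSON.stringify is enough for a sketch.
  const inputHash = createHash("sha256").update(JSON.stringify(inputs)).digest("hex");
  return {
    applicantId,
    inputHash,
    policyFlags: flags,
    outcome,
    graphVersion,
    decidedAt: new Date().toISOString(),
  };
}
```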

  • Respect data residency

    If merchant PII or bank account details must stay in-region, deploy the graph runtime and any vector store or telemetry sink in that region too. Do not ship raw KYB documents to a cross-border LLM endpoint.

  • Monitor drift on approval/review ratios

    Track reject rate by MCC, geography, processor BIN range, and onboarding source. Sudden shifts usually mean upstream data quality issues or a broken rule.
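A simple aggregation over decision events is enough to feed such a dashboard; this sketch computes reject rate per MCC from an in-memory list, standing in for whatever event stream or warehouse query you actually use:

```typescript
type Outcome = "approve" | "review" | "reject";
type DecisionEvent = { mcc: string; outcome: Outcome };

// Aggregate reject rate per MCC; sudden shifts here are the drift signal to alert on.
function rejectRateByMcc(events: DecisionEvent[]): Map<string, number> {
  const totals = new Map<string, { total: number; rejects: number }>();
  for (const e of events) {
    const t = totals.get(e.mcc) ?? { total: 0, rejects: 0 };
    t.total += 1;
    if (e.outcome === "reject") t.rejects += 1;
    totals.set(e.mcc, t);
  }
  const rates = new Map<string, number>();
  for (const [mcc, t] of totals) rates.set(mcc, t.rejects / t.total);
  return rates;
}
```

The same fold works for any other dimension (geography, BIN range, onboarding source) by swapping the grouping key.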

Common Pitfalls

  • Letting the LLM make final approve/reject calls

    That is how you get inconsistent decisions and weak auditability. Use the model for explanation or summarization only; keep policy evaluation deterministic.

  • Skipping reason codes

    A bare reject is useless to operations and compliance teams. Always return structured reasons like KYB incomplete, sanctions hit, or chargeback threshold exceeded.
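One way to keep reasons structured is a fixed catalogue mapping machine-readable codes to human-readable text, so operations tooling can filter on codes while reviewers read plain descriptions. The catalogue below is a hypothetical example:

```typescript
// Hypothetical reason-code catalogue: stable codes for machines, text for reviewers.
const REASON_CODES = {
  KYB_INCOMPLETE: "KYB incomplete",
  SANCTIONS_HIT: "Sanctions screening hit",
  CHARGEBACK_THRESHOLD: "Chargeback rate above threshold",
} as const;

type ReasonCode = keyof typeof REASON_CODES;

type Rejection = { decision: "reject"; reasonCodes: ReasonCode[] };

// Render the codes for an ops queue or case-management UI.
function describe(r: Rejection): string[] {
  return r.reasonCodes.map((c) => `${c}: ${REASON_CODES[c]}`);
}
```

Because `ReasonCode` is derived from the catalogue, TypeScript rejects any code that has not been registered, which keeps reason strings from drifting across nodes.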

  • Ignoring replayability

    If you cannot rerun the same application through the same graph version with the same inputs, your audit story falls apart. Version your prompts/rules/code together and store the exact graph release used for each decision.
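A replay gate can be sketched as a hash-and-version comparison, assuming decisions were stored with an input hash and graph version as described above (names here are illustrative):

```typescript
import { createHash } from "node:crypto";

type StoredDecision = { inputHash: string; graphVersion: string; outcome: string };

// Pinned alongside prompts and rules in the same release artifact (illustrative value).
const DEPLOYED_GRAPH_VERSION = "2026-04-21.1";

// Replay is only valid when both the inputs and the graph release match the original run.
function canReplay(stored: StoredDecision, rawInputs: unknown): boolean {
  const recomputed = createHash("sha256").update(JSON.stringify(rawInputs)).digest("hex");
  return recomputed === stored.inputHash && stored.graphVersion === DEPLOYED_GRAPH_VERSION;
}
```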


By Cyprian Aarons, AI Consultant at Topiax.