How to Build an Underwriting Agent Using AutoGen in TypeScript for Payments

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, autogen, typescript, payments

An underwriting agent for payments takes a merchant application, checks risk signals, and returns a decision with reasons the business can audit. In practice, that means faster onboarding, fewer manual reviews, and tighter control over fraud, chargebacks, and compliance exposure.

Architecture

Build this agent as a small workflow, not a single prompt.

  • Merchant intake service
    • Collects application data: legal entity, MCC, processing volume, countries, refund policy, chargeback history.
  • Risk retrieval layer
    • Pulls internal signals from KYC/KYB systems, sanctions screening, device intelligence, and prior processor history.
  • AutoGen agent group
    • One agent summarizes the case.
    • One agent evaluates policy rules.
    • One agent produces the final underwriting recommendation.
  • Decision store
    • Persists the decision, rationale, and evidence references for audit and dispute handling.
  • Policy engine
    • Enforces hard rules like prohibited MCCs, geography restrictions, velocity thresholds, and reserve requirements.
  • Observability layer
    • Logs prompts, tool calls, decisions, latency, and human overrides.
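The layers above can be sketched as a thin orchestration function. The interface and function names below are hypothetical placeholders for your own services, not AutoGen APIs:

```typescript
// Hypothetical layer interfaces; wire in your real intake, risk, and storage services.
type Application = { merchantId: string };
type Signals = { sanctionsHit: boolean };
type Decision = { decision: string; reasonCodes: string[] };

interface IntakeService { collect(merchantId: string): Promise<Application>; }
interface RiskRetrieval { signals(app: Application): Promise<Signals>; }
interface DecisionStore { persist(d: Decision): Promise<void>; }

// The workflow runs each layer in order and persists the outcome for audit.
async function runUnderwritingWorkflow(
  intake: IntakeService,
  risk: RiskRetrieval,
  decide: (app: Application, s: Signals) => Promise<Decision>,
  store: DecisionStore,
  merchantId: string
): Promise<Decision> {
  const app = await intake.collect(merchantId);
  const signals = await risk.signals(app);
  const decision = await decide(app, signals);
  await store.persist(decision);
  return decision;
}
```

Keeping the agent behind a plain `decide` callback makes the rest of the pipeline testable without any LLM in the loop.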

Implementation

1) Install AutoGen for TypeScript and define your risk model

Use the AutoGen TypeScript package and keep your underwriting payload explicit. Payments teams need structured inputs because free-form merchant descriptions make for poor audit material.

npm install @autogenai/autogen

export type MerchantApplication = {
  merchantId: string;
  legalName: string;
  country: string;
  mcc: string;
  monthlyVolumeUsd: number;
  avgTicketUsd: number;
  refundRatePct: number;
  chargebackRatePct: number;
  sanctionsHit: boolean;
  kycStatus: "passed" | "failed" | "pending";
};

export type UnderwritingDecision = {
  decision: "approve" | "decline" | "manual_review";
  reasonCodes: string[];
  reservePct?: number;
};
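Because this payload is the foundation of the audit record, it is worth validating at the boundary before anything reaches the agent. A minimal hand-rolled guard might look like the sketch below (a schema library would also work; the error messages are illustrative):

```typescript
// Runtime guard for an incoming application payload; returns a list of field errors.
function validateApplication(input: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof input.merchantId !== "string" || input.merchantId.length === 0) {
    errors.push("merchantId must be a non-empty string");
  }
  // MCCs are four-digit codes.
  if (typeof input.mcc !== "string" || !/^\d{4}$/.test(input.mcc)) {
    errors.push("mcc must be a 4-digit string");
  }
  if (typeof input.monthlyVolumeUsd !== "number" || input.monthlyVolumeUsd < 0) {
    errors.push("monthlyVolumeUsd must be a non-negative number");
  }
  if (!["passed", "failed", "pending"].includes(input.kycStatus as string)) {
    errors.push("kycStatus must be passed, failed, or pending");
  }
  return errors;
}
```

Reject the application with the collected errors rather than letting a half-formed payload flow into the agent conversation.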

2) Create an AutoGen assistant with tools for policy checks

For underwriting, let the model reason over evidence but keep deterministic rules in code. AutoGen’s AssistantAgent should call tools that you own.

import { AssistantAgent } from "@autogenai/autogen";

const policyRules = {
  blockedCountries: ["IR", "KP", "SY"],
  blockedMccs: ["4829", "5967"],
};

function evaluateHardRules(app: MerchantApplication) {
  const reasonCodes: string[] = [];

  if (policyRules.blockedCountries.includes(app.country)) {
    reasonCodes.push("BLOCKED_COUNTRY");
  }
  if (policyRules.blockedMccs.includes(app.mcc)) {
    reasonCodes.push("BLOCKED_MCC");
  }
  if (app.sanctionsHit) {
    reasonCodes.push("SANCTIONS_MATCH");
  }
  if (app.kycStatus !== "passed") {
    reasonCodes.push("KYC_NOT_PASSED");
  }

  return reasonCodes;
}

const underwritingAgent = new AssistantAgent({
  name: "underwriting_agent",
  systemMessage:
    "You underwrite payment merchants. Use provided evidence only. Return concise JSON with decision and reason codes.",
});
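To see the gate in action, here is the hard-rule check applied to a sanctions-flagged application (a trimmed, standalone copy of the code above, with types reduced for brevity):

```typescript
// Trimmed copy of the hard-rule gate for a standalone demo.
const rules = { blockedCountries: ["IR", "KP", "SY"], blockedMccs: ["4829", "5967"] };

type App = { country: string; mcc: string; sanctionsHit: boolean; kycStatus: string };

function hardRuleReasons(app: App): string[] {
  const reasons: string[] = [];
  if (rules.blockedCountries.includes(app.country)) reasons.push("BLOCKED_COUNTRY");
  if (rules.blockedMccs.includes(app.mcc)) reasons.push("BLOCKED_MCC");
  if (app.sanctionsHit) reasons.push("SANCTIONS_MATCH");
  if (app.kycStatus !== "passed") reasons.push("KYC_NOT_PASSED");
  return reasons;
}

// A money-transfer merchant (MCC 4829) with a sanctions hit trips two gates.
const reasons = hardRuleReasons({ country: "GB", mcc: "4829", sanctionsHit: true, kycStatus: "passed" });
// reasons → ["BLOCKED_MCC", "SANCTIONS_MATCH"]
```

Every reason code here maps to a deterministic rule, so a declined merchant can be told exactly which control fired.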

3) Run the agent with an explicit decision contract

The pattern here is simple: evaluate hard rules first, then let AutoGen write the final recommendation using the structured context. If any blocking rule fires, do not let the model override it.

import { UserProxyAgent } from "@autogenai/autogen";

export async function underwriteMerchant(app: MerchantApplication): Promise<UnderwritingDecision> {
  const hardRuleReasons = evaluateHardRules(app);

  if (hardRuleReasons.length > 0) {
    return {
      decision: "decline",
      reasonCodes: hardRuleReasons,
    };
  }

  // ... feed the structured case into the AutoGen conversation
}

That snippet is intentionally incomplete because the real pattern is to feed the structured case into an AutoGen conversation and extract a bounded result. Here is the working version:

import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

const underwritingAgent = new AssistantAgent({
  name: "underwriting_agent",
  systemMessage:
    "You underwrite payment merchants. Use only the supplied application data and policy notes. Return JSON with decision, reasonCodes, and optional reservePct.",
});

const userProxy = new UserProxyAgent({
  name: "policy_runner",
});

export async function underwriteMerchant(
  app: MerchantApplication
): Promise<UnderwritingDecision> {
  const hardRuleReasons = evaluateHardRules(app);

  if (hardRuleReasons.length > 0) {
    return { decision: "decline", reasonCodes: hardRuleReasons };
  }

  const prompt = `
Merchant application:
${JSON.stringify(app)}

Policy guidance:
- Approve low-risk merchants with KYC passed and chargeback rate under 1%.
- Manual review for borderline volume or elevated refunds.
- Consider reserve for higher-risk but acceptable merchants.
`;

  const result = await userProxy.initiateChat(underwritingAgent, prompt, {
    maxTurns: 1,
  });

  const text = result.chatHistory.at(-1)?.content ?? "{}";

  // Model output may not be valid JSON; fail safe to manual review.
  try {
    return JSON.parse(text) as UnderwritingDecision;
  } catch {
    return { decision: "manual_review", reasonCodes: ["UNPARSEABLE_MODEL_OUTPUT"] };
  }
}
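The optional reservePct field deserves a deterministic policy of its own rather than a number the model invents. A hedged sketch of a tiered rule (the thresholds below are illustrative, not industry standards):

```typescript
// Illustrative reserve sizing: riskier chargeback/refund profiles earn a rolling reserve.
// Tune these thresholds against your own loss data before using them.
function suggestReservePct(
  chargebackRatePct: number,
  refundRatePct: number
): number | undefined {
  if (chargebackRatePct >= 1.5 || refundRatePct >= 10) return 15; // high risk: 15% rolling reserve
  if (chargebackRatePct >= 0.9 || refundRatePct >= 5) return 10;  // elevated risk: 10%
  if (chargebackRatePct >= 0.5) return 5;                         // mildly elevated: 5%
  return undefined;                                               // no reserve needed
}
```

You can then clamp or overwrite whatever reservePct the model proposes with this function's output, so the reserve is always explainable in rule terms.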

4) Add an audit trail before writing to your decision store

Payments teams need traceability. Store the input snapshot, model output, policy reasons, timestamp, and version of your underwriting rules.

type AuditRecord = {
  merchantId: string;
  decision: UnderwritingDecision;
  applicationSnapshot: MerchantApplication;
  policyVersion: string;
  createdAt: string;
};

async function persistAudit(record: AuditRecord) {
  // write to Postgres / DynamoDB / BigQuery / immutable log
  console.log(JSON.stringify(record));
}

export async function underwriteAndPersist(app: MerchantApplication) {
  const decision = await underwriteMerchant(app);

  await persistAudit({
    merchantId: app.merchantId,
    decision,
    applicationSnapshot: app,
    policyVersion: "2026-04-payments-v1",
    createdAt: new Date().toISOString(),
  });

  return decision;
}

Production Considerations

  • Enforce hard compliance gates outside the model
    • Sanctions hits, blocked geographies, prohibited MCCs, and KYC failures should short-circuit before any LLM output can influence the result.
  • Keep data residency explicit
    • If merchant data must stay in-region, run AutoGen against a deployment in that region and avoid sending raw PII or bank account details into non-compliant endpoints.
  • Log every decision path
    • Persist prompt input hashes, tool outputs, final JSON response, model version, and policy version. That gives you auditability when merchants challenge a decline.
  • Monitor drift on payment outcomes
    • Track approval rate by MCC, country corridor, average reserve percentage, manual review rate, chargeback rate after onboarding, and false positives on sanctions screening.
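The "prompt input hashes" mentioned above can be produced with Node's built-in crypto module. The sketch below assumes a JSON-serializable payload and canonicalizes key order so the same input always yields the same hash:

```typescript
import { createHash } from "node:crypto";

// Recursively serialize with sorted keys so logically equal payloads hash identically.
function canonicalize(value: unknown): string {
  if (value === null || typeof value !== "object") return JSON.stringify(value);
  if (Array.isArray(value)) return "[" + value.map(canonicalize).join(",") + "]";
  const obj = value as Record<string, unknown>;
  return (
    "{" +
    Object.keys(obj)
      .sort()
      .map((k) => JSON.stringify(k) + ":" + canonicalize(obj[k]))
      .join(",") +
    "}"
  );
}

// Stable SHA-256 hash of the model input: proves what the agent saw
// without storing raw PII in every log line.
function promptInputHash(payload: unknown): string {
  return createHash("sha256").update(canonicalize(payload)).digest("hex");
}
```

Store the hash alongside the model version and policy version; when a merchant disputes a decline, you can re-derive the hash from the archived snapshot and confirm the logged decision came from that exact input.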

Common Pitfalls

  1. Letting the model make final compliance calls

    • Fix this by encoding prohibited categories in deterministic code first. The LLM should explain decisions, not override regulated controls.
  2. Using unstructured merchant text as primary input

    • Free-form descriptions hide risk. Normalize everything into typed fields like mcc, country, refundRatePct, and chargebackRatePct.
  3. Skipping versioning on policy and prompts

    • When an approved merchant later causes losses or disputes a decline, you need to know exactly which prompt template and rule set produced that outcome.
  4. Ignoring manual review thresholds

    • Not every case should be auto-approved or auto-declined. Route borderline cases to analysts when volume spikes fast or refund behavior is near your risk boundary.
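Pitfall 4 can be encoded directly in code. A sketch of a borderline detector that forces manual review before any auto decision (the spike multiplier and refund boundary here are illustrative):

```typescript
type Borderline = { manualReview: boolean; reasons: string[] };

// Illustrative borderline checks; tune the thresholds to your own loss history.
function checkBorderline(
  monthlyVolumeUsd: number,
  priorMonthVolumeUsd: number,
  refundRatePct: number
): Borderline {
  const reasons: string[] = [];
  // Volume more than tripled month over month: velocity spike.
  if (priorMonthVolumeUsd > 0 && monthlyVolumeUsd / priorMonthVolumeUsd > 3) {
    reasons.push("VOLUME_SPIKE");
  }
  // Refund rate just under the 5% policy boundary: too close to auto-approve.
  if (refundRatePct >= 4 && refundRatePct < 5) {
    reasons.push("REFUND_NEAR_BOUNDARY");
  }
  return { manualReview: reasons.length > 0, reasons };
}
```

Run this before the agent conversation: if manualReview is true, route the case to an analyst queue with the reason codes attached instead of returning an automated approve or decline.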

By Cyprian Aarons, AI Consultant at Topiax.