How to Build an Underwriting Agent Using CrewAI in TypeScript for Payments
An underwriting agent for payments evaluates a merchant, transaction, or payout request against risk rules before money moves. In practice, it reduces manual review load, speeds up approvals, and gives you a consistent decision trail for compliance and audit.
Architecture
- Input normalizer
  - Takes merchant profile, transaction metadata, KYB/KYC signals, and jurisdiction.
  - Converts messy upstream payloads into a strict schema before the agent sees anything.
- Risk policy tool
  - Encapsulates underwriting rules: MCC restrictions, chargeback thresholds, velocity checks, sanctions flags, and country restrictions.
  - Keeps policy outside the LLM prompt so you can version it and test it.
- Evidence retrieval layer
  - Pulls facts from internal systems: merchant history, disputes, settlement delays, prior manual reviews.
  - The agent should cite these facts in its decision output.
- Underwriting crew
  - A small CrewAI crew with a research-oriented agent and a decision agent.
  - One agent gathers evidence; the other writes the final recommendation.
- Decision store / audit log
  - Persists inputs, retrieved evidence, model output, and final decision.
  - Required for payments auditability and post-incident review.
- Human review queue
  - Routes borderline cases to an analyst.
  - Prevents auto-approval on incomplete or low-confidence cases.
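To make the decision store and review queue concrete, here is a minimal sketch of the record shape and routing logic they imply. The field names, the `confidence` signal, and the 0.8 threshold are illustrative assumptions, not CrewAI APIs.

```typescript
// Sketch of the audit-log record persisted by the decision store.
type Decision = "approve" | "review" | "decline";

interface DecisionRecord {
  merchantId: string;
  normalizedInput: unknown; // output of the input normalizer
  evidence: unknown[];      // facts pulled by the evidence layer
  policyDecision: Decision; // deterministic policy result
  modelRationale: string;   // the crew's written recommendation
  finalDecision: Decision;  // after any human review
  decidedAt: string;        // ISO timestamp for audit
}

// Borderline cases go to the human review queue instead of auto-approval.
function routeForReview(policyDecision: Decision, confidence: number): Decision {
  if (policyDecision === "decline") return "decline";
  if (policyDecision === "review" || confidence < 0.8) return "review";
  return "approve";
}
```

The important property is that a `decline` is never softened downstream, and low-confidence approvals are demoted to review rather than auto-approved.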
Implementation
1) Install dependencies and define your underwriting schema
Use crewai plus a schema library like zod so you can reject malformed payment requests before they hit the model.
```bash
npm install crewai zod
```
Create a strict request shape. Payments systems fail when “merchant” becomes an untyped blob.
```typescript
import { z } from "zod";

export const UnderwriteRequestSchema = z.object({
  merchantId: z.string(),
  legalName: z.string(),
  country: z.string().length(2),
  mcc: z.string(),
  monthlyVolumeUsd: z.number().nonnegative(),
  averageTicketUsd: z.number().nonnegative(),
  chargebackRate: z.number().min(0).max(1),
  sanctionsHit: z.boolean(),
  kybVerified: z.boolean(),
});

export type UnderwriteRequest = z.infer<typeof UnderwriteRequestSchema>;
```
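The same ingress contract can also be expressed as a plain type guard where you want the check to be unit-testable with zero dependencies (for example in a lightweight edge worker). This is a hand-rolled sketch covering a subset of the fields, not a replacement for the zod schema:

```typescript
// Dependency-free sketch of the ingress check; field subset is illustrative.
interface UnderwriteRequestLite {
  merchantId: string;
  country: string;        // ISO 3166-1 alpha-2
  chargebackRate: number; // ratio in [0, 1]
  kybVerified: boolean;
}

function isUnderwriteRequest(v: unknown): v is UnderwriteRequestLite {
  const r = v as Record<string, unknown>;
  return (
    typeof r === "object" &&
    r !== null &&
    typeof r.merchantId === "string" &&
    typeof r.country === "string" &&
    r.country.length === 2 &&
    typeof r.chargebackRate === "number" &&
    r.chargebackRate >= 0 &&
    r.chargebackRate <= 1 &&
    typeof r.kybVerified === "boolean"
  );
}
```

Either way, the point is the same: reject the payload at the boundary so "merchant" never reaches the crew as an untyped blob.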
2) Build tools for policy checks and evidence lookup
CrewAI works best when the LLM is not guessing at core risk facts. Put deterministic payment controls in tools.
```typescript
import { Tool } from "crewai";

const policyTool = new Tool({
  name: "payment_policy_check",
  description:
    "Checks underwriting policy for payments merchants using deterministic rules.",
  func: async (input: string) => {
    const req = JSON.parse(input);
    const reasons: string[] = [];

    // Sanctions are terminal: return immediately so no later,
    // milder rule can downgrade a decline to a review.
    if (req.sanctionsHit) {
      return JSON.stringify({ decision: "decline", reasons: ["Sanctions hit"] });
    }

    if (!req.kybVerified) {
      reasons.push("KYB not verified");
    }
    if (req.chargebackRate > 0.08) {
      reasons.push("Chargeback rate above threshold");
    }
    if (req.country !== "US" && req.monthlyVolumeUsd > 500000) {
      reasons.push("Cross-border high-volume merchant requires manual review");
    }

    const decision = reasons.length > 0 ? "review" : "approve";
    return JSON.stringify({ decision, reasons });
  },
});
```
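Because the policy lives outside the prompt, you can also extract the rule logic into a pure function and unit-test it with no CrewAI or LLM in the loop. A sketch of that extraction (names are illustrative):

```typescript
// Pure, unit-testable version of the same underwriting rules.
type Decision = "approve" | "review" | "decline";

interface PolicyInput {
  kybVerified: boolean;
  sanctionsHit: boolean;
  chargebackRate: number;
  country: string;
  monthlyVolumeUsd: number;
}

function applyPolicy(req: PolicyInput): { decision: Decision; reasons: string[] } {
  // Sanctions are terminal: nothing later may soften a decline.
  if (req.sanctionsHit) {
    return { decision: "decline", reasons: ["Sanctions hit"] };
  }

  const reasons: string[] = [];
  if (!req.kybVerified) reasons.push("KYB not verified");
  if (req.chargebackRate > 0.08) reasons.push("Chargeback rate above threshold");
  if (req.country !== "US" && req.monthlyVolumeUsd > 500_000) {
    reasons.push("Cross-border high-volume merchant requires manual review");
  }

  return { decision: reasons.length > 0 ? "review" : "approve", reasons };
}
```

The tool then becomes a thin wrapper around `applyPolicy`, and your test suite exercises the rules directly against fixture merchants.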
```typescript
const evidenceTool = new Tool({
  name: "merchant_evidence_lookup",
  description:
    "Fetches historical merchant risk data for underwriting decisions.",
  func: async (merchantId: string) => {
    // Stubbed response; in production this is backed by a read-only API.
    return JSON.stringify({
      merchantId,
      priorReviews: 2,
      priorDeclines: 1,
      disputeRatioLast90d: 0.031,
      avgSettlementDays: 4,
      residencyRegion: "eu-west-1",
    });
  },
});
```
3) Create the CrewAI agents and tasks
Use one agent to gather evidence and another to produce the final underwriting recommendation with an audit-friendly rationale.
```typescript
import { Agent, Crew, Process, Task } from "crewai";

const evidenceAgent = new Agent({
  role: "Payments Risk Analyst",
  goal:
    "Collect relevant underwriting evidence for a merchant without making unsupported assumptions.",
});

const underwriterAgent = new Agent({
  role: "Underwriting Decision Maker",
  goal:
    "Produce a compliant underwriting recommendation for payments using only supplied evidence and policy results.",
});

const gatherEvidenceTask = new Task({
  description:
    "Look up merchant evidence using the merchant_evidence_lookup tool and summarize only factual findings.",
  agent: evidenceAgent,
});

const decideTask = new Task({
  description:
    `Review the policy result and evidence. Return JSON with:
     decision (approve|review|decline),
     reason,
     required_controls,
     audit_notes.`,
  agent: underwriterAgent,
});

export async function underwritePaymentMerchant(requestJson: string) {
  const crew = new Crew({
    agents: [evidenceAgent, underwriterAgent],
    tasks: [gatherEvidenceTask, decideTask],
    process: Process.sequential,
    tools: [policyTool, evidenceTool],
    verbose: true,
  });

  const result = await crew.kickoff({ input: requestJson });
  return result;
}
```
If you want the full pattern in one place
The key is to run deterministic policy first, then let CrewAI explain the outcome with supporting evidence.
```typescript
import { UnderwriteRequestSchema } from "./schema";
import { Agent, Crew, Process, Task } from "crewai";

export async function underwritePaymentMerchant(requestBody: unknown) {
  const request = UnderwriteRequestSchema.parse(requestBody);
  const crewInput = JSON.stringify(request);

  const crew = new Crew({
    agents: [
      new Agent({
        role: "Payments Risk Analyst",
        goal: "Collect factual underwriting evidence for a payment merchant.",
      }),
      new Agent({
        role: "Underwriting Decision Maker",
        goal: "Issue a compliant underwriting recommendation with audit notes.",
      }),
    ],
    tasks: [
      new Task({
        description:
          "Run payment_policy_check on the input and summarize policy findings.",
      }),
      new Task({
        description:
          "Use all available evidence to return JSON with decision, reason, required_controls, audit_notes.",
      }),
    ],
    process: Process.sequential,
    verbose: true,
  });

  const result = await crew.kickoff({ input: crewInput });
  return result;
}
```
Note on deployment shape
In production you would keep policyTool backed by your risk service and evidenceTool backed by read-only APIs. The LLM should never be the source of truth for sanctions status or compliance thresholds.
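One way to enforce that read-only posture at the code level is to construct evidence requests through a helper that only ever emits GET calls against an allowlisted internal host. The base URL, path, and helper name here are assumptions for illustration:

```typescript
// Read-only evidence access: only GET, only the internal risk host.
const EVIDENCE_BASE_URL = "https://risk-internal.example.com";

function buildEvidenceRequest(merchantId: string): { method: "GET"; url: string } {
  return {
    method: "GET",
    // encodeURIComponent prevents path injection via hostile merchant IDs.
    url: `${EVIDENCE_BASE_URL}/merchants/${encodeURIComponent(merchantId)}/risk-history`,
  };
}
```

Whatever HTTP client you use, funneling all evidence lookups through a constructor like this makes "the agent can only read" a property of the code, not a convention.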
Production Considerations
- Audit logging
  - Persist raw input, normalized input, tool outputs, task outputs, model version, and final action.
  - In payments disputes or regulator reviews, you need to reconstruct why a merchant was approved or declined.
- Data residency
  - Keep EU merchant data in EU-hosted infrastructure if your obligations require it.
  - Do not send PII or KYB documents across regions unless your legal/compliance posture explicitly allows it.
- Guardrails
  - Force structured output like JSON with an allowlist of decisions.
  - Reject free-form approvals; every approval should include reason codes and controls.
- Monitoring
  - Track approval rate, manual-review rate, false declines, and sanctions-hit overrides ignored by humans.
  - Alert when model behavior drifts away from deterministic policy outcomes.
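The guardrail on structured output can be enforced with a small parser that sits between the crew and the decision store: parse the model's JSON, check the decision against an allowlist, and reject approvals without reason codes. The field names follow the task prompt above; the helper itself is an illustrative sketch:

```typescript
// Structured-output guardrail: validate model JSON before persisting or acting.
const ALLOWED_DECISIONS = ["approve", "review", "decline"] as const;
type Decision = (typeof ALLOWED_DECISIONS)[number];

interface UnderwritingOutput {
  decision: Decision;
  reason: string;
  required_controls: string[];
  audit_notes: string;
}

function parseModelOutput(raw: string): UnderwritingOutput {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("Model output was not valid JSON; route to human review");
  }
  const o = parsed as Record<string, unknown>;

  if (!ALLOWED_DECISIONS.includes(o.decision as Decision)) {
    throw new Error(`Decision "${String(o.decision)}" not in allowlist; route to human review`);
  }
  if (typeof o.reason !== "string" || o.reason.length === 0) {
    throw new Error("Output without reason codes is rejected");
  }

  return {
    decision: o.decision as Decision,
    reason: o.reason,
    required_controls: Array.isArray(o.required_controls)
      ? o.required_controls.map(String)
      : [],
    audit_notes: typeof o.audit_notes === "string" ? o.audit_notes : "",
  };
}
```

Anything that fails this parse falls through to the human review queue rather than becoming an automated approval.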
Common Pitfalls
- Letting the model decide core compliance facts
  - Don't ask the LLM whether a merchant is sanctioned or KYB verified.
  - Those values must come from authoritative systems through tools.
- No schema validation at ingress
  - If your payload is untyped junk, your agent will produce junk decisions.
  - Validate with `zod` or similar before calling `crew.kickoff()`.
- Missing human review path
  - Payments underwriting has edge cases that should not be auto-approved.
  - Route borderline cases like high chargeback ratios or cross-border volume spikes into analyst review instead of forcing a binary answer.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.