How to Build an Underwriting Agent Using CrewAI in TypeScript for Banking

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, crewai, typescript, banking

An underwriting agent in banking takes a loan or credit application, gathers the right facts, checks policy constraints, and produces a decision package for a human underwriter or an automated approval flow. It matters because the bank needs faster decisions without losing control over compliance, auditability, and risk policy enforcement.

Architecture

  • Application intake layer

    • Normalizes borrower data from CRM, LOS, PDFs, and API payloads.
    • Validates required fields before any agent work starts.
  • Policy retrieval layer

    • Pulls underwriting rules from an internal knowledge base or policy store.
    • Keeps model behavior anchored to approved bank policy, not free-form reasoning.
  • Risk analysis agent

    • Reviews income, debt ratios, collateral, KYC flags, and exposure limits.
    • Produces structured risk findings with citations to source data.
  • Compliance review agent

    • Checks AML/KYC gaps, adverse action triggers, fair lending constraints, and missing disclosures.
    • Flags cases that require manual review.
  • Decision orchestration layer

    • Combines outputs from multiple agents into a single underwriting memo.
    • Enforces approval thresholds and escalation rules.
  • Audit and logging layer

    • Stores prompts, tool calls, retrieved policy snippets, and final outputs.
    • Supports internal audit, model governance, and regulator requests.
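The layered architecture above can be sketched as a set of TypeScript contracts. The type and function names here are illustrative assumptions, not CrewAI APIs; the point is that the decision orchestration layer combines structured findings from both agents into one memo with deterministic escalation rules.

```typescript
// Illustrative contracts for the layers above; names are assumptions, not CrewAI APIs.
interface RiskFindings {
  debtToIncomeRatio: number;
  riskGrade: "LOW" | "MEDIUM" | "HIGH";
  keyDrivers: string[];
  recommendation: "APPROVE" | "REFER" | "DECLINE";
}

interface ComplianceFindings {
  kycIssue: boolean;
  amlFlag: boolean;
  fairLendingConcern: boolean;
  escalationRequired: boolean;
  notes: string[];
}

// The decision orchestration layer merges both into one underwriting memo.
interface UnderwritingMemo {
  applicantId: string;
  risk: RiskFindings;
  compliance: ComplianceFindings;
  routing: "AUTO_APPROVE" | "MANUAL_REVIEW";
}

function buildMemo(
  applicantId: string,
  risk: RiskFindings,
  compliance: ComplianceFindings
): UnderwritingMemo {
  // Escalation rule: any compliance flag or non-APPROVE recommendation forces manual review.
  const clean =
    risk.recommendation === "APPROVE" &&
    !compliance.kycIssue &&
    !compliance.amlFlag &&
    !compliance.fairLendingConcern &&
    !compliance.escalationRequired;
  return { applicantId, risk, compliance, routing: clean ? "AUTO_APPROVE" : "MANUAL_REVIEW" };
}
```

A memo shaped like this is also what lands in the audit layer, so keeping it strongly typed pays off twice.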

Implementation

1) Install CrewAI and define your underwriting inputs

For TypeScript, keep the input contract strict. Banking workflows fail when the agent gets sloppy JSON.

// package.json dependencies:
// {
//   "dependencies": {
//     "crewai": "^0.1.0",
//     "zod": "^3.23.8",
//     "dotenv": "^16.4.5"
//   }
// }

import { z } from "zod";

const UnderwritingInputSchema = z.object({
  applicantId: z.string(),
  annualIncome: z.number().positive(),
  monthlyDebt: z.number().nonnegative(),
  requestedAmount: z.number().positive(),
  creditScore: z.number().int().min(300).max(850),
  kycStatus: z.enum(["passed", "pending", "failed"]),
  country: z.string(),
});

export type UnderwritingInput = z.infer<typeof UnderwritingInputSchema>;
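Since the risk task later asks the model for a debt-to-income ratio, it helps to compute the same figure deterministically in code so you can cross-check the model's arithmetic. A minimal sketch, assuming `annualIncome` and `monthlyDebt` follow the schema above (the helper name is my own):

```typescript
// Deterministic cross-check: compute DTI in code rather than trusting model arithmetic.
// Assumes annualIncome and monthlyDebt as defined in UnderwritingInputSchema (sketch only).
export function debtToIncomeRatio(input: {
  annualIncome: number;
  monthlyDebt: number;
}): number {
  const monthlyIncome = input.annualIncome / 12;
  return input.monthlyDebt / monthlyIncome;
}
```

If the model's reported ratio diverges from this value beyond a small tolerance, treat that as a signal to refer the case rather than auto-route it.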

2) Create agents with explicit banking roles

Use separate agents for risk and compliance. That separation makes it easier to test controls and explain decisions later.

import { Agent } from "crewai";

export const riskAgent = new Agent({
  role: "Senior Credit Risk Analyst",
  goal: "Assess repayment capacity and identify credit risk factors using bank policy.",
  backstory:
    "You are a banking credit analyst who writes concise underwriting memos with evidence.",
});

export const complianceAgent = new Agent({
  role: "Bank Compliance Officer",
  goal: "Detect AML/KYC gaps, policy violations, and adverse action risks.",
  backstory:
    "You review applications against bank policy and regulatory constraints.",
});

3) Define tasks that force structured output

The key pattern is to ask for machine-readable output. In production, your downstream system should not parse prose to decide approve/decline.

import { Task } from "crewai";

export function buildTasks(input: UnderwritingInput) {
  const riskTask = new Task({
    description: `
Analyze this loan application for repayment risk.

Applicant:
${JSON.stringify(input, null, 2)}

Return JSON with:
- debtToIncomeRatio
- riskGrade (LOW|MEDIUM|HIGH)
- keyDrivers (array of strings)
- recommendation (APPROVE|REFER|DECLINE)
`,
    expectedOutput: "Strict JSON underwriting risk assessment.",
    agent: riskAgent,
    asyncExecution: false,
  });

  const complianceTask = new Task({
    description: `
Check this application for banking compliance issues.

Applicant:
${JSON.stringify(input, null, 2)}

Return JSON with:
- kycIssue (true|false)
- amlFlag (true|false)
- fairLendingConcern (true|false)
- escalationRequired (true|false)
- notes (array of strings)
`,
    expectedOutput: "Strict JSON compliance assessment.",
    agent: complianceAgent,
    asyncExecution: false,
  });

  return { riskTask, complianceTask };
}
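Even when you ask for strict JSON, models sometimes wrap it in markdown fences or commentary. A defensive extractor (the helper name is my own, not a CrewAI API) pulls out the first JSON object and fails loudly instead of guessing:

```typescript
// Pulls a JSON object out of model output that may include fences or prose.
// Illustrative helper; pair it with zod parsing of the result in production.
export function extractJson<T>(raw: string): T {
  const fenced = raw.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const candidate = fenced ? fenced[1] : raw;
  const start = candidate.indexOf("{");
  const end = candidate.lastIndexOf("}");
  if (start === -1 || end === -1 || end < start) {
    throw new Error("No JSON object found in task output");
  }
  return JSON.parse(candidate.slice(start, end + 1)) as T;
}
```

Throwing here is deliberate: an unparseable task output should halt the pipeline and route to manual review, never silently default.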

4) Orchestrate the crew and enforce decision rules in code

This is where you keep the bank in control. The agent suggests; your deterministic logic decides whether the case can auto-route or must escalate.

import { Crew } from "crewai";
import { UnderwritingInputSchema } from "./schema";
import { buildTasks } from "./tasks";

export async function underwrite(rawInput: unknown) {
  const input = UnderwritingInputSchema.parse(rawInput);
  const { riskTask, complianceTask } = buildTasks(input);

  const crew = new Crew({
    agents: [riskTask.agent!, complianceTask.agent!],
    tasks: [riskTask, complianceTask],
    verbose: true,
    process: "sequential",
  });

  const result = await crew.kickoff();

  // In production parse each task output separately if your CrewAI version exposes task outputs.
  // Keep final approval logic outside the LLM.
  return {
    applicantId: input.applicantId,
    crewResult: result,
    decisionRule:
      input.creditScore >= 680 &&
      input.kycStatus === "passed" &&
      input.requestedAmount <= input.annualIncome * 0.35
        ? "AUTO_APPROVE"
        : "MANUAL_REVIEW",
    auditMeta: {
      modelWorkflow: "CrewAI underwriting v1",
      residencyRegion: process.env.DATA_RESIDENCY_REGION ?? "unknown",
    },
  };
}
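The inline `decisionRule` ternary above is easier to unit-test when extracted into a pure function. The thresholds below mirror the sketch values from that snippet (680 score, 35% of income); they are illustrative, not real bank policy:

```typescript
type Routing = "AUTO_APPROVE" | "MANUAL_REVIEW";

// Pure, deterministic routing rule. Thresholds are illustrative, not bank policy.
export function routeApplication(input: {
  creditScore: number;
  kycStatus: "passed" | "pending" | "failed";
  requestedAmount: number;
  annualIncome: number;
}): Routing {
  const withinExposure = input.requestedAmount <= input.annualIncome * 0.35;
  return input.creditScore >= 680 && input.kycStatus === "passed" && withinExposure
    ? "AUTO_APPROVE"
    : "MANUAL_REVIEW";
}
```

Because the function is pure, your control-testing suite can enumerate boundary cases (a 679 score, a pending KYC, an amount one dollar over the cap) without touching any LLM.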

Production Considerations

  • Keep decisioning deterministic

    • Use CrewAI for analysis and summarization.
    • Use hard-coded policy thresholds in TypeScript for final approve/decline routing.
  • Log everything needed for audit

    • Store prompts, retrieved policy text versions, task outputs, timestamps, user IDs, and model version.
  • Respect data residency

    • Route sensitive borrower data to approved regions only.
    • Avoid sending raw PII to non-approved vendors or cross-border inference endpoints.
  • Add guardrails before execution

    • Redact SSNs, account numbers, and tax IDs before tasks run.
    • Block outputs that contain unsupported advice like pricing overrides or prohibited basis decisions.
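The redaction guardrail can be sketched as pattern masking applied before any task description is built. The patterns below are US-centric examples and deliberately crude; production redaction should use a vetted PII detection library rather than hand-rolled regexes:

```typescript
// Masks common US identifier formats before text reaches a model.
// Illustrative patterns only; use a vetted PII library in production.
export function redactPii(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN REDACTED]")   // SSN: 123-45-6789
    .replace(/\b\d{2}-\d{7}\b/g, "[EIN REDACTED]")         // EIN: 12-3456789
    .replace(/\b\d{9,17}\b/g, "[ACCOUNT REDACTED]");       // bare account-number-length digit runs
}
```

Run this over every field that feeds a prompt, and log the redacted version, not the original, into the audit store.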

Common Pitfalls

  • Letting the LLM make the final credit decision

    This is a governance problem. The model should produce evidence; your policy engine should make the final call.

  • Using one generic agent for everything

    Risk analysis and compliance review need different prompts and failure modes. Split them so you can test each control independently.

  • Skipping structured output validation

    If you accept free-form text, your downstream workflow becomes brittle. Validate task output against a schema before storing or acting on it.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

