How to Build a KYC Verification Agent Using CrewAI in TypeScript for Fintech

By Cyprian Aarons · Updated 2026-04-21
Tags: kyc-verification, crewai, typescript, fintech

A KYC verification agent automates the first pass of customer due diligence: it collects identity data, checks documents, screens for sanctions/PEP/adverse media, and decides whether a case can be auto-approved or needs human review. For fintech, this matters because onboarding speed is tied directly to conversion, but every bad decision creates compliance risk, audit pain, and downstream fraud exposure.

Architecture

  • Input normalization layer

    • Accepts user-submitted identity payloads, document metadata, and jurisdiction context.
    • Converts messy onboarding data into a strict schema before any agent runs.
  • KYC research agent

    • Extracts and validates identity attributes from the application.
    • Cross-checks names, DOB, address, and document details against internal rules.
  • Screening agent

    • Checks sanctions, PEP, watchlists, and adverse media sources.
    • Produces a structured risk summary with confidence and match rationale.
  • Decision agent

    • Aggregates findings from research and screening.
    • Returns one of: approve, manual_review, or reject.
  • Audit logger

    • Persists every input, tool call, intermediate output, and final decision.
    • Needed for regulator queries, internal QA, and model governance.
  • Policy layer

    • Enforces jurisdiction-specific rules like retention limits, data residency, and escalation thresholds.
    • Prevents the agent from making unsupported decisions outside policy.
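Before wiring in CrewAI, it helps to see the flow above as plain functions. This is a minimal sketch of the architecture, not CrewAI code; every name here is illustrative:

```typescript
// Illustrative sketch of the pipeline layers above. No CrewAI APIs involved.
type Decision = "approve" | "manual_review" | "reject";

interface NormalizedApplication {
  customerId: string;
  jurisdiction: string;
}

interface ScreeningSummary {
  hit: boolean;
  lists: string[];
  rationale: string;
}

// Each stage is a narrow function so it can be audited independently.
function screen(app: NormalizedApplication): ScreeningSummary {
  // Placeholder: a real implementation calls sanctions/PEP/media sources.
  return { hit: false, lists: [], rationale: `no matches for ${app.customerId}` };
}

function decide(screening: ScreeningSummary): Decision {
  return screening.hit ? "manual_review" : "approve";
}

function runPipeline(app: NormalizedApplication): Decision {
  const screening = screen(app);
  const decision = decide(screening);
  // The audit logger would persist app, screening, and decision here.
  return decision;
}
```

Keeping each stage as a separate function mirrors the agent boundaries, so you can later swap any stage for a CrewAI task without restructuring the flow.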

Implementation

1) Install CrewAI for TypeScript and define your KYC schema

Your agent should never operate on loose JSON. Use a strict contract so every downstream step has predictable fields.

import { z } from "zod";

export const KycApplicationSchema = z.object({
  customerId: z.string(),
  fullName: z.string(),
  dateOfBirth: z.string(),
  nationality: z.string(),
  countryOfResidence: z.string(),
  documentType: z.enum(["passport", "national_id", "drivers_license"]),
  documentNumber: z.string(),
  submittedAt: z.string()
});

export type KycApplication = z.infer<typeof KycApplicationSchema>;

2) Create tasks with explicit outputs

CrewAI’s TypeScript API follows the same pattern you want in production: define agents, define tasks, then run a crew. Keep each task narrow so you can audit it independently.

import { Agent, Task, Crew } from "@crew-ai/crew";
import { KycApplicationSchema } from "./kyc-schema";

const kycResearcher = new Agent({
  name: "KYC Researcher",
  role: "Identity verification specialist",
  goal: "Validate customer identity fields against onboarding data",
});

const sanctionsScreeningAgent = new Agent({
  name: "Sanctions Screener",
  role: "Compliance screening specialist",
  goal: "Detect sanctions, PEP, and adverse media risk",
});

const decisionAgent = new Agent({
  name: "Decision Agent",
  role: "KYC case adjudicator",
  goal: "Return an onboarding decision based on policy",
});

const researchTask = new Task({
  description:
    "Review the provided KYC application and identify any inconsistencies in name, DOB, nationality, residence country, or document details.",
  expectedOutput:
    "A JSON object with fields: identityMatch:boolean, issues:string[], confidence:number",
  agent: kycResearcher,
});

const screeningTask = new Task({
  description:
    "Screen the applicant for sanctions/PEP/adverse media risk using the supplied case data.",
  expectedOutput:
    "A JSON object with fields: hit:boolean, lists:string[], rationale:string",
  agent: sanctionsScreeningAgent,
});

const decisionTask = new Task({
  description:
    "Combine identity verification and screening results. Return approve, manual_review, or reject with reasons.",
  expectedOutput:
    "A JSON object with fields: decision:'approve'|'manual_review'|'reject', reasons:string[]",
  agent: decisionAgent,
});

3) Wire the crew into a single execution path

This is the part most teams get wrong. They let the model freewheel instead of constraining inputs and outputs. Keep the orchestration simple and make the final result machine-readable.

async function runKycCheck(rawInput: unknown) {
  const application = KycApplicationSchema.parse(rawInput);

  const crew = new Crew({
    agents: [kycResearcher, sanctionsScreeningAgent, decisionAgent],
    tasks: [researchTask, screeningTask, decisionTask],
    verbose: true,
  });

  const result = await crew.kickoff({
    inputs: {
      application,
      policyJurisdiction: application.countryOfResidence,
      riskThresholds: {
        manualReviewOnAnyWatchlistHit: true,
        rejectOnConfirmedSanctionsMatch: true
      }
    }
  });

  return {
    customerId: application.customerId,
    result
  };
}

runKycCheck({
  customerId: "cus_12345",
  fullName: "Jane Doe",
  dateOfBirth: "1990-04-12",
  nationality: "ZA",
  countryOfResidence: "ZA",
  documentType: "passport",
  documentNumber: "AA1234567",
  submittedAt: new Date().toISOString()
}).then(console.log).catch(console.error);

4) Add deterministic policy enforcement outside the model

Do not ask the LLM to “decide compliance” without guardrails. Use code for hard rules like confirmed sanctions hits or missing mandatory fields.

type Decision = "approve" | "manual_review" | "reject";

interface ScreeningOutcome {
  hit: boolean;
  confirmedSanctionsMatch: boolean;
}

function applyPolicy(screening: ScreeningOutcome): Decision {
  // Hard rules live in versioned, testable code, not in the model.
  if (screening.confirmedSanctionsMatch) return "reject";
  if (screening.hit) return "manual_review";
  return "approve";
}
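The same principle covers the missing-mandatory-fields case mentioned above. A dependency-free sketch; the field list is illustrative, not a regulatory minimum:

```typescript
// Illustrative hard rule: escalate when mandatory fields are absent or empty.
const MANDATORY_FIELDS = ["fullName", "dateOfBirth", "documentNumber"] as const;

function missingMandatoryFields(app: Record<string, unknown>): string[] {
  return MANDATORY_FIELDS.filter(
    (field) => app[field] === undefined || app[field] === ""
  );
}

// Deterministic gate that runs before any model call.
function gate(app: Record<string, unknown>): "proceed" | "manual_review" {
  return missingMandatoryFields(app).length > 0 ? "manual_review" : "proceed";
}
```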

Production Considerations

  • Keep PII in-region

    • Store customer identity data in the same jurisdiction required by your regulator or banking partner.
    • If your CrewAI runtime calls external services for model inference or tools, verify cross-border transfer terms before sending passport numbers or addresses.
  • Log everything needed for audit

    • Persist raw input hashes, normalized payloads, task prompts, tool responses, final decisions, and timestamps.
    • Make logs immutable so compliance teams can reconstruct why a case was approved or escalated.
  • Add human-in-the-loop escalation

    • Any watchlist hit should route to manual review unless you have a legally approved deterministic matching engine.
    • The agent should recommend; your case management system should decide when uncertainty is high.
  • Set strict timeouts and retries

    • KYC onboarding cannot hang because a screening source is slow.
    • Use bounded retries for external tools and fail closed into manual review when dependencies are unavailable.
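The fail-closed timeout behavior can be sketched with `Promise.race`. `screenWithDeadline` is a hypothetical helper, not part of CrewAI:

```typescript
type Decision = "approve" | "manual_review" | "reject";

// Race an external screening call against a deadline; on timeout or error,
// fail closed into manual review instead of blocking onboarding.
async function screenWithDeadline(
  call: () => Promise<Decision>,
  timeoutMs: number
): Promise<Decision> {
  const deadline = new Promise<Decision>((resolve) =>
    setTimeout(() => resolve("manual_review"), timeoutMs)
  );
  try {
    return await Promise.race([call(), deadline]);
  } catch {
    // Dependency threw or rejected: also fail closed.
    return "manual_review";
  }
}
```

A case that lands in manual review because a screening source was down is recoverable; an auto-approval issued while the sanctions list was unreachable is not.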

Common Pitfalls

  1. Letting the model make policy decisions

    The agent can summarize evidence; it should not invent compliance rules. Put rejection thresholds in code so they are versioned and testable.

  2. Passing unvalidated onboarding payloads into tasks

    Free-form input leads to brittle prompts and bad matches. Validate with a schema like Zod before any CrewAI execution.

  3. Ignoring explainability requirements

    A plain “reject” is useless in fintech. Always capture which field mismatched, which list triggered a hit, and what rule caused escalation so auditors can review it later.
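One way to make explainability concrete is to attach an audit-friendly record to every decision. A sketch; the field names are assumptions, not a regulatory format:

```typescript
type Decision = "approve" | "manual_review" | "reject";

// Every decision carries the evidence that produced it.
interface DecisionRecord {
  customerId: string;
  decision: Decision;
  fieldMismatches: string[]; // e.g. ["dateOfBirth"]
  triggeredLists: string[];  // e.g. ["OFAC SDN"]
  ruleApplied: string;       // the versioned policy rule that fired
  decidedAt: string;         // ISO timestamp
}

function buildRecord(
  customerId: string,
  decision: Decision,
  evidence: Pick<DecisionRecord, "fieldMismatches" | "triggeredLists" | "ruleApplied">
): DecisionRecord {
  return { customerId, decision, ...evidence, decidedAt: new Date().toISOString() };
}
```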



By Cyprian Aarons, AI Consultant at Topiax.
