How to Build a KYC Verification Agent Using AutoGen in TypeScript for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
Tags: kyc-verification, autogen, typescript, wealth-management

A KYC verification agent for wealth management takes client onboarding data, checks it against policy and external sources, flags missing evidence, and produces a decision-ready summary for compliance. It matters because private banking and advisory firms cannot afford slow onboarding, inconsistent reviews, or weak audit trails when dealing with high-net-worth clients, beneficial ownership, source-of-funds checks, and jurisdiction-specific requirements.

Architecture

  • Intake layer

    • Accepts structured client data: identity docs, address proof, tax residency, beneficial owners, source of wealth/funds.
    • Normalizes inputs before the agent sees them.
  • KYC policy engine

    • Encodes firm rules: required fields by jurisdiction, PEP/sanctions escalation thresholds, document freshness windows.
    • Keeps deterministic checks outside the LLM.
  • AutoGen orchestrator

    • Uses AssistantAgent for reasoning and UserProxyAgent for tool execution / human-in-the-loop review.
    • Coordinates evidence gathering and decision drafting.
  • Verification tools

    • Document OCR/parser
    • Sanctions/PEP lookup
    • Internal CRM / account history lookup
    • Residency and tax classification service
  • Audit logger

    • Stores prompts, tool calls, outputs, timestamps, reviewer actions.
    • Required for model risk management and regulator review.
  • Decision output

    • Produces one of: approve, reject, needs_manual_review.
    • Includes rationale mapped to policy clauses.
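As a sketch, the decision output described above could be typed like this. The field names and the clause-ID format are illustrative assumptions, not a fixed contract:

```typescript
// Illustrative shape for the decision output; field names are assumptions.
type KycDecision = {
  decision: "approve" | "reject" | "needs_manual_review";
  reasons: string[];        // each reason cites the policy clause it relies on
  policyClauses: string[];  // e.g. "wm-kyc-v3.4 §4.2" (hypothetical clause ID)
  missingItems: string[];
  riskFlags: string[];
};

const example: KycDecision = {
  decision: "needs_manual_review",
  reasons: ["UBO disclosure below 100% of ownership"],
  policyClauses: ["wm-kyc-v3.4 §4.2"],
  missingItems: ["beneficial_owner_2_passport"],
  riskFlags: ["incomplete_ubo"],
};
```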

Implementation

1) Install AutoGen and define the KYC payload

For TypeScript, use the AutoGen package that exposes AssistantAgent and UserProxyAgent. Keep your KYC payload explicit; don’t let the model infer missing compliance fields.

npm install @autogenai/autogen openai zod

import { z } from "zod";

export const KycInputSchema = z.object({
  clientId: z.string(),
  fullName: z.string(),
  countryOfResidence: z.string(),
  taxResidency: z.array(z.string()),
  documentIds: z.array(z.string()),
  beneficialOwners: z.array(
    z.object({
      name: z.string(),
      ownershipPercent: z.number(),
    })
  ),
});

export type KycInput = z.infer<typeof KycInputSchema>;

2) Wire the AutoGen agents

The pattern here is simple: the assistant reasons over evidence, while the user proxy executes tools and can stop for human approval. For wealth management, this split matters because you want deterministic controls around sanctions hits and enhanced due diligence.

import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

const kycAnalyst = new AssistantAgent({
  name: "kyc_analyst",
  systemMessage: [
    "You are a KYC analyst for a wealth management firm.",
    "Follow policy strictly.",
    "Never approve without sufficient evidence.",
    "If sanctions/PEP or missing beneficial ownership data exists, escalate to manual review.",
    "Return JSON with decision, reasons, missing_items, and risk_flags.",
  ].join(" "),
});

const complianceReviewer = new UserProxyAgent({
  name: "compliance_reviewer",
  humanInputMode: "NEVER",
});

3) Add deterministic tools before asking the model to decide

Use tools for facts. The agent should not invent sanctions results or doc validity. In production you’d replace these stubs with your internal services or vendor APIs.

type ToolResult = {
  ok: boolean;
  data?: unknown;
};

async function lookupSanctions(name: string): Promise<ToolResult> {
  // Replace with vendor API call
  return { ok: true, data: { hit: false } };
}

async function fetchClientDocuments(documentIds: string[]): Promise<ToolResult> {
  // Replace with document service / OCR pipeline
  return {
    ok: true,
    data: documentIds.map((id) => ({ id, status: "verified", expiresAt: "2027-01-01" })),
  };
}

async function getKycEvidence(input: KycInput) {
  const [sanctions, docs] = await Promise.all([
    lookupSanctions(input.fullName),
    fetchClientDocuments(input.documentIds),
  ]);

  return {
    client: input,
    sanctions,
    docs,
    policyVersion: "wm-kyc-v3.4",
    jurisdictionRules: ["residency_check", "ubo_required", "source_of_funds_required"],
  };
}

4) Run the agent and force a structured response

This is the actual workflow pattern you want in a regulated environment. The agent gets evidence plus policy context and returns a structured decision that your app can validate before persisting.

import { KycInputSchema } from "./schemas";

async function runKycVerification(rawInput: unknown) {
  const input = KycInputSchema.parse(rawInput);
  const evidence = await getKycEvidence(input);

  const task = `
Assess this wealth-management KYC case using the supplied evidence only.

Return valid JSON:
{
  "decision": "approve" | "reject" | "needs_manual_review",
  "reasons": string[],
  "missing_items": string[],
  "risk_flags": string[]
}

Evidence:
${JSON.stringify(evidence)}
`;

  const result = await complianceReviewer.initiateChat(
    kycAnalyst,
    task,
    { maxTurns: 2 }
  );

  return result;
}

If you want stronger guardrails, validate the model output again with Zod before writing it to your case management system:

const DecisionSchema = z.object({
  decision: z.enum(["approve", "reject", "needs_manual_review"]),
  reasons: z.array(z.string()),
  missing_items: z.array(z.string()),
  risk_flags: z.array(z.string()),
});

Production Considerations

  • Data residency

    • Keep client PII inside approved regions.
    • If your firm operates across Switzerland, UK, EU, and Singapore entities, route cases to region-specific model endpoints and storage buckets.
  • Auditability

    • Persist prompt version, policy version, tool outputs, final decision, and reviewer overrides.
    • Regulators care about why a case was approved as much as the approval itself.
  • Guardrails

    • Block auto-approval when beneficial ownership is incomplete.
    • Force manual review on PEP matches, adverse media hits, or source-of-funds ambiguity.
    • Never let the LLM make final decisions on its own.
  • Monitoring

    • Track false positive rates on sanctions screening.
    • Measure time-to-onboard by jurisdiction.
    • Alert on drift in rejection reasons or repeated manual overrides by reviewers.
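The guardrail bullets above are deterministic by design, so they can live in plain code that runs after the model. A minimal sketch; the field names and thresholds are illustrative, not firm policy:

```typescript
type Decision = "approve" | "reject" | "needs_manual_review";

// Hypothetical guardrail facts; wire these to your screening and UBO services.
interface GuardrailFacts {
  modelDecision: Decision;
  uboCoveragePercent: number;      // disclosed beneficial ownership, summed
  pepHit: boolean;
  adverseMediaHit: boolean;
  sourceOfFundsDocumented: boolean;
}

// The model's decision never bypasses these checks; they can only tighten it.
function applyGuardrails(f: GuardrailFacts): Decision {
  if (f.pepHit || f.adverseMediaHit) return "needs_manual_review";
  if (!f.sourceOfFundsDocumented) return "needs_manual_review";
  if (f.modelDecision === "approve" && f.uboCoveragePercent < 100) {
    return "needs_manual_review"; // incomplete UBO blocks auto-approval
  }
  return f.modelDecision;
}
```

Because this runs after the agent and can only downgrade a decision, the LLM never holds final authority on a case.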

Common Pitfalls

  1. Letting the model infer missing compliance facts

    • Avoid this by keeping sanctions checks, doc validation, and residency rules in tools or policy code.
    • The agent should summarize facts, not manufacture them.
  2. Skipping structured output validation

    • Don’t trust free-form text from an LLM in a regulated workflow.
    • Validate responses with Zod before storing them or sending them to downstream systems.
  3. Ignoring jurisdiction-specific rules

    • A private bank onboarding a UAE resident with offshore entities has different obligations than a domestic retail client.
    • Encode rules per jurisdiction and entity type; do not use one global prompt for all cases.
  4. Weak audit trails

    • If you cannot reconstruct which documents were checked and which policy version was applied, your workflow is not production-ready.
    • Log everything needed for internal audit and external examination.
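Pitfall 3 usually disappears once jurisdiction rules live in data rather than in the prompt. A minimal sketch; the document lists and thresholds below are placeholders, not real regulatory values:

```typescript
interface JurisdictionRules {
  requiredDocs: string[];
  uboThresholdPercent: number; // disclose owners at or above this stake
}

// Placeholder rule table keyed by ISO country code; values are illustrative.
const rulesByJurisdiction: Record<string, JurisdictionRules> = {
  CH: { requiredDocs: ["passport", "proof_of_address", "form_a"], uboThresholdPercent: 25 },
  SG: { requiredDocs: ["passport", "proof_of_address", "tax_residency_cert"], uboThresholdPercent: 25 },
  AE: { requiredDocs: ["passport", "proof_of_address", "source_of_funds", "entity_chart"], uboThresholdPercent: 15 },
};

const DEFAULT_RULES: JurisdictionRules = {
  requiredDocs: ["passport", "proof_of_address"],
  uboThresholdPercent: 25,
};

// Unknown jurisdictions fall back to a conservative default, not an open door.
function rulesFor(countryCode: string): JurisdictionRules {
  return rulesByJurisdiction[countryCode] ?? DEFAULT_RULES;
}
```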

By Cyprian Aarons, AI Consultant at Topiax.