How to Build a Customer Support Agent Using CrewAI in TypeScript for Banking

By Cyprian Aarons · Updated 2026-04-21
Tags: customer-support, crewai, typescript, banking

A banking customer support agent handles routine questions like balance explanations, card disputes, fee breakdowns, and account access issues without pushing every interaction into a human queue. The point is not just deflection; it’s consistent answers, faster resolution, and an audit trail that satisfies compliance when a customer later asks, “Who said what, and why?”

Architecture

  • Customer intake layer

    • Receives chat, email, or in-app messages.
    • Normalizes the request into a structured ticket with customer ID, channel, and intent.
  • Policy and compliance guardrail

    • Blocks requests that require regulated handling.
    • Enforces rules around KYC, AML, fraud escalation, and PII exposure.
  • CrewAI agent

    • Uses Agent to classify the issue and draft the response.
    • Uses Task to constrain output to approved banking support actions.
  • Bank knowledge retrieval

    • Pulls from approved sources only: product docs, fee schedules, dispute policies, and FAQ content.
    • Keeps the model from inventing policy.
  • Human escalation path

    • Routes high-risk cases to a human queue.
    • Handles complaints, fraud claims, chargebacks, account closure requests, and legal language.
  • Audit and observability

    • Logs prompt inputs, tool calls, outputs, policy decisions, and handoff reasons.
    • Stores records in a compliant region with retention controls.
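The intake and audit layers above can be sketched as plain TypeScript types. These shapes and field names are illustrative assumptions for this article, not part of CrewAI:

```typescript
// Illustrative shapes for the intake and audit layers described above.
// Field names are assumptions, not CrewAI APIs.
export type SupportTicket = {
  customerId: string;
  channel: "chat" | "email" | "in_app";
  intent?: string; // filled in after classification
  message: string;
  receivedAt: string; // ISO timestamp
};

export type AuditRecord = {
  ticketId: string;
  promptVersion: string;
  policyDecision: "auto" | "escalate";
  handoffReason?: string;
  loggedAt: string;
};

// Normalize a raw channel message into a structured ticket.
export function normalizeTicket(
  customerId: string,
  channel: SupportTicket["channel"],
  message: string,
): SupportTicket {
  return {
    customerId,
    channel,
    message: message.trim(),
    receivedAt: new Date().toISOString(),
  };
}
```

Having one normalized ticket shape means every downstream layer, from guardrails to audit logging, reads the same fields regardless of channel.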

Implementation

1) Install the dependencies

You need CrewAI plus a TypeScript runtime setup. In production I keep the agent thin and push bank-specific logic into tools and policy checks.

npm install @crewai/crewai zod dotenv
npm install -D typescript tsx @types/node

Set your environment variables for model access and any internal services:

CREWAI_API_KEY=your_key
OPENAI_API_KEY=your_key
BANK_KB_URL=https://internal-bank-kb.example.com

2) Define the support agent with strict behavior

The important part is constraining the agent. For banking support, it should answer only from approved context and escalate anything that touches regulated workflows.

import "dotenv/config";
import { Agent } from "@crewai/crewai";

export const bankingSupportAgent = new Agent({
  role: "Banking Customer Support Specialist",
  goal: "Resolve routine customer support questions using approved bank policy and escalate regulated or risky cases.",
  backstory:
    "You are a bank support specialist trained on product FAQs, fee schedules, card servicing rules, and complaint handling procedures.",
  verbose: true,
  allowDelegation: false,
});

This is intentionally narrow. If you let the agent improvise on fraud or disputes, you will create compliance noise fast.

3) Create tasks for classification and response drafting

For banking support I separate intent detection from response generation. That makes audit easier and keeps escalation logic deterministic.

import { Task } from "@crewai/crewai";
import { bankingSupportAgent } from "./agent";

export const classifyTask = new Task({
  description: `
Classify this customer message into one of:
- balance_inquiry
- card_issue
- fee_question
- dispute_request
- fraud_report
- kyc_update
- complaint
- other

Return JSON only with:
{
  "intent": string,
  "risk": "low" | "medium" | "high",
  "needs_human": boolean,
  "reason": string
}

Customer message:
{message}
`,
  expectedOutput: "Strict JSON classification for routing.",
  agent: bankingSupportAgent,
});

export const responseTask = new Task({
  description: `
Draft a customer-facing reply using only approved bank policy context.
Do not mention internal systems.
Do not request full card numbers or passwords.
If the issue is high risk or regulated, instruct escalation instead of resolving it.

Customer message:
{message}

Approved context:
{context}
`,
  expectedOutput: "A concise banking-safe response.",
  agent: bankingSupportAgent,
});

4) Run the crew with a simple routing layer

This is where production code matters. You do not want the LLM deciding whether fraud gets handled automatically. Your router should make that call first.

import { Crew } from "@crewai/crewai";
import { classifyTask, responseTask } from "./tasks";

type SupportRequest = {
  message: string;
};

function shouldEscalate(intent: string): boolean {
  return ["dispute_request", "fraud_report", "kyc_update", "complaint"].includes(intent);
}

export async function handleSupportRequest(req: SupportRequest) {
  const classificationCrew = new Crew({
    agents: [classifyTask.agent!],
    tasks: [classifyTask],
    verbose: true,
  });

  const classificationResult = await classificationCrew.kickoff({
    inputs: { message: req.message },
  });

  const parsed = JSON.parse(String(classificationResult));
  
  if (parsed.needs_human || shouldEscalate(parsed.intent)) {
    return {
      route: "human",
      reason: parsed.reason,
      message:
        "Thanks for reaching out. A specialist needs to review this request before we can continue.",
    };
  }

  const approvedContext =
    "- Card replacement takes 5 to 7 business days.\n" +
    "- Fee waiver requests can be reviewed once per quarter.\n" +
    "- Balance inquiries can be answered without account changes.";

  const responseCrew = new Crew({
    agents: [responseTask.agent!],
    tasks: [responseTask],
    verbose: true,
  });

  const responseResult = await responseCrew.kickoff({
    inputs: {
      message: req.message,
      context: approvedContext,
    },
  });

  return {
    route: "agent",
    reply: String(responseResult),
    audit: {
      intent: parsed.intent,
      risk: parsed.risk,
      reason: parsed.reason,
    },
  };
}

A few things to notice here:

  • The router handles hard escalation rules outside the model.
  • The model only drafts responses after policy approval.
  • The audit object captures why the request was routed a certain way.

Production Considerations

  • Keep data residency explicit

    Banking data often cannot leave approved regions. Pin your model endpoints, logs, vector stores, and backups to compliant jurisdictions.

  • Log everything needed for audit

    Store prompt version, retrieved policy snippets, task outputs, routing decisions, timestamps, and human handoff reasons. If compliance asks why a fee waiver was denied or escalated, you need evidence.

  • Add guardrails before generation

    Redact PANs, CVVs, passwords, OTPs, national IDs, and full account numbers before they hit the model. Use deterministic filters first; do not rely on the LLM to self-censor.

  • Use approval-based retrieval

    Only index bank-owned documentation that has been reviewed by legal or operations. Never let the agent browse arbitrary web content for financial policy answers.
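The "guardrails before generation" point is concrete enough to sketch. A deterministic pre-filter might combine a digit-run regex with a Luhn check so legitimate reference numbers are less likely to be masked. The patterns below are illustrative starting points, not a complete PII solution:

```typescript
// Luhn checksum: true for digit strings that could be real card numbers.
function passesLuhn(digits: string): boolean {
  let sum = 0;
  let double = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = digits.charCodeAt(i) - 48;
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

// Mask 13-19 digit runs (allowing spaces/dashes) that pass Luhn, plus
// CVV/PIN/OTP-like fields, before the text ever reaches the model.
// Deterministic, no LLM involved.
export function redact(text: string): string {
  let out = text.replace(/\b\d(?:[ -]?\d){12,18}\b/g, (run) => {
    const digits = run.replace(/[ -]/g, "");
    return passesLuhn(digits) ? "[REDACTED_PAN]" : run;
  });
  out = out.replace(/\b(cvv|cvc|pin|otp)\s*:?\s*\d{3,6}\b/gi, "$1: [REDACTED]");
  return out;
}
```

Run `redact` on the transcript before classification and before logging, so neither the model context nor the audit trail ever contains a full card number.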

Common Pitfalls

  1. Letting the model decide on regulated cases

    • Mistake: asking the agent to “handle everything.”
    • Fix: hard-code escalation for disputes, fraud reports, complaints, KYC changes, sanctions-related language, and legal threats.
  2. Using unvetted knowledge sources

    • Mistake: connecting the agent to broad search results or stale internal docs.
    • Fix: build a curated knowledge base with versioned policy documents and approval workflow ownership.
  3. Skipping auditability

    • Mistake: only storing final replies.
    • Fix: persist classification output, retrieved context IDs, prompt version hashes, tool calls if any are added later, and human override actions.
  4. Ignoring PII handling

    • Mistake: passing raw chat transcripts into prompts.
    • Fix: redact sensitive fields before inference and mask them in logs. For banking support agents this is not optional; it is table stakes for compliance.
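The prompt version hashes mentioned in pitfall 3 can be as simple as hashing the task description at deploy time, so every audit record pins the exact prompt text that produced a reply. This is a sketch using Node's built-in crypto module, not a CrewAI feature:

```typescript
import { createHash } from "node:crypto";

// Hash the prompt template so audit records can prove which version ran.
// Truncated to 12 hex chars for readable log lines; collisions are
// acceptable here because this is a label, not a security control.
export function promptVersionHash(template: string): string {
  return createHash("sha256").update(template, "utf8").digest("hex").slice(0, 12);
}
```

Store the hash alongside the classification output and routing decision; when compliance asks why a reply said what it said, you can reproduce the exact prompt.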

By Cyprian Aarons, AI Consultant at Topiax.