How to Build a Customer Support Agent Using CrewAI in TypeScript for Retail Banking

By Cyprian Aarons · Updated 2026-04-21
customer-support · crewai · typescript · retail-banking

A customer support agent for retail banking handles routine account questions, card disputes, fee explanations, branch information, and status updates on common requests. This use case matters because most of these interactions are high-volume, repetitive, and sensitive: if you get compliance, auditability, or data handling wrong, you create operational risk fast.

Architecture

  • Intent router
    • Classifies the customer request: balance question, dispute, fee reversal, card replacement, branch hours, or escalation.
  • Knowledge retrieval layer
    • Pulls from approved bank policies, product FAQs, and servicing playbooks.
    • Must only use curated sources; no open-ended web browsing for regulated answers.
  • Support agent
    • Uses CrewAI Agent to draft responses in a controlled tone.
    • Should be constrained to policy-backed answers and escalation rules.
  • Task orchestration
    • Uses Task objects to separate classification, retrieval, response drafting, and compliance review.
  • Compliance guardrail
    • Checks for prohibited content: promises about reversals, legal advice, or requests for full PAN/CVV/OTP.
  • Audit and logging
    • Stores prompt inputs, retrieved sources, model output, and final action for later review.
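Concretely, the components above share a few data shapes: the intent router's fixed label set and the record the audit layer stores per interaction. A minimal sketch in TypeScript (type and field names are illustrative, not from CrewAI):

```typescript
// Fixed label set emitted by the intent router.
const ALLOWED_INTENTS = [
  "balance_inquiry",
  "card_dispute",
  "fee_explanation",
  "card_replacement",
  "branch_info",
  "human_escalation",
] as const;

type SupportIntent = (typeof ALLOWED_INTENTS)[number];

// Shape of one audit-log entry (illustrative field names).
interface AuditRecord {
  messageHash: string;            // hash of the customer message, never raw PII
  intent: SupportIntent;
  retrievedDocumentIds: string[]; // curated policy sources only
  draft: string;
  reviewerDecision: "APPROVED" | "REJECTED";
}

// Type guard: anything outside the label set should route to escalation.
function isAllowedIntent(label: string): label is SupportIntent {
  return (ALLOWED_INTENTS as readonly string[]).includes(label);
}
```

Deriving `SupportIntent` from the `as const` array keeps the router, the tasks, and the audit schema agreeing on one label set.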

Implementation

  1. Install CrewAI and set up the model provider

    Use a model that your bank has approved for customer data handling. In practice that usually means an enterprise LLM endpoint with region controls and logging disabled at the provider level where required.

    npm install crewai @langchain/openai zod
    
    import { ChatOpenAI } from "@langchain/openai";
    
    export const llm = new ChatOpenAI({
      model: "gpt-4o-mini",
      apiKey: process.env.OPENAI_API_KEY,
      temperature: 0.2,
    });
    
  2. Define agents with narrow responsibilities

    Do not build one giant agent that “does everything.” Split classification and customer response into separate Agent instances so you can inspect each step and enforce policy at the task boundary.

    import { Agent } from "crewai";
    import { llm } from "./llm";
    
    export const intentAgent = new Agent({
      role: "Bank Support Intent Classifier",
      goal: "Classify retail banking customer issues into a fixed set of support intents.",
      backstory:
        "You work in a retail bank contact center. You only classify requests using approved labels.",
      llm,
      verbose: true,
    });
    
    export const supportAgent = new Agent({
      role: "Retail Banking Support Specialist",
      goal:
        "Draft accurate customer support responses using only approved policy context.",
      backstory:
        "You answer routine retail banking questions and escalate anything that needs human review.",
      llm,
      verbose: true,
    });
    
    export const complianceAgent = new Agent({
      role: "Bank Compliance Reviewer",
      goal:
        "Reject unsafe or non-compliant responses before they reach the customer.",
      backstory:
        "You enforce banking policy, privacy rules, audit requirements, and escalation criteria.",
      llm,
      verbose: true,
    });
    
  3. Create tasks for classification, response drafting, and review

    The key pattern is to keep outputs structured. For retail banking, I want the classifier to emit a strict label set and the response task to include source citations from internal policy text.

    import { Task } from "crewai";
    import { intentAgent, supportAgent, complianceAgent } from "./agents";
    
    const allowedIntents = [
      "balance_inquiry",
      "card_dispute",
      "fee_explanation",
      "card_replacement",
      "branch_info",
      "human_escalation",
    ] as const;
    
    export const classifyTask = new Task({
      description: `
        Classify this customer message into exactly one of:
        ${allowedIntents.join(", ")}.
        Customer message: "{message}"
        Return only the label.
      `,
      expectedOutput: "One allowed intent label.",
      agent: intentAgent,
    });
    
    export const draftReplyTask = new Task({
      description: `
        Write a concise customer support reply for this retail banking request.
        Use only the provided policy context and do not invent facts.
        Customer message: "{message}"
        Intent: "{intent}"
        Policy context: "{policyContext}"
        If the request requires identity verification or human review, say so clearly.
      `,
      expectedOutput:
        "A compliant customer-facing reply with no prohibited claims.",
      agent: supportAgent,
    });
    
    export const reviewTask = new Task({
      description: `
        Review this draft for retail banking compliance issues.
        Reject any mention of CVV/OTP/PIN requests, guaranteed outcomes,
        legal advice, or unsupported fee waivers.
        Draft reply: "{draft}"
        Return either APPROVED or REJECTED with a short reason.
      `,
      expectedOutput: "APPROVED or REJECTED plus reason.",
      agent: complianceAgent,
    });
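The review task returns free text ("APPROVED or REJECTED plus reason"), so I would normalize it before acting on it. A minimal fail-closed parser (an assumed helper, not part of CrewAI; anything that is not an explicit APPROVED is treated as a rejection):

```typescript
interface ReviewDecision {
  approved: boolean;
  reason: string;
}

// Parse the reviewer's free-text verdict into a structured decision.
function parseReview(output: string): ReviewDecision {
  const text = output.trim();
  // Fail closed: only an explicit leading APPROVED counts as approval.
  const approved = /^APPROVED\b/i.test(text);
  // Strip the verdict keyword and punctuation to keep only the reason.
  const reason = text.replace(/^(?:APPROVED|REJECTED)\b[:\s-]*/i, "").trim();
  return { approved, reason };
}
```

The same pattern applies to the classifier output: parse, validate against the allowed set, and fall back to `human_escalation` on anything unexpected.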
    
  4. Run the workflow with explicit policy context

    In production I would fetch policyContext from an internal knowledge base or document store that contains only approved servicing content. The point is that the model should answer from bank-owned material, not memory.

    import { Crew } from "crewai";
    import { intentAgent, supportAgent, complianceAgent } from "./agents";
    import { classifyTask, draftReplyTask, reviewTask } from "./tasks";
    
    async function handleCustomerMessage(message: string) {
      // Each crew must include the agent that owns its task; an empty
      // agents array leaves the task with nothing to execute it.
      const classifyCrew = new Crew({
        agents: [intentAgent],
        tasks: [classifyTask],
        verbose: true,
      });
    
      const classificationResult = await classifyCrew.kickoff({ message });
      const intent = String(classificationResult).trim();
    
      const policyContext =
        intent === "card_dispute"
          ? "Card disputes must be filed within policy windows. Do not promise chargeback outcomes."
          : intent === "fee_explanation"
          ? "Explain fees based on published account schedule only."
          : "Use standard retail servicing guidance.";
    
      const draftCrew = new Crew({
        agents: [supportAgent],
        tasks: [draftReplyTask],
        verbose: true,
      });
    
      const draftResult = await draftCrew.kickoff({
        message,
        intent,
        policyContext,
      });
    
      const draft = String(draftResult);
    
      const reviewCrew = new Crew({
        agents: [complianceAgent],
        tasks: [reviewTask],
        verbose: true,
      });
    
      const reviewResult = await reviewCrew.kickoff({ draft });
      return { intent, draft, reviewResult };
    }
    
    handleCustomerMessage("I was charged a monthly fee on my savings account.")
      .then(console.log)
      .catch(console.error);
    

Production Considerations

  • Data residency

    • Keep prompts and retrieved policy data inside approved regions.
    • If your bank requires EU-only or country-specific processing, enforce that at the model endpoint and vector store level.
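One simple enforcement point is to fail closed before constructing any client against an unapproved host. A sketch, assuming hypothetical internal hostnames (substitute your bank's approved enterprise endpoints):

```typescript
// Allow-list of endpoints that satisfy the residency requirement.
// Hostnames here are placeholders, not real infrastructure.
const APPROVED_ENDPOINT_HOSTS = new Set([
  "llm.eu-central.bank.internal",
  "vectors.eu-central.bank.internal",
]);

// Fail closed: refuse to build a client against an unapproved host.
function assertApprovedEndpoint(url: string): void {
  const host = new URL(url).hostname;
  if (!APPROVED_ENDPOINT_HOSTS.has(host)) {
    throw new Error(`Endpoint ${host} is outside the approved residency region`);
  }
}
```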
  • Auditability

    • Log the input message hash, retrieved document IDs, classified intent, generated draft, reviewer decision, and final response.
    • Do not log raw PANs, CVVs, OTPs, passwords, or full account numbers.
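One way to enforce the PAN rule at the logging boundary is a redaction pass over free text before it reaches any sink. A sketch (real deployments usually pair this with tokenization at ingress):

```typescript
// Runs of 12+ digits (with optional spaces or dashes) look like PANs or
// full account numbers; replace them before the text is logged.
// Short secrets (CVV, OTP, PIN) cannot be caught by length alone, so those
// belong in structured fields that are dropped at ingress instead.
const LONG_DIGIT_RUN = /(?:\d[ -]?){11,}\d/g;

function redactCardNumbers(text: string): string {
  return text.replace(LONG_DIGIT_RUN, "[REDACTED]");
}
```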
  • Guardrails

    • Block any output that asks for secret credentials or makes unsupported promises like “your fee will be reversed.”
    • Route anything involving fraud claims, complaints about discrimination, legal threats, or vulnerable customers to a human queue.
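These output-side checks should be deterministic, not another LLM call. A sketch of a rules pass (the patterns are crude by design and err toward flagging for human review; tune them against your own transcripts):

```typescript
// Deterministic guardrail patterns; assumed examples, not an exhaustive policy.
const CREDENTIAL_REQUEST = /\b(?:cvv|otp|pin|password|one-time code)\b/i;
const GUARANTEED_OUTCOME =
  /\b(?:will be (?:reversed|refunded|waived)|guaranteed?|we promise)\b/i;

// Returns the list of violated rules; an empty array means the reply may ship.
function guardrailViolations(reply: string): string[] {
  const violations: string[] = [];
  if (CREDENTIAL_REQUEST.test(reply)) violations.push("credential_mention");
  if (GUARANTEED_OUTCOME.test(reply)) violations.push("guaranteed_outcome");
  return violations;
}
```

Note the first pattern also flags benign mentions like "we will never ask for your OTP"; for a bank-facing filter that false-positive trade-off is usually acceptable.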
  • Monitoring

    • Track fallback rate to human agents by intent.
    • Watch for repeated compliance rejections; that usually means your policy context is incomplete or stale.
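The fallback-rate metric is simple to compute from handled-event records. A sketch, assuming a hypothetical event shape:

```typescript
// One record per handled customer message (illustrative shape).
interface HandledEvent {
  intent: string;
  escalatedToHuman: boolean;
}

// Human-handoff rate per intent; a rising rate for one intent usually
// means its policy context is incomplete or stale.
function fallbackRateByIntent(events: HandledEvent[]): Map<string, number> {
  const totals = new Map<string, { total: number; escalated: number }>();
  for (const e of events) {
    const t = totals.get(e.intent) ?? { total: 0, escalated: 0 };
    t.total += 1;
    if (e.escalatedToHuman) t.escalated += 1;
    totals.set(e.intent, t);
  }
  const rates = new Map<string, number>();
  for (const [intent, t] of totals) rates.set(intent, t.escalated / t.total);
  return rates;
}
```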

Common Pitfalls

  • Using one agent for classification and response

    • This makes debugging painful and increases hallucination risk.
    • Split tasks so each step has one job and one output format.
  • Letting the model answer without approved policy context

    • Retail banking answers need source-backed behavior.
    • Feed in curated internal content only; never rely on generic model knowledge for fees, disputes, or disclosures.
  • Skipping compliance review on “simple” requests

    • Fee explanations and card issues still carry regulatory risk.
    • Run every outbound response through a reviewer step or deterministic rules engine before sending it to customers.

By Cyprian Aarons, AI Consultant at Topiax.