CrewAI Tutorial (TypeScript): adding tool use for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add real tool use to a CrewAI TypeScript agent, then wire it into a workflow that can call external systems instead of hallucinating answers. You need this when your agent must fetch live data, query internal services, or perform bounded actions like lookups and calculations.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or a build step
  • crewai installed in your project
  • An LLM API key configured for the model provider you use
  • A .env file for secrets
  • Basic familiarity with CrewAI agents, tasks, and crews
  • A tool source you can call safely, such as:
    • HTTP APIs
    • database read endpoints
    • internal search services

Step-by-Step

  1. Start with a minimal TypeScript project and install the packages you need. For this example, we’ll use dotenv for env loading and zod for input validation inside the tool.
npm init -y
npm install crewai dotenv zod
npm install -D typescript ts-node @types/node
  2. Create a .env file and load your model credentials before running anything else. Keep tool inputs narrow; advanced agent setups fail when tools accept ambiguous parameters.
# .env
OPENAI_API_KEY=your_key_here

// index.ts
import "dotenv/config";
import { Agent, Crew, Task } from "crewai";
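It also helps to fail fast on a missing key before any agent is constructed, rather than letting the first model call error mid-run. A minimal sketch (requireEnv is an illustrative helper, not part of CrewAI):

```typescript
// Illustrative helper (not part of CrewAI): fail fast on missing secrets.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage, before constructing any agents:
// const apiKey = requireEnv("OPENAI_API_KEY");
```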
  3. Define a real tool as an async function with explicit validation. In CrewAI TypeScript, the agent can use tools that expose a name, description, and callable function; keep the contract strict so the model knows exactly when to invoke it.
import { z } from "zod";

const CustomerLookupInput = z.object({
  customerId: z.string().min(1),
});

export const customerLookupTool = {
  name: "customer_lookup",
  description: "Look up a customer by ID and return account status and risk tier.",
  async execute(input: unknown): Promise<string> {
    const { customerId } = CustomerLookupInput.parse(input);

    const mockDb = {
      "CUST-1001": { status: "active", riskTier: "low" },
      "CUST-2002": { status: "review", riskTier: "medium" },
    } as const;

    const record = mockDb[customerId as keyof typeof mockDb];
    if (!record) return `No customer found for ${customerId}`;

    return JSON.stringify({ customerId, ...record });
  },
};
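Before attaching the tool to an agent, it is worth smoke-testing the validate-then-lookup contract in isolation. A dependency-free sketch of the same logic (hand-rolled validation stands in for zod so it runs on its own; lookupCustomer and demoDb are illustrative names):

```typescript
type CustomerRecord = { status: string; riskTier: string };

// Same mock data as the tool above, keyed by quoted IDs.
const demoDb: Record<string, CustomerRecord> = {
  "CUST-1001": { status: "active", riskTier: "low" },
  "CUST-2002": { status: "review", riskTier: "medium" },
};

// Validate first, look up second: reject anything that is not a
// non-empty string customerId before touching the data source.
function lookupCustomer(input: unknown): string {
  const customerId = (input as { customerId?: unknown })?.customerId;
  if (typeof customerId !== "string" || customerId.length === 0) {
    throw new Error("customerId must be a non-empty string");
  }
  const record = demoDb[customerId];
  if (!record) return `No customer found for ${customerId}`;
  return JSON.stringify({ customerId, ...record });
}
```

The strict failure on malformed input is deliberate: a tool that silently accepts bad parameters teaches the model nothing about how to call it correctly.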
  4. Create an agent that is instructed to use the tool only when needed. The important part is not just attaching the tool; it’s telling the model what the tool is for and what not to do with it.
const supportAgent = new Agent({
  name: "Support Analyst",
  role: "Customer support analyst",
  goal: "Answer questions using customer lookup data when needed",
  backstory:
    "You are precise, avoid guessing, and always use tools for customer-specific facts.",
  tools: [customerLookupTool],
});
  5. Add a task that forces the agent to demonstrate tool usage in context. Give it an input that cannot be answered correctly without calling the tool.
const task = new Task({
  description:
    "A caller asks about customer CUST-2002. Use the customer_lookup tool and summarize the result clearly.",
  expectedOutput:
    "A short answer containing the customer's status and risk tier.",
  agent: supportAgent,
});
  6. Run the crew and inspect the output. If your environment is configured correctly, the agent should call the tool instead of inventing values.
async function main() {
  const crew = new Crew({
    agents: [supportAgent],
    tasks: [task],
  });

  const result = await crew.kickoff();
  console.log(result);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});

Testing It

Run the script with npx ts-node index.ts or compile first if your project uses tsc. The output should mention CUST-2002, along with review and medium, because those values only exist inside the tool.

If you get a generic answer without tool data, tighten the task prompt so it explicitly requires lookup-based facts. If you get validation errors, check that your tool input shape matches what the agent is sending.

For production testing, add one case where the ID exists and one where it does not. That verifies both success paths and fallback behavior.

Next Steps

  • Add multiple tools and test how CrewAI chooses between them based on descriptions.
  • Wrap real HTTP calls in your tools with timeouts, retries, and structured error handling.
  • Add guardrails around sensitive operations so agents can read data without being able to mutate it unless explicitly allowed.
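For the HTTP bullet above, one possible shape for the wrapper, sketched with Node 18+ globals (fetchWithTimeout and withRetries are illustrative names; tune the deadline and retry budget to your API):

```typescript
// Illustrative: abort a fetch that exceeds a deadline.
async function fetchWithTimeout(url: string, ms: number): Promise<string> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.text();
  } finally {
    clearTimeout(timer);
  }
}

// Illustrative: retry any async operation a bounded number of times,
// rethrowing the last error once the budget is exhausted.
async function withRetries<T>(fn: () => Promise<T>, retries: number): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Usage inside a tool's execute:
// const body = await withRetries(() => fetchWithTimeout(url, 5_000), 2);
```

Keeping the retry logic generic means the same wrapper covers every tool that touches the network, and errors surface to the agent as a single structured failure instead of a hang.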

By Cyprian Aarons, AI Consultant at Topiax.