# CrewAI Tutorial (TypeScript): Building Custom Tools for Advanced Developers
This tutorial shows you how to build custom CrewAI tools in TypeScript, wire them into agents, and verify they behave like production code instead of demo code. You need this when built-in tools stop being enough and your agent has to call internal APIs, validate business rules, or wrap a domain-specific workflow.
## What You'll Need
- Node.js 20+
- A TypeScript project with `ts-node` or `tsx`
- CrewAI for TypeScript installed in your project
- An OpenAI API key exported as `OPENAI_API_KEY`
- A valid project structure with `src/` and a `tsconfig.json`
- Basic familiarity with agents, tasks, and tools in CrewAI
## Step-by-Step
- Start by installing the packages you need. For a clean setup, keep the dependencies minimal and make sure your runtime can execute TypeScript directly.

```bash
npm install @crewai/crewai dotenv zod
npm install -D typescript tsx @types/node
```
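Because the entry point later uses top-level `await`, the project must run as ES modules. A minimal `tsconfig.json` that works with `tsx` might look like the sketch below (the field values are assumptions; adjust to your project), alongside `"type": "module"` in your `package.json`:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "rootDir": "src",
    "outDir": "dist"
  },
  "include": ["src"]
}
```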
- Create a custom tool by extending CrewAI's tool base class. The important part is that your tool has a stable name, a clear description, and strict input validation.
```typescript
// src/tools/customerLookupTool.ts
import { z } from "zod";
import { Tool } from "@crewai/crewai";

const inputSchema = z.object({
  customerId: z.string().min(1),
});

export class CustomerLookupTool extends Tool {
  name = "customer_lookup";
  description = "Look up a customer record by customer ID.";

  async execute(input: unknown): Promise<string> {
    const parsed = inputSchema.parse(input);
    // Stubbed record; replace with a real data-source call.
    const customer = {
      id: parsed.customerId,
      name: "Amina Patel",
      tier: "gold",
      status: "active",
    };
    return JSON.stringify(customer);
  }
}
```
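To see what the validation boundary buys you without spinning up an agent, here is a standalone sketch of the same validate-then-serialize contract. The `zod` schema and the `Tool` base class are swapped for a hand-rolled check so the sketch runs on its own; the helper names are mine, not the tutorial's:

```typescript
// Standalone mirror of the tool's contract: reject bad input loudly,
// return machine-readable JSON on success.
function parseLookupInput(input: unknown): { customerId: string } {
  if (typeof input !== "object" || input === null) {
    throw new Error("input must be an object");
  }
  const customerId = (input as Record<string, unknown>).customerId;
  if (typeof customerId !== "string" || customerId.length === 0) {
    throw new Error("customerId must be a non-empty string");
  }
  return { customerId };
}

async function executeLookup(input: unknown): Promise<string> {
  const { customerId } = parseLookupInput(input);
  // A real tool would query a data source here; this returns a stub record.
  return JSON.stringify({ id: customerId, tier: "gold", status: "active" });
}

// Valid input yields JSON; malformed input throws at the boundary.
executeLookup({ customerId: "CUST-1001" }).then(console.log);
// → {"id":"CUST-1001","tier":"gold","status":"active"}
```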
- Add a second tool that does something different so you can see how multiple tools fit together. In real systems this is where you wrap internal services, policy engines, or database-backed checks.
```typescript
// src/tools/policyCheckTool.ts
import { z } from "zod";
import { Tool } from "@crewai/crewai";

const policySchema = z.object({
  productCode: z.string().min(1),
  amount: z.number().positive(),
});

export class PolicyCheckTool extends Tool {
  name = "policy_check";
  description = "Validate whether an amount is allowed for a given product code.";

  async execute(input: unknown): Promise<string> {
    const parsed = policySchema.parse(input);
    // Hard-coded limit; replace with a call to a real policy engine.
    const approved = parsed.amount <= 10000;
    return JSON.stringify({
      productCode: parsed.productCode,
      amount: parsed.amount,
      approved,
      reason: approved ? "Within limit" : "Above threshold",
    });
  }
}
```
- Wire the tools into an agent and give the agent a task that forces tool use. Keep the instruction explicit; otherwise the model may answer from memory instead of calling your code.
```typescript
// src/index.ts
import "dotenv/config";
import { Agent, Crew, Task } from "@crewai/crewai";
import { CustomerLookupTool } from "./tools/customerLookupTool";
import { PolicyCheckTool } from "./tools/policyCheckTool";

const agent = new Agent({
  role: "Insurance Operations Analyst",
  goal: "Use tools to verify customer and policy data before responding.",
  backstory: "You work on operational checks for insurance workflows.",
  tools: [new CustomerLookupTool(), new PolicyCheckTool()],
});

const task = new Task({
  description:
    "Look up customer ID CUST-1001 and check whether product AUTO-PLATINUM can approve an amount of 8500. Return both results clearly.",
  expectedOutput:
    "A structured response containing the customer record and the policy decision.",
});

const crew = new Crew({
  agents: [agent],
  tasks: [task],
});

// Top-level await requires the project to run as an ES module.
const result = await crew.kickoff();
console.log(String(result));
```
- Run the workflow with a TypeScript runtime. If you are using `tsx`, execution stays simple and close to what you will do in CI later.

```bash
npx tsx src/index.ts
```
- Once it works locally, harden the tool boundary. Tools should fail loudly on invalid input, return machine-readable output, and avoid hidden side effects unless that is the point of the tool.
```typescript
// Example hardening pattern inside any tool.
async execute(input: unknown): Promise<string> {
  try {
    const parsed = inputSchema.parse(input);
    // Call the real service here.
    return JSON.stringify({ ok: true, input: parsed });
  } catch (error) {
    return JSON.stringify({
      ok: false,
      error: error instanceof Error ? error.message : String(error),
    });
  }
}
```
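Here is that envelope pattern as a runnable standalone function, with validation inlined so the sketch carries no framework dependency (the function name and the check are mine, for illustration only). The point is that the tool never throws across the agent boundary; failures come back as structured data the model can act on:

```typescript
// Error-envelope pattern: success and failure both return JSON,
// so the caller always gets a machine-readable result.
async function safeExecute(input: unknown): Promise<string> {
  try {
    if (typeof input !== "object" || input === null) {
      throw new Error("input must be an object");
    }
    return JSON.stringify({ ok: true, input });
  } catch (error) {
    return JSON.stringify({
      ok: false,
      error: error instanceof Error ? error.message : String(error),
    });
  }
}

safeExecute("not-an-object").then(console.log);
// → {"ok":false,"error":"input must be an object"}
```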
## Testing It
Run the script and confirm the agent returns both tool outputs instead of free-form guesses. You should see one JSON payload for the customer lookup and one for the policy check.
If the agent skips a tool, tighten the task description so it explicitly requires those calls. To confirm validation works, send malformed input on purpose once and check that Zod rejects the payload at the boundary.
For production work, add unit tests around each tool’s execute() method first. That gives you deterministic coverage without needing to spin up a full agent run every time.
## Next Steps
- Wrap real HTTP services inside tools using `fetch` plus retry logic.
- Add structured logging so every tool call includes request IDs and latency.
- Build a shared tool library for your team so agents reuse approved business logic instead of duplicating it.
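The retry idea above can be sketched as a small generic wrapper around any async call, `fetch` included. The helper name, attempt count, and backoff values are my own choices, not part of the tutorial's code; tune them for your services:

```typescript
// Generic retry with exponential backoff; wrap fetch (or any async
// call) with it inside a tool's execute method.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < attempts - 1) {
        // Back off 200 ms, 400 ms, 800 ms, ... before the next try.
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt),
        );
      }
    }
  }
  throw lastError;
}

// Hypothetical usage inside a tool (URL is illustrative):
// const res = await withRetry(() => fetch("https://internal.example/api/customer"));
```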
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.