CrewAI Tutorial (TypeScript): building custom tools for intermediate developers
This tutorial shows you how to build a CrewAI workflow in TypeScript with custom tools that your agents can call safely and predictably. You need this when the built-in tools stop being enough and you want your agent to query internal systems, validate business rules, or wrap a domain API without turning the agent into a free-form script runner.
What You'll Need
- Node.js 18+ installed
- A TypeScript project initialized with `npm init -y`
- CrewAI for TypeScript installed
- An OpenAI API key exported as `OPENAI_API_KEY`
- `zod` for input validation
- `dotenv` for local environment loading
- A valid `.env` file in your project root
Install the packages:
npm install @crew-ai/crew-ai zod dotenv
npm install -D typescript tsx @types/node
Step-by-Step
- Create a strict TypeScript setup and load environment variables first. This keeps tool code predictable and avoids runtime surprises when the agent starts calling your APIs.
// src/index.ts
import "dotenv/config";
import { z } from "zod";
import { Tool } from "@crew-ai/crew-ai";
const envSchema = z.object({
OPENAI_API_KEY: z.string().min(1),
});
const env = envSchema.parse(process.env);
console.log("Loaded API key:", env.OPENAI_API_KEY.slice(0, 8) + "...");
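The step above mentions a strict setup but doesn't show the compiler config. A `tsconfig.json` along these lines would do; this is a minimal sketch, and the exact options are my suggestion rather than anything mandated by CrewAI:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["src"]
}
```

`strict: true` is the part that matters for tool code: it forces you to handle `unknown` inputs explicitly instead of letting `any` leak through.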
- Build a custom tool with explicit input validation. The key pattern is: accept a small JSON payload, validate it, then return a string that the agent can reason over.
// src/index.ts
class CustomerLookupTool extends Tool {
name = "customer_lookup";
description =
"Look up a customer by email and return account status, tier, and last login.";
schema = z.object({
email: z.string().email(),
});
async run(input: unknown): Promise<string> {
const { email } = this.schema.parse(input);
const mockDb = {
"alice@acme.com": { status: "active", tier: "gold", lastLogin: "2026-04-18" },
"bob@acme.com": { status: "suspended", tier: "silver", lastLogin: "2026-03-02" },
};
const record = mockDb[email as keyof typeof mockDb];
return record ? JSON.stringify({ email, ...record }) : JSON.stringify({ email, found: false });
}
}
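Before wiring the tool into an agent, it helps to exercise the validate-then-lookup pattern in isolation. The sketch below reimplements the same logic without the CrewAI or zod dependencies so it runs standalone; the hand-rolled `isEmail` check is a stand-in for `z.string().email()`, not a production validator:

```typescript
// Standalone sketch of the validate-then-lookup pattern (no external deps).
type CustomerRecord = { status: string; tier: string; lastLogin: string };

const mockDb: Record<string, CustomerRecord> = {
  "alice@acme.com": { status: "active", tier: "gold", lastLogin: "2026-04-18" },
  "bob@acme.com": { status: "suspended", tier: "silver", lastLogin: "2026-03-02" },
};

// Minimal email check standing in for z.string().email().
function isEmail(value: unknown): value is string {
  return typeof value === "string" && /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

function customerLookup(input: unknown): string {
  const email =
    typeof input === "object" && input !== null
      ? (input as Record<string, unknown>).email
      : undefined;
  if (!isEmail(email)) {
    // Fail loudly on bad payloads instead of returning a misleading answer.
    throw new Error("customer_lookup: input.email must be a valid email string");
  }
  const record = mockDb[email];
  return record
    ? JSON.stringify({ email, ...record })
    : JSON.stringify({ email, found: false });
}

console.log(customerLookup({ email: "alice@acme.com" }));
// → {"email":"alice@acme.com","status":"active","tier":"gold","lastLogin":"2026-04-18"}
console.log(customerLookup({ email: "nobody@acme.com" }));
// → {"email":"nobody@acme.com","found":false}
```

Returning `found: false` as data, rather than throwing, lets the agent reason about a missing customer instead of crashing the run.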
- Add a second tool for policy checks. In production, this is where you wrap internal services like eligibility rules, claims validation, or risk checks instead of letting the model invent answers.
// src/index.ts
class RefundPolicyTool extends Tool {
name = "refund_policy_check";
description =
"Check whether a refund request is allowed based on amount and days since purchase.";
schema = z.object({
amountUsd: z.number().positive(),
daysSincePurchase: z.number().int().nonnegative(),
});
async run(input: unknown): Promise<string> {
const { amountUsd, daysSincePurchase } = this.schema.parse(input);
const allowed = amountUsd <= 500 && daysSincePurchase <= 30;
return JSON.stringify({
allowed,
reason: allowed
? "Meets standard refund policy"
: "Exceeds amount limit or time window",
amountUsd,
daysSincePurchase,
});
}
}
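The rule inside `RefundPolicyTool.run` is simple enough to pull out as a pure function, which makes the policy trivially unit-testable without any agent machinery. A quick sketch with the same thresholds:

```typescript
// Pure version of the refund rule, same thresholds as RefundPolicyTool.
function isRefundAllowed(amountUsd: number, daysSincePurchase: number): boolean {
  return amountUsd <= 500 && daysSincePurchase <= 30;
}

console.log(isRefundAllowed(120, 12)); // → true  (within both limits)
console.log(isRefundAllowed(750, 5));  // → false (over the $500 amount cap)
console.log(isRefundAllowed(120, 45)); // → false (outside the 30-day window)
console.log(isRefundAllowed(500, 30)); // → true  (boundary values are allowed)
```

When the real policy lives in an internal service, this function becomes the seam where the HTTP call goes, and the tool class stays a thin validation wrapper around it.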
- Wire the tools into an agent and give it one narrow job. Keep the role specific so the agent knows when to call tools instead of hallucinating an answer.
// src/index.ts
// In practice, hoist this import to the top of the file alongside the Tool import.
import { Agent, Task, Crew } from "@crew-ai/crew-ai";
const customerLookupTool = new CustomerLookupTool();
const refundPolicyTool = new RefundPolicyTool();
const supportAgent = new Agent({
name: "Support Analyst",
role: "Customer support analyst",
goal: "Answer refund eligibility questions using policy and customer data.",
backstory:
"You work in operations and must verify facts before responding.",
tools: [customerLookupTool, refundPolicyTool],
});
const task = new Task({
description:
"Check whether alice@acme.com is eligible for a $120 refund after 12 days.",
expectedOutput:
"A short decision with policy result and any relevant customer details.",
agent: supportAgent,
});
const crew = new Crew({
agents: [supportAgent],
tasks: [task],
});
- Execute the crew and print the result. This gives you a clean end-to-end test path before you swap the mock logic for real services.
// src/index.ts
async function main() {
const result = await crew.kickoff();
console.log("\nCrew result:\n");
console.log(result);
}
main().catch((error) => {
console.error(error);
process.exit(1);
});
Testing It
Run the file with `npx tsx src/index.ts`. If everything is wired correctly, the agent should call your tools and return a decision that includes both policy output and customer context.
If you get validation errors, check that your tool inputs match the Zod schemas exactly. That is usually where broken agent-tool integrations fail first.
If the model responds without using tools, tighten the task description and agent role so tool usage is clearly required. In production, I also log every tool call payload so I can inspect what the model tried to send.
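The logging mentioned above can be added without touching the tool classes by wrapping their run functions. This is a hedged sketch: `RunFn` mirrors the async `run(input)` shape used in the tools earlier, but the real CrewAI Tool interface may differ, so treat it as a pattern rather than the library's API:

```typescript
// Higher-order wrapper that logs every tool payload, result, and error.
type RunFn = (input: unknown) => Promise<string>;

function withLogging(name: string, run: RunFn): RunFn {
  return async (input) => {
    console.log(`[tool:${name}] input:`, JSON.stringify(input));
    try {
      const output = await run(input);
      console.log(`[tool:${name}] output:`, output);
      return output;
    } catch (error) {
      // Log failures too: malformed payloads are where integrations break first.
      console.error(`[tool:${name}] error:`, error);
      throw error;
    }
  };
}

// Usage with a stand-in tool function:
const echo: RunFn = async (input) => JSON.stringify(input);
const loggedEcho = withLogging("echo", echo);
```

Because the wrapper preserves the `RunFn` signature, the agent never knows the difference, and you get a full audit trail of what the model tried to send.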
Next Steps
- Replace the mock maps with real HTTP calls to internal services using `fetch`
- Add structured logging around every tool invocation and response
- Split one large agent into multiple specialized agents with separate tools
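The first of those steps can be sketched with Node 18+'s built-in `fetch`. The endpoint path and response shape below are hypothetical placeholders for whatever your internal service exposes:

```typescript
// Hypothetical endpoint: GET {baseUrl}/customers?email=...
function buildLookupUrl(baseUrl: string, email: string): string {
  return `${baseUrl}/customers?email=${encodeURIComponent(email)}`;
}

async function lookupCustomer(baseUrl: string, email: string): Promise<string> {
  const res = await fetch(buildLookupUrl(baseUrl, email));
  if (!res.ok) {
    // Surface HTTP failures as data the agent can reason over, not thrown errors.
    return JSON.stringify({ email, found: false, httpStatus: res.status });
  }
  return JSON.stringify({ email, ...(await res.json()) });
}
```

Keeping `buildLookupUrl` separate means the URL encoding (note `encodeURIComponent` on the email) stays testable without a live service.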
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.