CrewAI Tutorial (TypeScript): testing agents locally for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to run a CrewAI agent locally in TypeScript, hit it with a real test input, and verify the output without wiring up a full app. You need this when you want fast feedback on prompts, tools, and agent behavior before shipping anything into a backend or workflow.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project initialized with npm init -y
  • CrewAI TypeScript package installed
  • tsx for running TypeScript files directly
  • An OpenAI API key set in your environment
  • Basic familiarity with agents, tasks, and crews in CrewAI

Install the dependencies:

npm install @crewai/crewai dotenv
npm install -D typescript tsx @types/node

Create a .env file:

OPENAI_API_KEY=your_api_key_here

Step-by-Step

  1. Create a minimal TypeScript config so Node can run your source files cleanly. This keeps the setup simple and avoids fighting module resolution while you test locally.
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "resolveJsonModule": true,
    "outDir": "dist"
  },
  "include": ["src/**/*.ts"]
}
  2. Add a small local test script at src/test-agent.ts that loads your API key and prints the crew output. The point here is not to build an app yet; it's just to make sure the agent can execute end-to-end from your machine.
import 'dotenv/config';
import { Agent, Task, Crew } from '@crewai/crewai';

async function main() {
  const researcher = new Agent({
    role: 'Research Analyst',
    goal: 'Summarize user questions clearly',
    backstory: 'You are good at concise technical explanations.',
    verbose: true,
  });

  const task = new Task({
    description: 'Explain what local testing means for CrewAI agents in one short paragraph.',
    expectedOutput: 'A short paragraph explaining local testing.',
    agent: researcher,
  });

  const crew = new Crew({
    agents: [researcher],
    tasks: [task],
  });

  const result = await crew.kickoff();
  console.log('\n=== RESULT ===\n');
  console.log(result);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
  3. Add a package script so you can run the test repeatedly without extra typing. This is useful because local agent work is mostly iterate-run-adjust-repeat.
{
  "name": "crewai-local-test",
  "private": true,
  "type": "module",
  "scripts": {
    "test:agent": "tsx src/test-agent.ts"
  },
  "dependencies": {
    "@crewai/crewai": "^0.0.0",
    "dotenv": "^16.4.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "tsx": "^4.19.0",
    "typescript": "^5.6.0"
  }
}
  4. Run the agent locally and inspect both the logs and final answer. If verbose: true is set, you should see the agent reasoning flow in your terminal, which helps you spot prompt issues early.
npm run test:agent
  5. Tighten the test by making the output more deterministic for beginners. In practice, you want to validate structure first, then improve quality after the pipeline is stable.
import 'dotenv/config';
import { Agent, Task, Crew } from '@crewai/crewai';

async function main() {
  const agent = new Agent({
    role: 'Support Assistant',
    goal: 'Return short factual answers',
    backstory: 'You only answer with direct, practical guidance.',
    verbose: false,
  });

  const task = new Task({
    description:
      'Answer this in exactly two bullet points: How do I test a CrewAI agent locally?',
    expectedOutput: 'Two bullet points only.',
    agent,
  });

  const crew = new Crew({
    agents: [agent],
    tasks: [task],
  });

  const result = await crew.kickoff();
  console.log(String(result));
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});

Testing It

Run the script and confirm that it exits with code 0 and prints a usable response instead of an auth error or module error. If you see an OpenAI authentication failure, your .env file or shell environment is not being loaded correctly.

Next, change the task description to something specific to your domain, like claims triage or policy lookup, and rerun it three times. If outputs drift too much, tighten the expectedOutput, reduce ambiguity in the prompt, or add stronger role/backstory constraints.
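One cheap way to measure that drift is a structural validator you run over each output. The `isTwoBulletAnswer` helper below is an assumption of this tutorial, not part of any CrewAI API; it only checks the shape the task asked for:

```typescript
// Returns true when the output is exactly two bullet points.
// Accepts "-", "*", or "•" as bullet markers; blank lines are ignored.
export function isTwoBulletAnswer(output: string): boolean {
  const lines = output
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
  return lines.length === 2 && lines.every((line) => /^[-*•]\s+\S/.test(line));
}

// Quick manual check while iterating locally.
console.log(isTwoBulletAnswer('- point one\n- point two'));     // true
console.log(isTwoBulletAnswer('Here is a paragraph instead.')); // false
```

Run it against each of the three outputs: if the shape check fails on any run, fix the prompt before worrying about answer quality.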

A good local test also checks failure behavior. Temporarily remove OPENAI_API_KEY, rerun the command, and make sure your script fails fast with a clear error instead of hanging.
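A fail-fast guard can live at the top of your test script. This `requireEnv` helper is a sketch for that pattern, not a CrewAI utility:

```typescript
// Throws with a clear message when a required variable is missing or blank,
// so a misconfigured shell fails fast instead of hanging on the first API call.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at the top of src/test-agent.ts, before building any agents:
// const apiKey = requireEnv('OPENAI_API_KEY');
```

With the guard in place, the unset-key test should terminate immediately with the error message above rather than a timeout.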

Next Steps

  • Add tools to the agent and test one tool at a time before composing multiple tools
  • Write assertion-based tests around crew.kickoff() output using Vitest or Jest
  • Move from a single-agent crew to multi-agent workflows with explicit task handoffs

By Cyprian Aarons, AI Consultant at Topiax.
