CrewAI Tutorial (TypeScript): testing agents locally for advanced developers
This tutorial shows you how to run and test CrewAI agents locally in TypeScript without wiring them straight into production infrastructure. You’ll build a small local harness, stub the model layer where needed, and verify agent behavior before you connect it to real tools, queues, or customer data.
What You'll Need
- •Node.js 20+
- •npm 10+ or pnpm
- •A TypeScript project with `ts-node` or `tsx`
- •`crewai` installed in your project
- •An LLM API key for real runs:
  - •`OPENAI_API_KEY`, or
  - •whatever provider your CrewAI setup is configured to use
- •Optional for local-only testing:
  - •a mock server
  - •fixture files
  - •`dotenv` for environment management
Step-by-Step
- •Start with a clean TypeScript project and install the dependencies you need. Keep the local runtime simple: one entry file, one test harness, and environment variables loaded from `.env`.
mkdir crewai-local-test && cd crewai-local-test
npm init -y
npm install crewai dotenv
npm install -D typescript tsx @types/node
npx tsc --init
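With the packages installed, keep credentials out of your shell history with a `.env` file that `dotenv` loads at startup. The key name below assumes an OpenAI-backed setup, and the value is a placeholder.

```shell
# .env (placeholder value; use whichever variable your provider expects)
OPENAI_API_KEY=your-key-here
```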
- •Create a minimal agent and task definition in TypeScript. This example keeps the agent focused on one job so you can validate its behavior in isolation before adding tools or multi-agent orchestration.
// src/agent.ts
import "dotenv/config";
import { Agent } from "crewai";
export const supportAgent = new Agent({
role: "Insurance Claims Analyst",
goal: "Review claim summaries and identify missing information",
backstory: "You are strict about evidence, concise in output, and careful with policy language.",
verbose: true,
allowDelegation: false,
});
- •Add a local runner that executes the agent against a fixed prompt. The important part here is that the input is stable, so every run can be compared against a known expected output.
// src/run-local.ts
import "dotenv/config";
import { supportAgent as agent } from "./agent";
async function main() {
const result = await agent.execute(
"Claim summary: customer reports water damage on kitchen ceiling. Missing items: photos, repair invoice, incident date."
);
console.log("\n--- RESULT ---\n");
console.log(result);
}
main().catch((error) => {
console.error(error);
process.exit(1);
});
- •If you want repeatable local tests without hitting the model every time, wrap execution behind an adapter and swap it in tests. This lets you validate your application logic while keeping external calls out of unit tests.
// src/agent-runner.ts
import { Agent } from "crewai";
export async function runClaimReview(input: string) {
const agent = new Agent({
role: "Insurance Claims Analyst",
goal: "Review claim summaries and identify missing information",
backstory: "You are strict about evidence, concise in output, and careful with policy language.",
verbose: false,
allowDelegation: false,
});
return agent.execute(input);
}
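To make that boundary genuinely swappable, one option is to pass the executor in as a parameter rather than constructing the agent inside the function. The sketch below is plain TypeScript with no CrewAI calls; `ClaimExecutor`, `reviewClaim`, and the canned executor are illustrative names, and in a real run you would pass a function that wraps the agent’s execute call.

```typescript
// A minimal executor abstraction: anything that turns a prompt into text.
type ClaimExecutor = (input: string) => Promise<string>;

// Orchestration logic depends only on the abstraction, not on CrewAI.
async function reviewClaim(input: string, execute: ClaimExecutor): Promise<string> {
  const output = await execute(input);
  // Post-processing you want to unit test lives here, e.g. trimming noise.
  return output.trim();
}

// A canned executor for local tests: deterministic, no network calls.
const cannedExecutor: ClaimExecutor = async () =>
  "Missing: photos, repair invoice, incident date.\n";

async function demo() {
  const result = await reviewClaim("Claim summary: water damage.", cannedExecutor);
  console.log(result); // "Missing: photos, repair invoice, incident date."
}
demo();
```

In production code you pass the real executor; in tests you pass a canned one, so unit tests never touch the network.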
- •Create a simple test file around the runner. The version below calls the real runner, so treat it as an integration check; the right long-term boundary for advanced developers is to mock the runner and test your orchestration code locally, not the provider’s stochastic output.
// src/agent-runner.test.ts
import assert from "node:assert/strict";
import { test } from "node:test";
import { runClaimReview } from "./agent-runner";
// NOTE: integration-style test; this calls the live model unless the runner is mocked.
test("claim review returns actionable missing items", async () => {
const input =
"Claim summary: customer reports water damage on kitchen ceiling. Missing items: photos, repair invoice, incident date.";
const output = await runClaimReview(input);
assert.ok(typeof output === "string");
});
- •Run the local script first, then run tests with Node’s built-in test runner or your preferred framework. If you’re using real API calls during manual validation, keep a small set of prompts in fixtures so you can compare outputs across changes.
npx tsx src/run-local.ts
node --import tsx --test src/agent-runner.test.ts
Testing It
Verify three things before you move on. First, confirm the script boots without import errors and reads your environment correctly. Second, check that the agent returns structured text that matches your expected shape for a claims review workflow. Third, confirm the test file passes under Node’s built-in test runner.
If output quality varies too much between runs, reduce randomness at the provider level and tighten the prompt. For serious local testing, keep one set of deterministic fixtures for unit tests and one set of live prompts for manual smoke checks.
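For comparing live outputs across changes, a small normalizer keeps cosmetic variation from masking real drift. The rules below (case folding, whitespace collapsing, trailing punctuation) are assumptions to tune for your own output shape.

```typescript
// Normalize model output so cosmetic variation does not mask real changes.
function normalizeOutput(raw: string): string {
  return raw
    .toLowerCase()
    .replace(/\s+/g, " ") // collapse runs of whitespace
    .trim()
    .replace(/[.!]+$/, ""); // drop trailing punctuation
}

// Two replies that differ only cosmetically normalize to the same string.
const a = normalizeOutput("Missing items:  Photos, repair invoice.\n");
const b = normalizeOutput("missing items: photos, repair invoice");
console.log(a === b); // true
```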
Next Steps
- •Add tools to the agent and test tool boundaries separately from model behavior.
- •Introduce multi-agent workflows with isolated runners for each role.
- •Build snapshot tests around normalized outputs so prompt changes are visible in Git diffs.
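The snapshot idea in the last bullet can be sketched in a few lines: record the normalized output in a tracked file on first run, then fail when it drifts. The file naming and update flag here are illustrative.

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Compare a normalized agent output against a snapshot file tracked in Git.
// Returns true when the snapshot matches, or when it was just (re)recorded.
function checkSnapshot(name: string, normalized: string, update = false): boolean {
  const path = `${name}.snap.txt`;
  if (update || !existsSync(path)) {
    writeFileSync(path, normalized); // first run or explicit update: record it
    return true;
  }
  return readFileSync(path, "utf8") === normalized;
}
```

Commit the `.snap.txt` files so prompt changes surface as ordinary Git diffs in review.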
Keep learning
- •The complete AI Agents Roadmap — my full 8-step breakdown
- •Free: The AI Agent Starter Kit — PDF checklist + starter code
- •Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.