CrewAI Tutorial (TypeScript): building prompt templates for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build reusable prompt templates in CrewAI with TypeScript, then wire them into agents and tasks without hardcoding prompts all over your codebase. You need this when your team starts reusing the same instruction patterns across multiple agents, workflows, or customer segments and you want those prompts versioned, testable, and easy to maintain.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or a build step
  • CrewAI TypeScript package installed
  • An OpenAI API key set in your environment
  • Basic familiarity with Agent, Task, and Crew
  • A .env file or shell env vars for secrets

Install the packages:

npm install @crewai/core dotenv
npm install -D typescript ts-node @types/node

Step-by-Step

  1. Start by creating a small prompt template utility (this tutorial assumes it lives in prompts/system.ts). Keep templates as functions so you can inject variables safely instead of concatenating strings in random places.
export type PromptVars = {
  productName: string;
  audience: string;
  tone: string;
};

export function buildSystemPrompt(vars: PromptVars): string {
  return [
    `You are an expert assistant for ${vars.productName}.`,
    `Write for ${vars.audience}.`,
    `Use a ${vars.tone} tone.`,
    `Be precise, structured, and avoid generic advice.`,
  ].join("\n");
}
  2. Next, create a task prompt template (assumed here to live in prompts/task.ts) for the actual work you want the agent to do. This keeps the task-specific instruction separate from the system-style guidance.
type TaskVars = {
  useCase: string;
  outputFormat: string;
};

export function buildTaskPrompt(vars: TaskVars): string {
  return [
    `Create a response for this use case: ${vars.useCase}.`,
    `Return the output in this format: ${vars.outputFormat}.`,
    `Include concrete steps and implementation details.`,
    `Do not mention that you are using a template.`,
  ].join("\n");
}
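Because both builders are pure functions, you can sanity-check them without any API calls. Here is a minimal standalone sketch (the builders are inlined so it runs on its own, and the variable values are made up for illustration):

```typescript
// Inlined copies of the Step 1 and Step 2 builders so this snippet runs standalone.
type PromptVars = { productName: string; audience: string; tone: string };
type TaskVars = { useCase: string; outputFormat: string };

function buildSystemPrompt(vars: PromptVars): string {
  return [
    `You are an expert assistant for ${vars.productName}.`,
    `Write for ${vars.audience}.`,
    `Use a ${vars.tone} tone.`,
    `Be precise, structured, and avoid generic advice.`,
  ].join("\n");
}

function buildTaskPrompt(vars: TaskVars): string {
  return [
    `Create a response for this use case: ${vars.useCase}.`,
    `Return the output in this format: ${vars.outputFormat}.`,
    `Include concrete steps and implementation details.`,
    `Do not mention that you are using a template.`,
  ].join("\n");
}

// Quick sanity check: every injected variable should appear in the rendered prompt.
const system = buildSystemPrompt({
  productName: "DemoApp",
  audience: "developers",
  tone: "direct",
});
const task = buildTaskPrompt({
  useCase: "summarize release notes",
  outputFormat: "markdown",
});

console.log(system.includes("DemoApp")); // true
console.log(task.split("\n").length);    // 4
```

Checks like these run in milliseconds and catch broken interpolation long before a crew ever calls the model.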
  3. Now wire those templates into a CrewAI agent and task. The important part is that the agent role and goal stay stable while the prompt content changes through your template functions.
import "dotenv/config";
import { Agent, Task, Crew } from "@crewai/core";
import { buildSystemPrompt } from "./prompts/system.js";
import { buildTaskPrompt } from "./prompts/task.js";

const systemPrompt = buildSystemPrompt({
  productName: "ClaimsOps Assistant",
  audience: "insurance operations analysts",
  tone: "direct",
});

const taskPrompt = buildTaskPrompt({
  useCase: "summarize a claims intake workflow",
  outputFormat: "markdown with headings and bullets",
});

const analyst = new Agent({
  role: "Insurance Workflow Analyst",
  goal: "Produce practical workflow guidance",
  backstory: systemPrompt,
});

const task = new Task({
  description: taskPrompt,
  expectedOutput: "A clear markdown workflow summary",
  agent: analyst,
});
  4. Add the crew execution entry point. This is where your template values become runtime inputs, which makes it easy to reuse the same code path across different products or teams.
async function main() {
  const crew = new Crew({
    agents: [analyst],
    tasks: [task],
    verbose: true,
  });

  const result = await crew.kickoff();
  console.log(result);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
  5. If you want templates that scale across multiple teams, add validation before building prompts. This prevents empty strings and bad inputs from producing weak prompts that are hard to debug later.
function assertNonEmpty(value: string, name: string): void {
  if (!value.trim()) {
    throw new Error(`${name} cannot be empty`);
  }
}

const vars = {
  productName: process.env.PRODUCT_NAME ?? "",
  audience: process.env.AUDIENCE ?? "",
  tone: process.env.TONE ?? "",
};

assertNonEmpty(vars.productName, "PRODUCT_NAME");
assertNonEmpty(vars.audience, "AUDIENCE");
assertNonEmpty(vars.tone, "TONE");

Testing It

Run the script with your environment variables set and confirm that the agent output reflects your template values instead of generic instructions. You should see the crew produce content that matches the audience, tone, and output format you passed into the builders.

A good test is to change only one variable, like tone, and verify that the output changes in a predictable way. If it does not, your prompt is probably too vague or your variables are not actually being injected.

Also check that invalid inputs fail fast before calling the model. That saves money and makes template bugs easier to catch during development.

Next Steps

  • Add templating for multi-agent workflows with shared prompt fragments
  • Store templates in versioned files and test them with snapshot tests
  • Build a small prompt registry so product teams can reuse approved instructions
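The snapshot-test idea above can be approximated in a few lines. This in-memory sketch shows the core mechanism; a real setup would persist snapshots to versioned files instead of a Map:

```typescript
// In-memory sketch of snapshot testing for prompts: record the approved
// rendering once, then fail if a template change alters it unexpectedly.
const approved = new Map<string, string>();

function checkSnapshot(name: string, rendered: string): boolean {
  if (!approved.has(name)) {
    approved.set(name, rendered); // first run records the snapshot
    return true;
  }
  return approved.get(name) === rendered;
}

const render = (tone: string) => `Use a ${tone} tone.`;
console.log(checkSnapshot("system", render("direct"))); // true (recorded)
console.log(checkSnapshot("system", render("direct"))); // true (matches)
console.log(checkSnapshot("system", render("chatty"))); // false (drifted)
```

A failing snapshot forces a human to confirm that a prompt change was intentional, which is exactly the review gate you want once multiple teams share templates.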


By Cyprian Aarons, AI Consultant at Topiax.

