CrewAI Tutorial (TypeScript): adding cost tracking for beginners
This tutorial shows you how to add basic cost tracking to a CrewAI TypeScript project so you can measure token usage and estimate spend per run. You need this when your agents start making real API calls and you want a simple, reliable way to see what each task costs before the bill gets ugly.
What You'll Need
- Node.js 18+
- A TypeScript project with CrewAI already installed
- An OpenAI API key
- The `crewai`, `openai`, and `dotenv` packages
- A terminal that can run TypeScript via `tsx` or `ts-node`
- Basic familiarity with CrewAI agents, tasks, and crews
Step-by-Step
- Install the packages and set up environment variables. Keep the model choice explicit so your cost math is predictable.

```shell
npm install crewai openai dotenv
npm install -D typescript tsx @types/node
```

Then in your `.env` file:

```
OPENAI_API_KEY=your_openai_key_here
OPENAI_MODEL=gpt-4o-mini
OPENAI_INPUT_COST_PER_1M=0.15
OPENAI_OUTPUT_COST_PER_1M=0.60
```
- Create a small cost calculator. This keeps pricing logic out of your agent code and makes it easy to update later.

```typescript
// cost.ts
export type Usage = {
  inputTokens: number;
  outputTokens: number;
};

export function estimateCost(usage: Usage): number {
  const inputRate = Number(process.env.OPENAI_INPUT_COST_PER_1M ?? "0");
  const outputRate = Number(process.env.OPENAI_OUTPUT_COST_PER_1M ?? "0");
  const inputCost = (usage.inputTokens / 1_000_000) * inputRate;
  const outputCost = (usage.outputTokens / 1_000_000) * outputRate;
  return inputCost + outputCost;
}
```
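As a quick check that the math behaves as expected, here is a standalone sketch of the same calculation with the rates passed explicitly instead of read from the environment (the inline rates match the `.env` example above, and `estimateCostWithRates` is a name introduced for this example only):

```typescript
// Standalone version of the calculator with explicit rates, so the
// arithmetic is easy to verify without a .env file.
type Usage = { inputTokens: number; outputTokens: number };

function estimateCostWithRates(
  usage: Usage,
  inputRatePer1M: number,
  outputRatePer1M: number
): number {
  const inputCost = (usage.inputTokens / 1_000_000) * inputRatePer1M;
  const outputCost = (usage.outputTokens / 1_000_000) * outputRatePer1M;
  return inputCost + outputCost;
}

// 1,000 input tokens at $0.15/1M plus 500 output tokens at $0.60/1M
const cost = estimateCostWithRates(
  { inputTokens: 1_000, outputTokens: 500 },
  0.15,
  0.60
);
console.log(cost.toFixed(6)); // 0.000450
```

Working the numbers by hand once like this makes it much easier to trust the tracker output later.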
- Build a simple crew and capture token usage from the LLM response. In CrewAI TypeScript, you still structure work around agents and tasks, but for beginner cost tracking the practical part is logging usage from the model call itself.

```typescript
import "dotenv/config";
import OpenAI from "openai";
import { estimateCost } from "./cost";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const model = process.env.OPENAI_MODEL ?? "gpt-4o-mini";
  const response = await client.chat.completions.create({
    model,
    messages: [
      { role: "system", content: "You are a helpful insurance assistant." },
      { role: "user", content: "Summarize why policyholders should review deductibles." },
    ],
  });

  const usage = response.usage;
  if (!usage) throw new Error("No token usage returned by the API");

  const cost = estimateCost({
    inputTokens: usage.prompt_tokens,
    outputTokens: usage.completion_tokens,
  });

  console.log("Assistant:", response.choices[0]?.message?.content);
  console.log("Input tokens:", usage.prompt_tokens);
  console.log("Output tokens:", usage.completion_tokens);
  console.log("Estimated cost:", `$${cost.toFixed(6)}`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
- Wrap the execution in a reusable tracker so every run prints a clean summary. This is the pattern you want once you move from one-off scripts to multiple crews or scheduled jobs.

```typescript
// tracker.ts
import { estimateCost } from "./cost";

export type RunSummary = {
  name: string;
  inputTokens: number;
  outputTokens: number;
};

export function printRunSummary(summary: RunSummary) {
  const totalTokens = summary.inputTokens + summary.outputTokens;
  const cost = estimateCost({
    inputTokens: summary.inputTokens,
    outputTokens: summary.outputTokens,
  });

  console.log("\n--- Run Summary ---");
  console.log("Name:", summary.name);
  console.log("Input tokens:", summary.inputTokens);
  console.log("Output tokens:", summary.outputTokens);
  console.log("Total tokens:", totalTokens);
  console.log("Estimated cost:", `$${cost.toFixed(6)}`);
}
```
- Use the tracker in your main flow and keep the numbers alongside your logs. If you later add multiple tasks, store one summary per task so you can see which step is expensive.

```typescript
import "dotenv/config";
import OpenAI from "openai";
import { printRunSummary } from "./tracker";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const model = process.env.OPENAI_MODEL ?? "gpt-4o-mini";
  const response = await client.chat.completions.create({
    model,
    messages: [
      { role: "system", content: "You are a claims analyst." },
      { role: "user", content: "List three common reasons claims get delayed." },
    ],
    temperature: 0.2,
  });

  if (!response.usage) throw new Error("Missing usage data");

  printRunSummary({
    name: "claims-delay-analysis",
    inputTokens: response.usage.prompt_tokens,
    outputTokens: response.usage.completion_tokens,
  });
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
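Once you keep one summary per task, rolling them up into a crew-level total is a one-liner with `reduce`. This is a sketch only; `totalUsage` and the task names are illustrative, not CrewAI APIs:

```typescript
// Aggregate per-task summaries into one crew-level total.
// RunSummary mirrors the tracker type above.
type RunSummary = {
  name: string;
  inputTokens: number;
  outputTokens: number;
};

function totalUsage(summaries: RunSummary[]): RunSummary {
  return summaries.reduce(
    (acc, s) => ({
      name: acc.name,
      inputTokens: acc.inputTokens + s.inputTokens,
      outputTokens: acc.outputTokens + s.outputTokens,
    }),
    { name: "crew-total", inputTokens: 0, outputTokens: 0 }
  );
}

// Hypothetical numbers from two tasks in one crew run
const perTask: RunSummary[] = [
  { name: "research", inputTokens: 1200, outputTokens: 400 },
  { name: "write", inputTokens: 900, outputTokens: 1500 },
];

const total = totalUsage(perTask);
console.log(total); // { name: "crew-total", inputTokens: 2100, outputTokens: 1900 }
```

Feed `total` into `printRunSummary` and you get both the per-step breakdown and the overall spend from the same code path.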
Testing It
Run the script once with a short prompt and once with a longer prompt. You should see token counts increase on the longer run, and the estimated cost should move with it.
If `response.usage` is undefined, check that you are using a model that returns usage data and that your API call completed successfully. Also verify your `.env` file is loaded before creating the OpenAI client.
For a quick sanity check, change `OPENAI_INPUT_COST_PER_1M` and `OPENAI_OUTPUT_COST_PER_1M` to obviously different values like 10 and 20. The printed dollar amount should change immediately without touching application code.
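You can run that sanity check entirely in a scratch script. This sketch inlines a copy of the calculator and mutates `process.env` directly, which you would not do in real code, purely to show that the result tracks the environment values:

```typescript
// Inline copy of the cost calculator, for demonstration only.
function estimateCost(usage: { inputTokens: number; outputTokens: number }): number {
  const inputRate = Number(process.env.OPENAI_INPUT_COST_PER_1M ?? "0");
  const outputRate = Number(process.env.OPENAI_OUTPUT_COST_PER_1M ?? "0");
  return (
    (usage.inputTokens / 1_000_000) * inputRate +
    (usage.outputTokens / 1_000_000) * outputRate
  );
}

// Exactly one million tokens each way, so the cost equals the sum of the rates.
const usage = { inputTokens: 1_000_000, outputTokens: 1_000_000 };

process.env.OPENAI_INPUT_COST_PER_1M = "0.15";
process.env.OPENAI_OUTPUT_COST_PER_1M = "0.60";
console.log(estimateCost(usage)); // 0.75 with the rates above

// Swap in the obviously-different test rates...
process.env.OPENAI_INPUT_COST_PER_1M = "10";
process.env.OPENAI_OUTPUT_COST_PER_1M = "20";
console.log(estimateCost(usage)); // ...and the estimate jumps to 30
```

Because the rates are read at call time rather than at import time, the change takes effect on the very next call.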
Next Steps
- Add per-task cost aggregation so each CrewAI task writes its own usage record.
- Persist summaries to Postgres or SQLite instead of only logging to stdout.
- Add budget guards that stop a crew run when estimated spend crosses a threshold.
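A budget guard can be as simple as a running total checked before each call. The `BudgetGuard` class below is a minimal sketch of that idea, not a CrewAI feature; the dollar figures are illustrative:

```typescript
// Minimal budget guard: accumulate estimated spend and refuse to
// proceed once it crosses a hard limit.
class BudgetGuard {
  private spent = 0;
  constructor(private readonly limitUsd: number) {}

  // Call after each completed LLM call with its estimated cost.
  record(costUsd: number): void {
    this.spent += costUsd;
  }

  // Call before the next LLM call; throws if the budget is exhausted.
  checkBeforeNextCall(): void {
    if (this.spent >= this.limitUsd) {
      throw new Error(
        `Budget exceeded: $${this.spent.toFixed(4)} >= $${this.limitUsd.toFixed(4)}`
      );
    }
  }

  get total(): number {
    return this.spent;
  }
}

const guard = new BudgetGuard(0.01); // one-cent ceiling for this run
guard.record(0.004);
guard.checkBeforeNextCall(); // fine, still under budget
guard.record(0.007);
// guard.checkBeforeNextCall(); // would now throw: spend has passed $0.01
console.log(guard.total.toFixed(4)); // 0.0110
```

Throwing is the bluntest policy; in a scheduled job you might instead log a warning at 80% of budget and only hard-stop at 100%.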
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.