How to Fix 'context length exceeded during development' in CrewAI (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

What the error means

The error "context length exceeded during development" usually means CrewAI tried to send too much text to the model at once. In TypeScript projects, it shows up when you pass large task outputs, long chat history, or oversized tool results into an agent prompt.

You’ll typically hit it during local development when chaining tasks, reusing memory, or stuffing raw documents into task.description or expectedOutput.

The Most Common Cause

The #1 cause is passing full documents, logs, or previous task outputs directly into the next agent’s context. CrewAI then builds a prompt that exceeds the model’s token limit and throws a context window error.

Here’s the broken pattern versus the fixed one:

| Broken | Fixed |
| --- | --- |
| Pass entire raw output forward | Summarize, extract, or truncate before reuse |
| Chain large strings into task descriptions | Keep prompts small and specific |
| Let memory accumulate everything | Reset or scope memory per workflow |
import { Agent, Task } from "crewai";

// BROKEN: raw output is reused as-is
const analyst = new Agent({
  role: "Analyst",
  goal: "Analyze claims notes",
  backstory: "You analyze insurance claims.",
});

const reviewer = new Agent({
  role: "Reviewer",
  goal: "Review analysis",
  backstory: "You validate analysis quality.",
});

const longClaimsNotes = await fetchHugeClaimsDocument(); // thousands of lines

const task1 = new Task({
  description: `Analyze this document:\n\n${longClaimsNotes}`,
  agent: analyst,
});

const result1 = await task1.execute();

// This is where things blow up
const task2 = new Task({
  description: `Review the previous analysis and compare it to this entire output:\n\n${result1}`,
  agent: reviewer,
});
import { Agent, Task } from "crewai";

// FIXED: pass only the relevant slice or summary
const analyst = new Agent({
  role: "Analyst",
  goal: "Analyze claims notes",
  backstory: "You analyze insurance claims.",
});

const reviewer = new Agent({
  role: "Reviewer",
  goal: "Review analysis",
  backstory: "You validate analysis quality.",
});

const longClaimsNotes = await fetchHugeClaimsDocument();
const trimmedNotes = longClaimsNotes.slice(0, 12000); // better yet: chunk it

const task1 = new Task({
  description: `Extract only:
- claim type
- key dates
- loss amount
- missing info

Text:
${trimmedNotes}`,
  agent: analyst,
});

const result1 = await task1.execute();

const summaryOnly = result1.summary ?? result1.output?.slice(0, 2000);

const task2 = new Task({
  description: `Review this summary for correctness:\n\n${summaryOnly}`,
  agent: reviewer,
});

The fix is not “increase the limit” first. The fix is to stop feeding the model junk it doesn’t need.
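One way to enforce that discipline is a single, framework-agnostic guard applied to every string before it enters a task description. `trimForPrompt` below is an illustrative helper, not a CrewAI API, and the ~4-characters-per-token ratio is a rough heuristic for English text, not a real tokenizer:

```typescript
// Hypothetical helper (not part of CrewAI): cap any string before it is
// interpolated into a task description. ~4 chars/token is a rough heuristic.
function trimForPrompt(text: string, maxTokens = 3000): string {
  const maxChars = maxTokens * 4;
  if (text.length <= maxChars) return text;
  return text.slice(0, maxChars) + "\n\n[...truncated to fit context budget]";
}
```

Wrap every `${...}` interpolation in your task descriptions with a guard like this and the worst overflows never reach the model.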

Other Possible Causes

1. Tool output is too large

A tool that returns a full HTML page, PDF text dump, or database export can blow up your context immediately.

// BAD
tools: [
  async () => {
    return await getAllPolicyDocuments(); // huge payload
  },
];

// GOOD
tools: [
  async () => {
    const docs = await getAllPolicyDocuments();
    return docs.slice(0, 3).map(d => ({
      id: d.id,
      title: d.title,
      excerpt: d.excerpt.slice(0, 1000),
    }));
  },
];

2. Memory is carrying too much conversation history

If you keep a long-running Crew with memory enabled, older messages accumulate and eventually exceed the model window.

// BAD
const crew = new Crew({
  agents,
  tasks,
  memory: true,
});

// GOOD
const crew = new Crew({
  agents,
  tasks,
  memory: false, // or scope memory per case/session
});

If you need memory, store summaries instead of full transcripts.
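CrewAI does not expose its memory store through a public TypeScript API like this, but the idea can be sketched as a bounded buffer: keep a fixed window of truncated messages instead of a full transcript. `BoundedMemory` and its defaults are illustrative assumptions:

```typescript
// Sketch only: a memory wrapper that keeps a bounded window of truncated
// messages, so history can never outgrow the model's context window.
type Message = { role: string; content: string };

class BoundedMemory {
  private messages: Message[] = [];

  constructor(
    private maxMessages = 10,
    private maxCharsPerMessage = 2000,
  ) {}

  add(msg: Message): void {
    this.messages.push({
      role: msg.role,
      content: msg.content.slice(0, this.maxCharsPerMessage),
    });
    if (this.messages.length > this.maxMessages) {
      this.messages.shift(); // drop the oldest message first
    }
  }

  toPrompt(): string {
    return this.messages.map((m) => `${m.role}: ${m.content}`).join("\n");
  }
}
```

The same pattern extends naturally to storing a rolling summary in place of the dropped messages.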

3. Prompt templates are overly verbose

Sometimes the issue is just bloated instructions. A giant system prompt plus a giant user prompt equals failure.

// BAD
description: `
You are an expert insurance operations assistant.
Follow these rules...
[200 lines of policy text]
[500 lines of examples]
Now analyze this claim...
`,

// GOOD
description: `
Extract claim facts from the input.
Return JSON with:
- claimId
- incidentDate
- coverageStatus
- missingFields
`,

4. You are concatenating multiple outputs before one final call

This happens in pipelines where every step appends more text to one string.

// BAD
let context = "";
for (const output of outputs) {
  context += "\n\n" + output;
}

// GOOD
const latestRelevantOutput = outputs.at(-1);
const compressedContext = summarizeOutputs(outputs);
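`summarizeOutputs` above is not a CrewAI built-in. One cheap, model-free way to sketch it is to keep only a head and tail slice of each step's output:

```typescript
// Illustrative implementation of summarizeOutputs: compress each step's
// output to a head slice plus (for long outputs) a tail slice.
function summarizeOutputs(outputs: string[], perOutputChars = 500): string {
  const half = Math.floor(perOutputChars / 2);
  return outputs
    .map((out, i) => {
      const head = out.slice(0, half).trim();
      const tail = out.length > perOutputChars ? out.slice(-half).trim() : "";
      return `Step ${i + 1}: ${head}${tail ? " … " + tail : ""}`;
    })
    .join("\n");
}
```

A real pipeline might swap this for an LLM summarization call, but even this crude version keeps the combined context bounded at roughly `perOutputChars` per step.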

How to Debug It

  1. Log prompt size before execution
    Print description.length, tool response sizes, and any accumulated memory. If one field is massive, you've found your culprit.

  2. Disable memory first
    Set memory: false on Crew. If the error disappears, your conversation history is too large.

  3. Replace tools with stubs
    Mock each tool to return a tiny fixed payload. If the error goes away, one tool is returning too much data.

  4. Binary search your prompt
    Remove half of your instructions or input data at a time until the error disappears. The last removed chunk is usually the offender.

A practical check looks like this:

console.log("task description chars:", task.description.length);
console.log("tool output chars:", JSON.stringify(toolResult).length);
console.log("memory enabled:", crewConfig.memory);

If you’re using OpenAI-compatible models under CrewAI TypeScript bindings, you may also see errors like:

  • 400 Bad Request
  • context_length_exceeded
  • This model's maximum context length is ...
  • Request too large for gpt-4o-mini
  • messages exceeds maximum length

Those are all pointing at the same class of problem.

Prevention

  • Keep each task narrow. One task should do one thing well.
  • Summarize between steps instead of passing raw transcripts forward.
  • Put hard caps on tool outputs:
    • max rows returned from queries
    • max characters from documents
    • max items in arrays passed to prompts
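Those caps can live in one small guard that every tool result passes through before it is serialized into a prompt. `capToolOutput` and its defaults are illustrative, not a CrewAI API:

```typescript
// Illustrative guard (not a CrewAI API): enforce a row cap and a character
// cap on any tool result before it reaches a prompt.
function capToolOutput<T>(
  rows: T[],
  { maxRows = 20, maxChars = 4000 }: { maxRows?: number; maxChars?: number } = {},
): string {
  const json = JSON.stringify(rows.slice(0, maxRows));
  // If the row cap alone is not enough, truncate the serialized form too.
  return json.length > maxChars ? json.slice(0, maxChars) + "…" : json;
}
```

Call it at the boundary of every tool so no single result can blow the budget on its own.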

If you build multi-step workflows in CrewAI TypeScript, treat context like memory in production systems: finite, expensive, and easy to waste.


By Cyprian Aarons, AI Consultant at Topiax.