CrewAI Tutorial (TypeScript): Adding Memory to Agents for Beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to give a CrewAI agent short-term memory in TypeScript, so it can keep context across tasks instead of treating every prompt like a blank slate. You need this when your agent has to remember user preferences, prior decisions, or intermediate results during a workflow.

What You'll Need

  • Node.js 18+
  • A TypeScript project with tsconfig.json
  • crewai installed
  • An LLM API key, such as:
    • OPENAI_API_KEY
  • A basic CrewAI setup with at least one agent and one task
  • A terminal that can run TypeScript, either via:
    • tsx
    • or ts-node

Step-by-Step

  1. Install CrewAI and a TypeScript runner.

    If you already have a project, just add the packages below. I’m using tsx because it keeps the example simple and avoids build-step noise.

    npm install crewai
    npm install -D typescript tsx @types/node
    
  2. Set your API key in the environment.

    CrewAI needs an LLM provider behind the scenes. For this tutorial, we’ll use OpenAI, so make sure the key is available before running the script.

    export OPENAI_API_KEY="your-openai-api-key"
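    If the variable is unset, the run tends to fail mid-task with a cryptic provider error. A small guard at startup gives a clearer failure. This is a sketch of my own; requireApiKey is an illustrative helper, not part of CrewAI:

```typescript
// Fail fast with a clear message if the key is missing,
// instead of letting the LLM call error out mid-run.
export function requireApiKey(name: string = "OPENAI_API_KEY"): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

    Call it once at the top of your script, before constructing any agents.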
    
  3. Create the agent that will use remembered context.

    In CrewAI, memory is turned on at the crew level. That means your agent can benefit from stored context as long as the crew is configured to retain it.

    import { Agent, Task, Crew } from "crewai";
    
    const supportAgent = new Agent({
      role: "Support Assistant",
      goal: "Help users by remembering prior context in the conversation",
      backstory: "You are a careful assistant that tracks user preferences and past answers.",
      verbose: true,
      allowDelegation: false,
    });
    
  4. Add tasks that depend on prior context.

    The first task stores useful context in the conversation flow. The second task asks the agent to use that earlier information instead of re-asking for it.

    const collectPreferences = new Task({
      description:
        "Ask the user for their preferred communication channel and store it in memory.",
      expectedOutput: "A short summary of the user's preference.",
      agent: supportAgent,
    });
    
    const followUpTask = new Task({
      description:
        "Use the stored preference to draft a follow-up message without asking again.",
      expectedOutput: "A personalized follow-up message using remembered context.",
      agent: supportAgent,
      context: [collectPreferences],
    });
    
  5. Turn on crew memory and run it.

    This is the part that actually wires memory into execution. The memory: true flag tells CrewAI to keep relevant state available across tasks in the same crew run.

    const crew = new Crew({
      agents: [supportAgent],
      tasks: [collectPreferences, followUpTask],
      verbose: true,
      memory: true,
    });
    
    async function main() {
      const result = await crew.kickoff();
      console.log("\nFinal Output:\n", result);
    }
    
    main().catch(console.error);
    
  6. Run the script and inspect whether context carries over.

    Save the file as memory-demo.ts, then run it with tsx. If memory is working, the second task should reference information established earlier in the run instead of behaving like a fresh prompt.

    npx tsx memory-demo.ts
    

Testing It

Run the script and compare the two task outputs. The first task should produce a preference summary, and the second task should reuse that summary when drafting its response. Note that with memory: true alone, context carries over within a single kickoff, not between separate runs.

If you want a clearer test, make the first task capture something specific like “email” or “Slack,” then verify that the second task uses that exact channel without asking again. That’s the simplest signal that memory is helping maintain continuity inside the crew execution.
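You can automate that check with a small helper, assuming the crew's final output resolves to a plain string in your setup (verify what kickoff returns before relying on this). The reusesChannel function below is illustrative, not a CrewAI API:

```typescript
// Minimal continuity check: does the follow-up output mention the
// channel captured earlier, without asking for it again?
export function reusesChannel(output: string, channel: string): boolean {
  const mentionsChannel = output.toLowerCase().includes(channel.toLowerCase());
  // Heuristic: treat "what ... channel/prefer" phrasing as re-asking.
  const asksAgain = /what.*(channel|prefer)/i.test(output);
  return mentionsChannel && !asksAgain;
}
```

A string heuristic like this is crude, but it turns "eyeball the logs" into a repeatable pass/fail signal you can run after each kickoff.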

Also check your logs with verbose: true. You should see task flow and intermediate reasoning steps that make it obvious when earlier context is being consumed by later work.

Next Steps

  • Add persistent storage so memory survives across separate runs, not just within one kickoff.
  • Split memory by user session if you’re building multi-user workflows.
  • Combine memory with tools so agents can remember facts they fetched from APIs or databases.
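The session-splitting idea can be sketched independently of CrewAI. SessionMemory below is a hypothetical wrapper to show the shape of per-user isolation; it is not something the library ships:

```typescript
// Hypothetical sketch: a per-session memory map you could keep
// alongside the crew, so each user's context stays isolated.
export class SessionMemory {
  private store = new Map<string, Map<string, string>>();

  // Record a fact under a specific user's session.
  remember(sessionId: string, key: string, value: string): void {
    if (!this.store.has(sessionId)) {
      this.store.set(sessionId, new Map());
    }
    this.store.get(sessionId)!.set(key, value);
  }

  // Look up a fact for one session only; other sessions never leak in.
  recall(sessionId: string, key: string): string | undefined {
    return this.store.get(sessionId)?.get(key);
  }
}
```

In a multi-user service you would key this by whatever identifies the conversation (user ID, chat ID) and feed the recalled facts into task descriptions before each kickoff.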

By Cyprian Aarons, AI Consultant at Topiax.