CrewAI Tutorial (TypeScript): adding memory to agents for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to give CrewAI agents persistent memory in TypeScript so they can carry context across tasks, sessions, and user interactions. You need this when your agent should remember prior decisions, customer details, or workflow state instead of starting from zero every run.

What You'll Need

  • Node.js 18+ and npm
  • A CrewAI TypeScript project already set up
  • crewai installed in your project
  • An LLM API key configured in your environment
    • Example: OPENAI_API_KEY
  • A memory backend you can run locally
    • For this tutorial: Redis
  • redis npm package if you want to test the connection from Node
  • Basic familiarity with:
    • Agent
    • Task
    • Crew
    • Process
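
If you want to confirm the Node.js 18+ requirement from code rather than by eyeballing node --version, a minimal sketch:

```typescript
// Checks whether a Node.js version string meets the 18+ requirement above.
function isSupportedNode(version: string): boolean {
  const major = Number.parseInt(version.split(".")[0], 10);
  return Number.isFinite(major) && major >= 18;
}

// process.versions.node is the running interpreter's version, e.g. "20.11.1".
console.log("Node 18+ available:", isSupportedNode(process.versions.node));
```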

Step-by-Step

  1. Install the runtime dependencies.
    Memory only matters if the agent can persist and retrieve state, so start by installing CrewAI plus a Redis client for validation.
npm install crewai redis dotenv
  2. Create a .env file with your model and memory settings.
    Keep API keys out of source control, and point the agent at a Redis instance that will hold conversation state.
OPENAI_API_KEY=your_openai_key_here
REDIS_URL=redis://localhost:6379
  3. Start Redis and verify it is reachable.
    If Redis is not healthy, memory will look like it works for one run and then disappear on the next.
import { createClient } from "redis";
import "dotenv/config";

async function main() {
  const client = createClient({ url: process.env.REDIS_URL });
  client.on("error", (err) => console.error("Redis error:", err));

  await client.connect();
  await client.set("crewai:memory:test", "ok");
  const value = await client.get("crewai:memory:test");

  console.log({ redisConnected: value === "ok" });
  await client.quit();
}

main().catch(console.error);
  4. Define an agent with memory enabled.
    The key part is turning on memory at the agent level so prior context can be retrieved when the same user or thread comes back later.
import "dotenv/config";
import { Agent } from "crewai";

const supportAgent = new Agent({
  role: "Customer Support Analyst",
  goal: "Resolve customer issues using prior conversation context",
  backstory:
    "You handle insurance support cases and remember important customer facts across sessions.",
  verbose: true,
  memory: true,
});

console.log("Agent created:", supportAgent.role);
  5. Add a task that forces the agent to use remembered context.
    In production, this is where you ask for continuity: previous claim number, policy preference, or unresolved issue from an earlier interaction.
import { Task } from "crewai";

// Continues the module from the previous step: supportAgent is in scope.
const followUpTask = new Task({
  description:
    "Review the customer's current message and use any remembered context to answer consistently.",
  expectedOutput:
    "A concise response that references prior relevant context when available.",
  agent: supportAgent,
});

console.log("Task ready:", followUpTask.description);
  6. Run a crew with sequential execution so memory has a predictable path through the workflow.
    Sequential processing is easier to debug when you are validating whether memory is being read and written correctly.
import { Crew, Process } from "crewai";

// Continues the module from the previous steps: supportAgent and followUpTask are in scope.
const crew = new Crew({
  agents: [supportAgent],
  tasks: [followUpTask],
  process: Process.sequential,
});

async function runCrew() {
  const result = await crew.kickoff({
    inputs: {
      customer_name: "Amina",
      previous_case_id: "CLM-10492",
      current_message: "I need an update on my claim status.",
    },
  });

  console.log(String(result));
}

runCrew().catch(console.error);

Testing It

Run the script twice with the same inputs and compare the outputs. On the second run, the agent should behave as if it has prior context instead of treating the request like a fresh case.

If you are using Redis-backed memory in your setup, inspect keys after execution to confirm state is being written somewhere durable. In practice, you want to see stable retrieval across separate Node processes, not just within one runtime.
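
The exact key names depend on how your memory layer is wired; if you adopt an explicit convention such as crewai:memory:&lt;tenant&gt;:&lt;customer&gt; (an assumption for illustration, not a CrewAI default), inspecting state per customer becomes a simple filter. A sketch:

```typescript
// Hypothetical key convention: crewai:memory:<tenant>:<customer>. Adjust the
// prefix to match whatever your memory backend actually writes to Redis.
function memoryKeyPrefix(tenant: string, customerId: string): string {
  return `crewai:memory:${tenant}:${customerId}`;
}

// Narrows a list of keys (e.g. collected via SCAN) to one customer's entries.
function keysForCustomer(
  keys: string[],
  tenant: string,
  customerId: string
): string[] {
  const prefix = memoryKeyPrefix(tenant, customerId);
  return keys.filter((key) => key === prefix || key.startsWith(prefix + ":"));
}

const scanned = [
  "crewai:memory:acme:amina:facts",
  "crewai:memory:acme:bob:facts",
  "crewai:memory:test",
];
console.log(keysForCustomer(scanned, "acme", "amina"));
```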

For a real test, change only one field in inputs, such as current_message, while keeping previous_case_id constant. The response should stay consistent on identity-sensitive details like customer name and case reference.
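
That comparison can be automated by checking that identity-sensitive values survive into the second response. A small sketch (the sample values mirror the inputs used earlier):

```typescript
// Returns the identity-sensitive details (customer name, case reference) that
// a response fails to mention, so two runs can be compared for continuity.
function missingIdentityDetails(response: string, details: string[]): string[] {
  const haystack = response.toLowerCase();
  return details.filter((detail) => !haystack.includes(detail.toLowerCase()));
}

const secondRun = "Hi Amina, claim CLM-10492 is still under review.";
console.log(missingIdentityDetails(secondRun, ["Amina", "CLM-10492"]));
```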

If memory does not appear to work, check three things first:

  • Your LLM key is valid
  • Redis is reachable on the configured URL
  • The agent was created with memory: true
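
The first two checks can be partially automated with a fail-fast guard before any agent code runs; a sketch using the variable names from the .env above (this only confirms the variables are set — key validity and Redis reachability still need the earlier scripts):

```typescript
// Returns the required environment variables that are missing or empty.
function missingEnvVars(
  env: Record<string, string | undefined>,
  required: string[]
): string[] {
  return required.filter((name) => !env[name] || env[name]!.trim() === "");
}

const missing = missingEnvVars(process.env, ["OPENAI_API_KEY", "REDIS_URL"]);
if (missing.length > 0) {
  console.error("Fix these before debugging memory:", missing.join(", "));
}
```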

Next Steps

  • Add short-term vs long-term memory separation for different workflows.
  • Persist conversation metadata by tenant or customer ID for multi-user systems.
  • Combine memory with tools so agents can retrieve CRM or policy data before answering.
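
The first two bullets can start as a simple routing decision at write time. A hypothetical sketch (the scope labels, tier names, and key layout are assumptions for illustration, not CrewAI API):

```typescript
type MemoryScope = "session" | "durable";

interface MemoryRecord {
  tenantId: string;
  customerId: string;
  scope: MemoryScope;
  fact: string;
}

// Routes a record to a short-term or long-term store key: session-scoped facts
// (current ticket state) stay short-term, durable facts (policy preferences)
// go long-term, namespaced by tenant and customer for multi-user systems.
function storageKey(record: MemoryRecord): string {
  const tier = record.scope === "session" ? "short_term" : "long_term";
  return `memory:${tier}:${record.tenantId}:${record.customerId}`;
}

console.log(
  storageKey({
    tenantId: "acme",
    customerId: "amina",
    scope: "durable",
    fact: "Prefers email contact",
  })
);
```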

By Cyprian Aarons, AI Consultant at Topiax.