Haystack Tutorial (TypeScript): persisting agent state for advanced developers
This tutorial shows how to persist agent state in a Haystack TypeScript app so conversations, tool results, and intermediate context survive process restarts. You need this when your agent is handling long-running workflows, multi-turn support cases, or anything that cannot lose state between requests.
What You'll Need
- Node.js 18+ and npm
- A TypeScript project with a `tsconfig.json`
- `haystack` installed in your project
- An LLM API key for the model you want to use
- A place to persist state:
  - a local JSON file for development
  - Redis, Postgres, or S3-backed storage for production
- Basic familiarity with Haystack pipelines and chat/message objects
Step-by-Step
- Start with a small state shape that captures the minimum useful context. For agent persistence, do not store only raw prompts; store the messages plus any derived fields you need for routing or recovery.
```typescript
export type AgentState = {
  sessionId: string;
  messages: Array<{
    role: "system" | "user" | "assistant" | "tool";
    content: string;
  }>;
  lastToolName?: string;
  updatedAt: string;
};
```
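Because the state round-trips through `JSON.parse`, it is worth guarding the loaded value at runtime so a corrupt or hand-edited state file fails fast instead of feeding bad data into the agent loop. A minimal sketch in plain TypeScript (the `isAgentState` name and the guard itself are illustrative, not part of Haystack):

```typescript
type Role = "system" | "user" | "assistant" | "tool";

type AgentState = {
  sessionId: string;
  messages: Array<{ role: Role; content: string }>;
  lastToolName?: string;
  updatedAt: string;
};

const ROLES: Role[] = ["system", "user", "assistant", "tool"];

// Runtime type guard: validates the shape produced by JSON.parse
// before it is trusted as AgentState.
export function isAgentState(value: unknown): value is AgentState {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.sessionId === "string" &&
    typeof v.updatedAt === "string" &&
    Array.isArray(v.messages) &&
    v.messages.every(
      (m) =>
        typeof m === "object" &&
        m !== null &&
        ROLES.includes((m as { role: Role }).role) &&
        typeof (m as { content: unknown }).content === "string",
    )
  );
}
```

Calling this right after `JSON.parse` in `loadState` lets you return `null` (and start a fresh session) rather than crash on a malformed file.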
- Create a file-backed store so you can verify persistence before moving to Redis or a database. This keeps the example executable and makes the save/load behavior obvious.
```typescript
import { promises as fs } from "node:fs";

const STATE_FILE = "./agent-state.json";

export async function loadState(): Promise<AgentState | null> {
  try {
    const raw = await fs.readFile(STATE_FILE, "utf8");
    return JSON.parse(raw) as AgentState;
  } catch {
    return null;
  }
}

export async function saveState(state: AgentState): Promise<void> {
  await fs.writeFile(STATE_FILE, JSON.stringify(state, null, 2), "utf8");
}
```
- Build the agent loop around persisted state instead of ephemeral in-memory variables. The important pattern is: load state at startup, append the new user input, call the model, then write the updated transcript back to storage.
```typescript
import { ChatPromptBuilder } from "@haystack-ai/core";
import { OpenAIChatGenerator } from "@haystack-ai/openai";

const generator = new OpenAIChatGenerator({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY!,
});

const promptBuilder = new ChatPromptBuilder();

export async function runTurn(sessionId: string, userText: string) {
  const existing = await loadState();
  // Reuse persisted state only when it belongs to this session;
  // otherwise start a fresh transcript.
  const state: AgentState =
    existing && existing.sessionId === sessionId
      ? existing
      : {
          sessionId,
          messages: [
            { role: "system", content: "You are a helpful support agent." },
          ],
          updatedAt: new Date().toISOString(),
        };

  state.messages.push({ role: "user", content: userText });

  const prompt = promptBuilder.build({
    messages: state.messages.map((m) => ({ role: m.role, content: m.content })),
  });

  const result = await generator.run({ messages: prompt.messages });
  const reply = result.replies[0].content;

  state.messages.push({ role: "assistant", content: reply });
  state.updatedAt = new Date().toISOString();
  await saveState(state);

  return reply;
}
```
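Since every turn appends two messages, the persisted transcript grows without bound and will eventually blow past the model's context window. A minimal trimming sketch that keeps the system prompt plus the most recent messages; `trimHistory`, the `Msg` alias, and the default cap are illustrative choices, not Haystack APIs:

```typescript
type Msg = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
};

// Keep all system messages plus the most recent messages up to
// maxMessages total, so the transcript stays inside the context window.
export function trimHistory(messages: Msg[], maxMessages = 20): Msg[] {
  if (messages.length <= maxMessages) return messages;
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  const budget = maxMessages - system.length;
  const recent = budget > 0 ? rest.slice(-budget) : [];
  return [...system, ...recent];
}
```

You would call this on `state.messages` just before building the prompt, while still persisting the full untrimmed transcript; that way the stored history stays complete for auditing and recovery.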
- Add tool-result persistence if your agent uses external actions. In real systems, this matters because the next turn often depends on whether a tool already ran and what it returned.
```typescript
export async function recordToolResult(
  sessionId: string,
  toolName: string,
  toolOutput: string,
): Promise<void> {
  const existing = await loadState();
  if (!existing || existing.sessionId !== sessionId) {
    throw new Error("No matching session found");
  }
  existing.lastToolName = toolName;
  existing.messages.push({
    role: "tool",
    content: `${toolName}: ${toolOutput}`,
  });
  existing.updatedAt = new Date().toISOString();
  await saveState(existing);
}
```
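Encoding the result as a plain `${toolName}: ${toolOutput}` string loses structure. If later turns need to inspect fields of the output, one option is to serialize a small JSON envelope into the message content instead. The envelope shape below is an assumption for illustration, not a Haystack format:

```typescript
// Hypothetical envelope: tool name, timestamp, and structured output,
// stored as JSON inside the tool message's content string.
type ToolEnvelope = { tool: string; ranAt: string; output: unknown };

export function encodeToolResult(tool: string, output: unknown): string {
  const envelope: ToolEnvelope = {
    tool,
    ranAt: new Date().toISOString(),
    output,
  };
  return JSON.stringify(envelope);
}

// Returns null for content that is not a valid envelope (e.g. plain
// chat text), so callers can fall back to treating it as a string.
export function decodeToolResult(content: string): ToolEnvelope | null {
  try {
    const parsed = JSON.parse(content) as ToolEnvelope;
    return typeof parsed.tool === "string" ? parsed : null;
  } catch {
    return null;
  }
}
```

With this in place, `recordToolResult` would push `encodeToolResult(toolName, toolOutput)` as the message content, and the next turn can recover typed fields instead of re-parsing free text.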
- Wrap it in a simple CLI entry point so you can test persistence across separate process runs. The first run creates the file; the second run should continue from the prior transcript instead of starting fresh.
```typescript
async function main() {
  const sessionId = process.env.SESSION_ID ?? "demo-session";
  const userText = process.argv.slice(2).join(" ") || "Hello";

  const reply = await runTurn(sessionId, userText);
  console.log(`Assistant: ${reply}`);

  const current = await loadState();
  console.log(`Stored messages: ${current?.messages.length ?? 0}`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
Testing It
Run the script twice with the same `SESSION_ID`. On the first run, it should create `agent-state.json` and store the initial conversation turn. On the second run, confirm that the `Stored messages` count increases instead of resetting to just the system prompt.
Check that assistant replies change based on earlier turns. If you ask a follow-up like "What did I just ask you?", the model should have access to the prior messages because they were reloaded from disk.
Inspect `agent-state.json` directly. You should see the full transcript plus metadata like `updatedAt` and `lastToolName`, which is enough to rebuild context after a restart.
Next Steps
- Replace the file store with Redis using a TTL per session.
- Add optimistic locking so concurrent turns do not overwrite each other.
- Persist structured tool outputs separately from message history for cleaner recovery paths.
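The optimistic-locking idea can be sketched independently of the storage backend: carry a version counter in the persisted record and reject any save whose expected version no longer matches what is on disk, so the loser of a race retries instead of silently overwriting. `Versioned` and `saveIfUnchanged` are hypothetical helpers, not Haystack APIs:

```typescript
type Versioned<T> = T & { version: number };

// Compare-and-set over the loaded record: returns the next versioned
// value to persist, or throws if another writer got there first.
export function saveIfUnchanged<T>(
  onDisk: Versioned<T> | null,
  expectedVersion: number,
  next: T,
): Versioned<T> {
  const current = onDisk?.version ?? 0;
  if (current !== expectedVersion) {
    throw new Error(
      `Conflict: expected version ${expectedVersion}, found ${current}`,
    );
  }
  return { ...next, version: current + 1 };
}
```

In the file-backed store you would re-read the file inside this check before writing; with Redis the same pattern maps onto `WATCH`/`MULTI` or a Lua script so the compare and the write are one atomic step.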
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.