Haystack Tutorial (TypeScript): persisting agent state for beginners
This tutorial shows how to keep a Haystack agent’s state across multiple TypeScript runs by saving it to disk and loading it back later. You need this when your agent has memory, conversation history, or tool context that should survive process restarts instead of resetting every time.
What You'll Need
- Node.js 18+ and npm
- A TypeScript project with `"type": "module"` enabled
- Haystack for TypeScript installed in your project
- An OpenAI API key exported as `OPENAI_API_KEY`
- Basic familiarity with Haystack agents, tools, and chat messages
- File system access for reading and writing the saved state file
Step-by-Step
- Create a small TypeScript project and install the dependencies. Keep this minimal so you can focus on the persistence flow, not app scaffolding.

```bash
mkdir haystack-state-demo
cd haystack-state-demo
npm init -y
npm install haystack openai dotenv
npm install -D typescript tsx @types/node
```
- Set up your TypeScript config and environment file. The important part is using ESM (`"type": "module"`) so the Haystack imports work cleanly in Node.

Your `package.json` should look like this:

```json
{
  "name": "haystack-state-demo",
  "type": "module",
  "scripts": {
    "start": "tsx src/index.ts"
  },
  "dependencies": {
    "dotenv": "^16.4.5",
    "haystack": "^1.0.0",
    "openai": "^4.0.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "tsx": "^4.16.0",
    "typescript": "^5.5.0"
  }
}
```

Then create the `.env` and `tsconfig.json` files:

```bash
cat > .env << 'EOF'
OPENAI_API_KEY=your_openai_key_here
EOF

cat > tsconfig.json << 'EOF'
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "skipLibCheck": true,
    "outDir": "./dist"
  },
  "include": ["src/**/*.ts"]
}
EOF
```
- Build a simple persistent state object and save/load helpers. In practice, you usually persist conversation history plus any agent metadata you need to reconstruct context after restart.

```typescript
// src/state.ts
import { readFile, writeFile } from 'node:fs/promises';
import { randomUUID } from 'node:crypto';

export type AgentState = {
  sessionId: string;
  messages: { role: 'user' | 'assistant'; content: string }[];
};

const STATE_FILE = './agent-state.json';

// Load saved state from disk; if the file is missing or unreadable,
// start a fresh session instead of crashing.
export async function loadState(): Promise<AgentState> {
  try {
    const raw = await readFile(STATE_FILE, 'utf8');
    return JSON.parse(raw) as AgentState;
  } catch {
    return { sessionId: randomUUID(), messages: [] };
  }
}

export async function saveState(state: AgentState): Promise<void> {
  await writeFile(STATE_FILE, JSON.stringify(state, null, 2), 'utf8');
}
```

Note the explicit `randomUUID` import from `node:crypto`: the global `crypto` object is not guaranteed to be available on Node 18, so importing it directly keeps the example working across versions.
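One refinement worth making even in the flat-file version: write atomically, so a crash mid-save cannot leave a half-written `agent-state.json`. A minimal sketch (the `saveStateAtomic` name and the `.tmp` suffix are my own choices, not part of the tutorial's code):

```typescript
import { writeFile, rename } from 'node:fs/promises';

type AgentState = {
  sessionId: string;
  messages: { role: 'user' | 'assistant'; content: string }[];
};

// Write to a temp file first, then rename it over the real file.
// rename() is atomic on the same filesystem, so a crash mid-write
// leaves the previous state file intact rather than corrupted.
export async function saveStateAtomic(
  state: AgentState,
  file = './agent-state.json'
): Promise<void> {
  const tmp = `${file}.tmp`;
  await writeFile(tmp, JSON.stringify(state, null, 2), 'utf8');
  await rename(tmp, file);
}
```

If you later move to a database, the database's own transaction guarantees replace this trick.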
- Wire the state into the chat loop. The key idea is to rebuild the prompt from persisted messages before each call, then append the new assistant response back into the same state file. To keep the example minimal, this calls the OpenAI client directly; the same load-call-save pattern applies when the call goes through a Haystack agent.

```typescript
// src/index.ts
import 'dotenv/config';
import OpenAI from 'openai';
import { loadState, saveState } from './state.js';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  // Rebuild conversation context from disk before every run.
  const state = await loadState();
  const userInput =
    process.argv.slice(2).join(' ') || 'Summarize what you remember about me.';
  state.messages.push({ role: 'user', content: userInput });

  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content:
          'You are a helpful assistant in a persistent session. Use prior messages as memory.'
      },
      // Replay the persisted history so the model sees earlier turns.
      ...state.messages.map((m) => ({ role: m.role, content: m.content }))
    ]
  });

  const reply = response.choices[0]?.message?.content ?? '';
  console.log(`Session: ${state.sessionId}`);
  console.log(`Assistant: ${reply}`);

  // Persist the new turn so the next run sees it.
  state.messages.push({ role: 'assistant', content: reply });
  await saveState(state);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
- Run it twice to confirm the second run sees prior context. The first run creates the file; the second run reuses it and should answer with awareness of earlier turns.

```bash
npm run start -- "My name is Dana and I work in claims."
npm run start -- "What is my name and what do I work on?"
```
Testing It
Open `agent-state.json` after the first run and confirm that both the user message and assistant reply were written to disk. Then run the command again with a follow-up question; if persistence is working, the model should answer using earlier messages instead of treating it like a fresh conversation.
If you want a stricter test, delete `agent-state.json`, rerun once, then compare behavior before and after persistence exists.
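The manual check can also be scripted. A hedged sketch of such a check (the `verifyState` helper is hypothetical, assuming the `agent-state.json` layout from the state module above):

```typescript
import { readFile } from 'node:fs/promises';

type AgentState = {
  sessionId: string;
  messages: { role: 'user' | 'assistant'; content: string }[];
};

// Read the persisted state and assert it has the shape the chat
// loop expects: a session id plus a list of user/assistant turns.
export async function verifyState(
  file = './agent-state.json'
): Promise<AgentState> {
  const state = JSON.parse(await readFile(file, 'utf8')) as AgentState;
  if (typeof state.sessionId !== 'string') {
    throw new Error('missing sessionId');
  }
  if (!Array.isArray(state.messages)) {
    throw new Error('messages is not an array');
  }
  for (const m of state.messages) {
    if (m.role !== 'user' && m.role !== 'assistant') {
      throw new Error(`unexpected role: ${m.role}`);
    }
  }
  return state;
}
```

Running this between the two `npm run start` invocations confirms the file is well-formed before you rely on the model's answer as evidence.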
Next Steps
- Replace the flat JSON file with SQLite or Postgres once you need concurrency or multiple sessions.
- Add message trimming so long-running sessions don't blow past token limits.
- Store structured agent metadata alongside messages, such as customer ID, workflow step, or tool outputs.
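The trimming idea can be sketched as a small pure helper. This uses a character budget as a crude stand-in for tokens (a real implementation would count with a tokenizer); the `trimMessages` name and the budget value are illustrative:

```typescript
type Message = { role: 'user' | 'assistant'; content: string };

// Keep the most recent messages whose combined length fits under a
// character budget. Characters are a rough proxy for tokens (about
// 4 characters per token for English text); swap in a tokenizer if
// you need exact counts.
export function trimMessages(messages: Message[], maxChars = 8000): Message[] {
  const kept: Message[] = [];
  let used = 0;
  // Walk backwards so the newest context survives the cut.
  for (let i = messages.length - 1; i >= 0; i--) {
    const len = messages[i].content.length;
    if (used + len > maxChars) break;
    kept.unshift(messages[i]);
    used += len;
  }
  return kept;
}
```

Call it on `state.messages` just before building the prompt, so the file on disk keeps the full history while the model only sees what fits.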
Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.