How to Fix 'embedding dimension mismatch' in LangGraph (TypeScript)
An "embedding dimension mismatch" error means the vector you’re trying to store or compare has a different length than the index expects. In LangGraph TypeScript apps, it usually shows up when a node writes embeddings to a memory store, vector store, or retrieval layer that was initialized with a different embedding model than the one currently producing vectors.
The failure is almost always deterministic: same code path, same wrong dimensions. You’ll typically hit it after switching embedding models, changing providers, or reusing an existing index created with an older model.
The Most Common Cause
The #1 cause is mixing embedding models with different output sizes.
A classic example: you created your vector store with OpenAI text-embedding-3-small and later started generating embeddings with text-embedding-3-large, or vice versa. LangGraph isn’t the root problem here; it just passes vectors into the store, and the store rejects them.
| Broken pattern | Fixed pattern |
|---|---|
| Store created for one embedding size, app later generates another | Use the same embedding model for both indexing and querying |
```typescript
// ❌ Broken
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const indexEmbeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small", // 1536 dims
});
const queryEmbeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-large", // 3072 dims
});

const docs = ["customer policy", "claims workflow"];
const store = await MemoryVectorStore.fromTexts(docs, {}, indexEmbeddings);

// Later in a LangGraph node:
const queryVector = await queryEmbeddings.embedQuery("policy lookup");

// Runtime error from vector store / index:
// Error: embedding dimension mismatch: expected 1536, got 3072
await store.similaritySearchVectorWithScore(queryVector, 3);
```
```typescript
// ✅ Fixed
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// One embedding client for both indexing and querying.
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

const docs = ["customer policy", "claims workflow"];
const store = await MemoryVectorStore.fromTexts(docs, {}, embeddings);

const queryVector = await embeddings.embedQuery("policy lookup");
await store.similaritySearchVectorWithScore(queryVector, 3);
```
If you’re using LangGraph state nodes, keep the embedding instance centralized. Don’t create one embedding client in your ingestion graph and another in your retrieval graph unless you’ve verified they return the same dimension.
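One way to enforce that is a minimal shared module that every node imports. This is a sketch, not LangGraph API: the `embeddings.ts` file name and the `MODEL_DIMENSIONS` table are hypothetical, though the dimensions listed are OpenAI's published output sizes for those models.

```typescript
// embeddings.ts — hypothetical single source of truth for the embedding contract.
// Dimensions below are OpenAI's published sizes for these models.
const MODEL_DIMENSIONS: Record<string, number> = {
  "text-embedding-ada-002": 1536,
  "text-embedding-3-small": 1536,
  "text-embedding-3-large": 3072,
};

export const EMBEDDING_MODEL = "text-embedding-3-small";
export const EMBEDDING_DIMENSION = MODEL_DIMENSIONS[EMBEDDING_MODEL];

// Every graph node would import a shared client instead of constructing its own:
// import { OpenAIEmbeddings } from "@langchain/openai";
// export const embeddings = new OpenAIEmbeddings({ model: EMBEDDING_MODEL });
```

With this in place, ingestion and retrieval graphs can't silently drift apart: changing the model means editing one constant, which makes the dimension change visible in review.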
Other Possible Causes
1) Reusing an old vector index after changing models
If your persisted index was built with one dimension and your current app uses another, you’ll get the mismatch immediately.
```typescript
// Config changed from:
model: "text-embedding-ada-002" // 1536 dims
// To:
model: "text-embedding-3-large" // 3072 dims
```
Fix:
- Rebuild the entire index
- Or keep the old embedding model until migration is complete
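A rebuild is conceptually just "re-embed every source text with the new model, verify the new dimension, write to a fresh index." Here's a provider-agnostic sketch: `EmbeddingsLike` stands in for LangChain's `Embeddings` interface, and `rebuildIndex` is a hypothetical migration helper, not a library function.

```typescript
// Minimal shape of any embeddings client we care about here.
interface EmbeddingsLike {
  embedDocuments(texts: string[]): Promise<number[][]>;
}

// Re-embed all source texts with the new model; fail fast if the
// output dimension doesn't match the dimension of the new index.
async function rebuildIndex(
  texts: string[],
  newEmbeddings: EmbeddingsLike,
  expectedDim: number
): Promise<number[][]> {
  const vectors = await newEmbeddings.embedDocuments(texts);
  for (const v of vectors) {
    if (v.length !== expectedDim) {
      throw new Error(
        `Embedding dimension mismatch during reindex: expected ${expectedDim}, got ${v.length}`
      );
    }
  }
  return vectors; // write these into a freshly created index/collection
}
```

The key design point: the check runs before anything touches the new index, so a misconfigured model aborts the migration instead of half-populating a collection.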
2) Mixing providers that don’t share dimensions
Different providers often use different output sizes even if they solve the same task.
```typescript
import { CohereEmbeddings } from "@langchain/cohere";
import { OpenAIEmbeddings } from "@langchain/openai";

// ❌ Indexed with Cohere...
const indexer = new CohereEmbeddings({ model: "embed-english-v3.0" });
// ...queried with OpenAI.
const retriever = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
```
Fix:
- Use one provider end-to-end for a given index
- If you must migrate providers, reindex all documents
3) Hardcoded vector arrays in tests or mocks
A lot of TypeScript projects fail in local tests because someone mocked embeddings with a random array length.
```typescript
// ❌ Mock returns wrong length
const fakeEmbedding = Array(768).fill(0);
// Store expects 1536 based on production config
```
Fix:
- Make mocks match production dimensions exactly
- Pull expected dimension from config instead of hardcoding it
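A sketch of a config-driven mock. `EXPECTED_DIMENSION` and `fakeEmbedding` are hypothetical names; in a real project the constant would be imported from the same module that configures the production embedding model.

```typescript
// ✅ Mock derives its length from shared config instead of a hardcoded 768.
const EXPECTED_DIMENSION = 1536; // would be imported from the embeddings config module

function fakeEmbedding(dim: number = EXPECTED_DIMENSION): number[] {
  // Deterministic values; only the length has to match production.
  return Array.from({ length: dim }, (_, i) => (i % 7) / 10);
}
```

If the production model ever changes, the mock follows automatically, and dimension-mismatch bugs surface in CI instead of in the vector DB.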
4) Wrong collection/index configuration in your vector DB
Some databases enforce dimension at collection creation time.
```typescript
// Pinecone example, conceptually:
// the collection was created for 1536-dim vectors,
// but the app now sends 3072-dim vectors.
```
Fix:
- Delete and recreate the collection with the correct dimension
- Verify schema before writing any data
How to Debug It
1) Print the actual vector length at runtime
Add logging right before insertion or search.

```typescript
const vector = await embeddings.embedQuery("test");
console.log("vector length:", vector.length);
```

If this number doesn’t match your index dimension, you found the problem.

2) Check how the index was created
Look at your vector DB setup code and migration history:
- What embedding model built the current collection?
- What was its output dimension?
- Has that changed since deployment?

3) Search for multiple embedding clients
In LangGraph projects, this often happens across nodes:
- One node uses OpenAIEmbeddings
- Another uses CohereEmbeddings
- A third uses a mocked local embedder

Make sure every node touching the same store uses the same embedding contract.

4) Inspect the exact stack trace
The error usually comes from the underlying vector layer, not LangGraph itself. You’ll often see something like:
- `Error: embedding dimension mismatch`
- `PineconeError: Vector dimension 3072 does not match the dimension of the index 1536`
- `BadRequestError: expected vector of size X but got Y`

That tells you whether this is a generation issue or a storage/index issue.
Prevention
- Centralize embeddings in one module and import that everywhere. Don’t instantiate ad hoc embedding clients inside LangGraph nodes unless you absolutely need to.
- Treat embedding model changes as schema migrations. If you change models, rebuild indexes and update tests together.
- Assert dimensions in code before writes. Fail fast with a clear message instead of letting the vector DB throw later.

```typescript
function assertDimension(vector: number[], expected: number) {
  if (vector.length !== expected) {
    throw new Error(
      `Embedding dimension mismatch: expected ${expected}, got ${vector.length}`
    );
  }
}
```
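A usage sketch for that guard (the function is restated so the snippet runs standalone; `EXPECTED_DIMENSION` is a hypothetical config constant, and the plain array stands in for a real `await embeddings.embedQuery(text)` call):

```typescript
// Guard restated from above so this snippet is self-contained.
function assertDimension(vector: number[], expected: number): void {
  if (vector.length !== expected) {
    throw new Error(
      `Embedding dimension mismatch: expected ${expected}, got ${vector.length}`
    );
  }
}

const EXPECTED_DIMENSION = 1536; // hypothetical shared config value

// In a real node this would be `await embeddings.embedQuery(text)`:
const vector = new Array(1536).fill(0);

// Runs silently when lengths agree; throws a descriptive error before
// the write ever reaches the vector DB when they don't.
assertDimension(vector, EXPECTED_DIMENSION);
```

Calling the guard right before every `addVectors`/`similaritySearch` boundary turns a confusing provider-side 400 into a stack trace that points at your own code.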
If you hit this error in LangGraph TypeScript, start by checking one thing: are indexing and querying using exactly the same embedding model? In most cases, that’s the whole bug.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.