How to Fix 'embedding dimension mismatch during development' in AutoGen (Python)
What the error means
An "embedding dimension mismatch during development" error usually means your vector store, your embedder, and your stored documents do not agree on the embedding size. In AutoGen workflows, this shows up when you switch models, reuse an old index, or mix embeddings from different providers.
The failure often appears during retrieval setup with classes like ChromaDBVectorMemory, QdrantMemory, MongoDBAtlasVectorSearchMemory, or custom memory adapters built on autogen.agentchat.contrib.capabilities.
The Most Common Cause
The #1 cause is changing the embedding model after data has already been indexed.
If you indexed documents with text-embedding-ada-002 and later query with text-embedding-3-large, or you moved from one local model to another, the vector store still contains old vectors with a different dimension. AutoGen then fails when it tries to compare query embeddings against stored embeddings.
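You can see the size difference directly by asking each model for a vector. A minimal sketch, assuming the openai Python package (v1+) and an API key in your environment:

```python
from openai import OpenAI

# Minimal sketch: compare the vector sizes of the two models mentioned above.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for model in ("text-embedding-ada-002", "text-embedding-3-large"):
    resp = client.embeddings.create(model=model, input="hello world")
    print(model, len(resp.data[0].embedding))

# text-embedding-ada-002 1536
# text-embedding-3-large 3072
```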
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Reuse an existing collection/index after changing embedding model | Rebuild the collection/index when the embedding model changes |
| Let query embeddings come from one model and stored vectors from another | Pin one embedding model for the whole project |
```python
# BROKEN
from autogen.agentchat.contrib.vector_memory import ChromaDBVectorMemory

# The "support_docs" collection was originally indexed with
# text-embedding-ada-002 (1536-dim vectors).
# Later, someone changed only the embedding model and kept the old collection:
memory = ChromaDBVectorMemory(
    client=None,
    collection_name="support_docs",
    embedding_model="text-embedding-3-large",  # now produces 3072-dim vectors
)

# Querying now can trigger:
# ValueError: Embedding dimension mismatch: expected 1536, got 3072
```
```python
# FIXED
from autogen.agentchat.contrib.vector_memory import ChromaDBVectorMemory

# Use one embedding model consistently.
memory = ChromaDBVectorMemory(
    client=None,
    collection_name="support_docs_v2",  # new collection for the new dimensions
    embedding_model="text-embedding-3-small",
)

# If you change embedding_model later, create a new collection
# or delete/rebuild the old one.
```
If you need to keep the same collection name, delete the old vectors first and re-ingest everything with the new embedder.
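A rough sketch of that delete-and-rebuild step against Chroma directly, assuming the chromadb and openai packages; the persistence path, collection name, and documents are placeholders:

```python
import chromadb
from openai import OpenAI

# Rough sketch: drop the old-dimension vectors and re-ingest with the new model.
oai = OpenAI()  # assumes OPENAI_API_KEY is set


def embed_texts(texts):
    # One embedding model for the whole rebuild.
    resp = oai.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]


client = chromadb.PersistentClient(path="./chroma_store")

# Remove the collection that still holds the old vectors...
client.delete_collection(name="support_docs")

# ...then recreate it and re-ingest everything with the new embedder.
collection = client.create_collection(name="support_docs")
docs = ["Refund policy ...", "Shipping times ..."]
collection.add(
    ids=[f"doc-{i}" for i in range(len(docs))],
    documents=docs,
    embeddings=embed_texts(docs),
)
```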
Other Possible Causes
1. Mixing local and hosted embedders
A common mistake is generating document vectors with one embedder and query vectors with another.
```python
# Broken: document vectors from SentenceTransformer,
# query vectors from OpenAI embeddings.
doc_embedder = "all-MiniLM-L6-v2"          # 384 dims
query_embedder = "text-embedding-3-large"  # 3072 dims
```
Fix: use one embedder end-to-end, or keep separate stores per embedder.
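One way to enforce that is to route ingestion and queries through a single embedder object. A minimal sketch using sentence-transformers; SharedEmbedder is a made-up name used here for illustration:

```python
from sentence_transformers import SentenceTransformer

# Minimal sketch: one embedder object shared by ingestion and search.
class SharedEmbedder:
    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
        self.model = SentenceTransformer(model_name)

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        return self.model.encode(texts).tolist()

    def embed_query(self, text: str) -> list[float]:
        return self.model.encode([text])[0].tolist()


embedder = SharedEmbedder()
doc_vecs = embedder.embed_documents(["refund policy", "shipping times"])
query_vec = embedder.embed_query("how do refunds work?")
assert len(doc_vecs[0]) == len(query_vec)  # both 384-dimensional
```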
2. Reusing persisted data after dependency changes
If you upgraded packages or changed your embedding library version, persisted vectors may no longer match expectations.
```python
# Example symptom:
# ValueError: shapes (1,3072) and (1,1536) not aligned
```
Fix: wipe the persisted store and rebuild it after dependency upgrades that affect embeddings.
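Before wiping anything, you can confirm the store is actually stale by comparing one persisted vector with what your current embedder produces. A rough sketch with Chroma and sentence-transformers; the path, collection name, and model name are placeholders:

```python
import chromadb
from sentence_transformers import SentenceTransformer

# Rough sketch: check whether persisted vectors still match the current embedder.
client = chromadb.PersistentClient(path="./chroma_store")
collection = client.get_collection(name="support_docs")

# Dimension of one vector that is already persisted in the store.
sample = collection.get(limit=1, include=["embeddings"])
stored_dim = len(sample["embeddings"][0])

# Dimension the current embedder produces.
model = SentenceTransformer("all-MiniLM-L6-v2")
current_dim = len(model.encode(["probe"])[0])

if stored_dim != current_dim:
    print(f"Stale store: stored={stored_dim}, current={current_dim}, rebuild it")
```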
3. Wrong vector store configuration
Some stores require explicit dimension settings. If the index was created with one dimension and your current embedder outputs another, inserts or searches fail.
```python
# Qdrant example: index configured for 1536-dim vectors.
vector_size = 1536
# But current embedder returns 3072 dims.
```
Fix: recreate the collection/index with the correct vector size before inserting data.
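With Qdrant, for example, the collection has to be recreated with the new size before any inserts. A minimal sketch, assuming a recent qdrant-client package and a local instance; the URL, collection name, and size are placeholders:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

# Minimal sketch: recreate the collection with the new embedder's dimension.
client = QdrantClient(url="http://localhost:6333")

NEW_DIM = 3072  # e.g. text-embedding-3-large

# Drop the old 1536-dim collection if it exists, then create a fresh one.
if client.collection_exists("support_docs"):
    client.delete_collection("support_docs")

client.create_collection(
    collection_name="support_docs",
    vectors_config=VectorParams(size=NEW_DIM, distance=Distance.COSINE),
)
```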
4. Hidden mismatch in custom memory code
If you wrote a custom memory adapter around AutoGen, you may be normalizing inputs differently between ingestion and search.
```python
def embed_documents(texts):
    return doc_model.encode(texts)

def embed_query(text):
    return query_model.encode([text])  # a different model here is enough to break it
```
Fix: inject a single embedder instance into both paths.
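A minimal sketch of that injection, with a made-up VectorMemoryAdapter class and a plain in-memory list standing in for the real vector store:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Minimal sketch: one injected embedder used by both ingestion and search.
class VectorMemoryAdapter:
    def __init__(self, embedder: SentenceTransformer):
        self.embedder = embedder  # single source of truth for dimensions
        self.vectors: list[np.ndarray] = []
        self.texts: list[str] = []

    def add(self, texts: list[str]) -> None:
        self.vectors.extend(self.embedder.encode(texts))
        self.texts.extend(texts)

    def search(self, query: str, k: int = 3) -> list[str]:
        q = self.embedder.encode([query])[0]  # same embedder, same dimension
        scores = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in self.vectors
        ]
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]


memory = VectorMemoryAdapter(SentenceTransformer("all-MiniLM-L6-v2"))
memory.add(["refund policy", "shipping times"])
print(memory.search("how do refunds work?", k=1))
```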
How to Debug It
- Print the embedding dimension at both write time and read time. Add logging around your embed call, for example `vec = embedder.embed_query("hello"); print(len(vec))`. If the write-time and query-time lengths differ, that is your bug (a logging sketch follows this list).
- Check which AutoGen memory class is failing. Look at the stack trace for classes like:
  - ChromaDBVectorMemory
  - QdrantMemory
  - MilvusMemory
  - custom subclasses of BaseChatMessageHistory

  The failing class tells you whether this happens on insert or retrieval.
- Inspect persisted collections or indexes. Compare stored vector dimensions with current output dimensions. For example:
  - old store: 1536
  - current embedder: 3072

  If those numbers differ, rebuild the store.
- Temporarily delete all persisted state and retest. If the error disappears after clearing:
  - the Chroma persistence directory
  - the Qdrant collection
  - the FAISS index file

  then you are dealing with stale vectors, not an AutoGen bug.
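Here is a rough sketch of the logging check from the first debugging step, written as a small guard you can wrap around whatever embed calls your memory class uses. EXPECTED_DIM and check_dim are made-up names; read the expected dimension from your own config:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("embedding-dims")

# Rough sketch: log dimensions on both paths and fail fast when they drift.
EXPECTED_DIM = 1536


def check_dim(vector, where):
    log.info("%s embedding dimension: %d", where, len(vector))
    if len(vector) != EXPECTED_DIM:
        raise ValueError(
            f"{where}: expected {EXPECTED_DIM} dims, got {len(vector)}; "
            "the embedder and the stored index no longer agree"
        )
    return vector

# Usage (embedder is whatever object your memory class uses):
# doc_vec   = check_dim(embedder.embed_documents(["text"])[0], "write")
# query_vec = check_dim(embedder.embed_query("hello"), "read")
```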
Prevention
- Pin one embedding model per project and write it down in config.
- Version your vector stores by embedding model name and dimension, for example kb_text_embedding_3_small_v1 (a naming helper is sketched after this list).
- Rebuild indexes whenever you change:
  - the embedding provider
  - the model name
  - a dependency version that affects embeddings
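A small helper that derives the collection name from the model and its dimension keeps that convention enforced in code rather than by memory. A minimal sketch; the function name and versioning scheme are made up for illustration:

```python
# Minimal sketch: derive the collection name from the embedding model and its
# dimension, so switching models automatically points at a new store.
def collection_name(model: str, dim: int, version: int = 1) -> str:
    slug = model.replace("-", "_").replace(".", "_")
    return f"kb_{slug}_{dim}_v{version}"


print(collection_name("text-embedding-3-small", 1536))
# kb_text_embedding_3_small_1536_v1
```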
If you want fewer production surprises, treat embeddings like schema migrations. Changing them without rebuilding storage is how you get this exact mismatch error in AutoGen.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.