This model was presented in the paper Training Sparse Mixture Of Experts Text Embedding Models.
`nomic-embed-text-v2-moe` is a SoTA multilingual MoE text embedding model that excels at multilingual retrieval:
| Model | Params (M) | Emb Dim | BEIR | MIRACL | Pretrain Data | Finetune Data | Code |
|---|---|---|---|---|---|---|---|
| Nomic Embed v2 | 305 | 768 | 52.86 | 65.80 | ✅ | ✅ | ✅ |
| mE5 Base | 278 | 768 | 48.88 | 62.30 | ❌ | ❌ | ❌ |
| mGTE Base | 305 | 768 | 51.10 | 63.40 | ❌ | ❌ | ❌ |
| Arctic Embed v2 Base | 305 | 768 | 55.40 | 59.90 | ❌ | ❌ | ❌ |
| BGE M3 | 568 | 1024 | 48.80 | 69.20 | ❌ | ✅ | ❌ |
| Arctic Embed v2 Large | 568 | 1024 | 55.65 | 66.00 | ❌ | ❌ | ❌ |
| mE5 Large | 560 | 1024 | 51.40 | 66.50 | ❌ | ❌ | ❌ |
Transformer-based text embedding models have improved their performance on benchmarks like MIRACL and BEIR by increasing their parameter counts. However, this scaling approach introduces significant deployment challenges, including increased inference latency and memory usage. These challenges are particularly severe in retrieval-augmented generation (RAG) applications, where large models’ increased memory requirements constrain dataset ingestion capacity, and their higher latency directly impacts query-time performance. While causal language models have addressed similar efficiency challenges using Mixture of Experts (MoE) architectures, this approach hasn’t been successfully adapted to the general text embedding setting. In this paper, we introduce Nomic Embed v2, the first general purpose MoE text embedding model. Our model outperforms models in the same parameter class on both monolingual and multilingual benchmarks while also maintaining competitive performance with models twice its size. We open-source all code, models, and evaluation data to ensure full reproducibility of our training pipeline at https://github.com/nomic-ai/contrastors.
The model can be used through SentenceTransformers and Transformers.
For best performance on GPU, please install:

```bash
pip install torch transformers einops git+https://github.com/nomic-ai/megablocks.git
```
> [!IMPORTANT]
> The text prompt **must** include a *task instruction prefix*, instructing the model which task is being performed.

Please use `search_query: ` before your queries/questions, and `search_document: ` before your documents.
If using Transformers, make sure to prepend the task instruction prefix.
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nomic-ai/nomic-embed-text-v2-moe")
model = AutoModel.from_pretrained("nomic-ai/nomic-embed-text-v2-moe", trust_remote_code=True)

sentences = ['search_document: Hello!', 'search_document: ¡Hola!']

def mean_pooling(model_output, attention_mask):
    # Average token embeddings, ignoring padding positions via the attention mask.
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

model.eval()
with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)
# torch.Size([2, 768])

similarity = F.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity)
# tensor(0.9118)
```
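To see how the task prefixes pair up at query time, here is a minimal retrieval sketch that reuses the `tokenizer`, `model`, and `mean_pooling` helper from above; the query and document strings are illustrative, not from the original example.

```python
# Illustrative retrieval: queries and documents get different task prefixes.
query = ["search_query: What is the capital of France?"]
docs = [
    "search_document: Paris is the capital of France.",
    "search_document: The Nile is a river in Africa.",
]

with torch.no_grad():
    q_inputs = tokenizer(query, padding=True, truncation=True, return_tensors='pt')
    d_inputs = tokenizer(docs, padding=True, truncation=True, return_tensors='pt')
    q_emb = F.normalize(mean_pooling(model(**q_inputs), q_inputs['attention_mask']), p=2, dim=1)
    d_emb = F.normalize(mean_pooling(model(**d_inputs), d_inputs['attention_mask']), p=2, dim=1)

# With normalized embeddings, cosine similarity is just a dot product.
scores = q_emb @ d_emb.T
print(scores)  # the Paris document should score higher
```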
For truncation/Matryoshka embeddings, you can truncate before applying normalization:

```diff
+ embeddings = embeddings[:, :matryoshka_dim]
embeddings = F.normalize(embeddings, p=2, dim=1)
```
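For example, here is a short sketch, assuming a target dimensionality of 256 (matching the 256-dimension results reported below) and reusing `model_output` and `encoded_input` from the Transformers example:

```python
matryoshka_dim = 256  # assumed target dimensionality, for illustration only

# Pool first, then truncate, then re-normalize the shortened vectors.
pooled = mean_pooling(model_output, encoded_input['attention_mask'])
truncated = F.normalize(pooled[:, :matryoshka_dim], p=2, dim=1)
print(truncated.shape)
# torch.Size([2, 256])
```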
With SentenceTransformers, you can specify the `prompt_name` as either `"query"` or `"passage"`, and the task instruction will be included automatically.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nomic-ai/nomic-embed-text-v2-moe", trust_remote_code=True)
sentences = ["Hello!", "¡Hola!"]
embeddings = model.encode(sentences, prompt_name="passage")
print(embeddings.shape)
# (2, 768)

similarity = model.similarity(embeddings[0], embeddings[1])
print(similarity)
# tensor([[0.9118]])
```
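As a further sketch of how the two prompt names work together for retrieval, reusing the `model` loaded above (the query and passage texts are illustrative):

```python
queries = ["What is the capital of France?"]
passages = [
    "Paris is the capital and largest city of France.",
    "Madrid is the capital of Spain.",
]

# The "query" and "passage" prompts add the search_query:/search_document: prefixes automatically.
query_embeddings = model.encode(queries, prompt_name="query")
passage_embeddings = model.encode(passages, prompt_name="passage")

scores = model.similarity(query_embeddings, passage_embeddings)
print(scores)  # the first passage should score highest for this query
```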
For truncation/Matryoshka embeddings, you can specify `truncate_dim` and use the model similarly:

```python
model = SentenceTransformer("nomic-ai/nomic-embed-text-v2-moe", trust_remote_code=True, truncate_dim=256)
...
```
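A minimal sketch of what the elided usage could look like, reusing the bilingual sentences from earlier (with `truncate_dim=256`, the embeddings come back 256-dimensional):

```python
sentences = ["Hello!", "¡Hola!"]
embeddings = model.encode(sentences, prompt_name="passage")
print(embeddings.shape)
# (2, 256)
```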
Figure: `nomic-embed-text-v2-moe` performance on BEIR and MIRACL compared to other open-weights embedding models.

Figure: `nomic-embed-text-v2-moe` performance on BEIR at 768 dimensions and truncated to 256 dimensions.
Make sure to set `trust_remote_code=True` when loading the model to use our custom architecture implementation.

For more details, please check out the blog post and technical report.
If you find the model, dataset, or training code useful, please cite our work:

```bibtex
@misc{nussbaum2025trainingsparsemixtureexperts,
      title={Training Sparse Mixture Of Experts Text Embedding Models},
      author={Zach Nussbaum and Brandon Duderstadt},
      year={2025},
      eprint={2502.07972},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.07972},
}
```