The Mixtral Large Language Models (LLMs) are a set of pretrained generative Sparse Mixture of Experts models.
Available sizes:
mixtral:8x22b
mixtral:8x7b

To run the 8x22B model:
ollama run mixtral:8x22b
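Once the Ollama server is running and the model has been pulled, Mixtral can also be queried programmatically. The sketch below is a minimal example, assuming the default local endpoint (http://localhost:11434) and its /api/generate route; the prompt text and the generate helper name are illustrative, not part of this page.

```python
# Minimal sketch: request a completion from a locally running Ollama server.
# Assumes Ollama is serving on its default address (http://localhost:11434)
# and that mixtral:8x22b has already been pulled with `ollama pull mixtral:8x22b`.
import json
import urllib.request


def generate(prompt: str, model: str = "mixtral:8x22b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(generate("Explain a sparse Mixture-of-Experts model in one sentence."))
```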
Mixtral 8x22B sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
Mixtral 8x22B comes with the following strengths: