The Mixtral Large Language Models (LLMs) are a set of pretrained generative Sparse Mixture of Experts models.
Available models: mixtral:8x7b, mixtral:8x22b

To run the larger model: ollama run mixtral:8x22b
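Beyond the CLI, a pulled model can also be queried programmatically. Below is a minimal sketch using Ollama's REST `/api/generate` endpoint, assuming a local server on the default port 11434; the prompt text is illustrative only.

```python
# Minimal sketch: query a locally running Ollama server (default port 11434)
# via its /api/generate endpoint. The prompt below is illustrative only.
import json
import urllib.request

payload = json.dumps({
    "model": "mixtral:8x22b",   # or "mixtral:8x7b"
    "prompt": "Explain what a sparse Mixture-of-Experts model is in one sentence.",
    "stream": False,            # return one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```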
Mixtral 8x22B sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
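This is what "sparse" means in practice: a router selects a small subset of experts per token (Mistral describes top-2-of-8 routing for the Mixtral family), so only a fraction of the total parameters are evaluated for any given token. The toy NumPy sketch below illustrates the routing idea only; the dimensions, weights, and expert layers are made up and bear no relation to the real model.

```python
# Toy sketch of the sparse Mixture-of-Experts idea behind Mixtral:
# a router scores 8 experts per token and only the top-2 are evaluated,
# so most expert parameters stay inactive for any given token.
# Dimensions and weights here are illustrative, not the real model's.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

router_w = rng.normal(size=(d_model, n_experts))  # router (gating) weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]  # toy experts

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector x through its top-k experts only."""
    logits = x @ router_w
    top = np.argsort(logits)[-top_k:]                           # indices of the chosen experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()   # softmax over chosen experts only
    # Only the chosen experts run; the remaining experts contribute nothing for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (16,)
```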
Mixtral 8x22B comes with the following strengths: