A set of Mixture of Experts (MoE) models with open weights by Mistral AI, available in 8x7b and 8x22b parameter sizes.




The Mixtral Large Language Models (LLMs) are a set of pretrained generative Sparse Mixture of Experts models.

Sizes

  • mixtral:8x22b
  • mixtral:8x7b

Mixtral 8x22b

ollama run mixtral:8x22b

Mixtral 8x22B sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
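The key idea behind a sparse Mixture of Experts is that a small router picks only a few experts per token, so most parameters stay idle on any given forward pass; that is how 141B total parameters translate into roughly 39B active ones. The PyTorch sketch below illustrates top-2 routing over 8 tiny experts; the class name, layer sizes, and expert structure are illustrative only, not Mixtral's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy sparse MoE layer: each token is routed to the top-k of n experts.

    Mixtral uses 8 experts with top-2 routing; the dimensions here are
    deliberately tiny for illustration.
    """

    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, n_experts, bias=False)  # router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (tokens, dim)
        scores = self.gate(x)                                   # (tokens, n_experts)
        weights, picked = torch.topk(scores, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                    # normalize over chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # which is why the active parameter count is a fraction of the total.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = picked[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = TopKMoE()
    tokens = torch.randn(10, 64)
    print(layer(tokens).shape)  # torch.Size([10, 64])
```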

Mixtral 8x22B comes with the following strengths:

  • It is fluent in English, French, Italian, German, and Spanish
  • It has strong maths and coding capabilities
  • It is natively capable of function calling (see the sketch after this list)
  • Its 64K-token context window allows precise information recall from large documents
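Because the model supports function calling, tool schemas can be passed to it through Ollama's chat API. The sketch below uses the ollama Python package against a local Ollama server with the 8x7b tag pulled; the get_weather tool and its schema are hypothetical placeholders for illustration.

```python
import ollama  # pip install ollama; assumes a local Ollama server with mixtral:8x7b pulled

# Hypothetical tool definition for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = ollama.chat(
    model="mixtral:8x7b",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,
)

# If the model decides to call the tool, the structured call appears
# in the returned message under tool_calls.
print(response["message"])
```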

References

Announcement

HuggingFace