mixtral:8x7b-text-v0.1-q4_K_S


A set of Mixture of Experts (MoE) models with open weights by Mistral AI, available in 8x7b and 8x22b parameter sizes.



45ad2aba6509 · 27GB

llama · 46.7B · Q4_K_S
Apache License Version 2.0, January 2004

Readme

The Mixtral Large Language Models (LLMs) are a set of pretrained generative Sparse Mixture of Experts models.

Sizes

  • mixtral:8x22b
  • mixtral:8x7b

Mixtral 8x22b

ollama run mixtral:8x22b

Mixtral 8x22B sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
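The gap between 141B total and 39B active parameters comes from sparse routing: for each token, a small gating network selects only a few experts to run. A minimal toy sketch of top-k expert routing in NumPy (the expert count of 8 and top-2 routing match Mixtral's published design; everything else here is illustrative, not the actual implementation):

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Route input x to the top-k experts by gate score.

    Only the selected experts are evaluated, so the number of
    active parameters per token is far below the total.
    """
    logits = x @ gate_w                       # one gate score per expert
    topk = np.argsort(logits)[-k:]            # indices of the k best experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()                  # softmax over selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, n_experts = 16, 8
# Each "expert" is just a linear map in this sketch.
experts = [(lambda W: (lambda x: x @ W))(rng.normal(size=(d, d)))
           for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))

y = moe_forward(rng.normal(size=d), experts, gate_w, k=2)
print(y.shape)  # → (16,)
```

With k=2 of 8 experts active, roughly a quarter of the expert weights participate in any single forward pass, which is the mechanism behind the cost efficiency claimed above.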

Mixtral 8x22B comes with the following strengths:

  • It is fluent in English, French, Italian, German, and Spanish
  • It has strong maths and coding capabilities
  • It is natively capable of function calling
  • 64K tokens context window allows precise information recall from large documents
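Since the model is natively capable of function calling, it can be invoked through Ollama's chat API with a list of tool schemas. The sketch below only builds the request payload; the `get_weather` tool, its parameter schema, and the request shape are illustrative assumptions, not part of this model card:

```python
import json

# Hedged sketch of a function-calling request body for Ollama's /api/chat
# endpoint. The "get_weather" tool is hypothetical and for illustration only.
payload = {
    "model": "mixtral:8x22b",
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": False,
}

print(json.dumps(payload, indent=2))
```

When the model decides a tool is needed, the response carries the chosen function name and arguments instead of plain text, and the caller executes the function and feeds the result back as a follow-up message.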

References

Announcement

HuggingFace