A set of Mixture of Experts (MoE) models with open weights, released by Mistral AI in 8x7b and 8x22b parameter sizes.
492.4K Pulls Updated 4 months ago
e8479ee1cb51 · 80GB
model
arch llama · parameters 141B · quantization Q4_0 · 80GB
params
{
  "stop": [
    "[INST]",
    "[/INST]"
  ]
}
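The stop sequences above are baked into the Modelfile, but they can also be overridden per request via the `options` field of Ollama's REST API. A minimal sketch, assuming a local Ollama server on the default port (nothing is actually sent here; the prompt text is made up for illustration):

```python
import json

# Build a request body for Ollama's /api/generate endpoint that mirrors the
# "stop" parameters from the Modelfile shown above. Hypothetical prompt;
# sending it would require a running Ollama server at localhost:11434.
payload = {
    "model": "mixtral:8x7b",
    "prompt": "Explain mixture-of-experts routing in one sentence.",
    "stream": False,
    "options": {"stop": ["[INST]", "[/INST]"]},
}

body = json.dumps(payload)
print(body)
```

The resulting JSON could then be POSTed with, e.g., `curl http://localhost:11434/api/generate -d @-`.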
template (truncated)
{{- if .Messages }}
{{- range $index, $_ := .Messages }}
{{- if eq .Role "user" }}
{{- if and (or (e
license
Apache License
Version 2.0, January 2004
Readme
The Mixtral Large Language Models (LLMs) are a set of pretrained generative Sparse Mixture-of-Experts models.
Sizes
mixtral:8x22b
mixtral:8x7b
Mixtral 8x22b
ollama run mixtral:8x22b
Mixtral 8x22B sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
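The "39B active out of 141B" figure comes from sparse routing: for each token, a router scores the experts and only the top-scoring few are evaluated. A toy sketch of top-2 routing over 8 experts (illustrative numbers only; this is not Mixtral's implementation, where each expert is a feed-forward network, not a scalar):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def top2_route(router_logits, expert_outputs):
    # Keep only the two highest-scoring experts; mix their outputs using the
    # renormalized softmax of their router logits. All other experts are
    # skipped entirely, which is what makes the model "sparse".
    idx = sorted(range(len(router_logits)),
                 key=lambda i: router_logits[i], reverse=True)[:2]
    weights = softmax([router_logits[i] for i in idx])
    mixed = sum(w * expert_outputs[i] for w, i in zip(weights, idx))
    return mixed, idx

# Toy example: 8 experts, each producing a scalar "output" for one token.
logits = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 0.2]
outputs = [float(i) for i in range(8)]

mixed, chosen = top2_route(logits, outputs)
print(chosen)  # only these two experts are actually evaluated
```

Here experts 1 and 4 win the routing, so only 2 of the 8 experts do any work for this token; scaled up, that is why only a fraction of Mixtral's total parameters are active per token.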
Mixtral 8x22B comes with the following strengths:
- It is fluent in English, French, Italian, German, and Spanish
- It has strong maths and coding capabilities
- It is natively capable of function calling
- Its 64K-token context window allows precise information recall from large documents
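Since native function calling is one of the listed strengths, here is a sketch of how a tool definition is attached to a request against Ollama's `/api/chat` endpoint. The `tools` field follows the OpenAI-style schema Ollama's API accepts; the weather tool itself is hypothetical, and nothing is sent here:

```python
import json

# Sketch of a function-calling request for Ollama's /api/chat endpoint.
# "get_current_weather" is a made-up tool for illustration; a running server
# would respond with a tool_calls entry in the assistant message when the
# model decides to invoke it.
request = {
    "model": "mixtral:8x7b",
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "stream": False,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",  # hypothetical tool
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string",
                                 "description": "City name"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(request))
```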