A high-quality Mixture of Experts (MoE) model with open weights, built on Mistral AI's 7B architecture.
197 Pulls · Updated 7 months ago
c50d8f6bf633 · 8.9GB
model
arch llama · parameters 12.9B · quantization Q5_K_S · 8.9GB
params · 30B
{
  "stop": [
    "[INST]",
    "[/INST]"
  ]
}
template · 44B
"[INST] {{ .System }} {{ .Prompt }} [/INST]"
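
The stop parameters and prompt template above can be reproduced in an Ollama Modelfile. A minimal sketch, assuming the Q5_K_S GGUF has been downloaded locally (the filename and tag below are illustrative, not part of this page):

# Modelfile: build a local tag from the downloaded GGUF quant
FROM ./mixtral_7bx2_moe.Q5_K_S.gguf

# Instruct-style prompt format used by the model
TEMPLATE "[INST] {{ .System }} {{ .Prompt }} [/INST]"

# Stop generation at the instruction markers
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"

To use it: ollama create mixtral-7bx2-moe -f Modelfile, then ollama run mixtral-7bx2-moe (the tag name is arbitrary).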
Readme
The Mixtral-7Bx2 Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
Hugging Face: https://huggingface.co/ManniX-ITA/Mixtral_7Bx2_MoE-GGUF