
A high-quality Mixture of Experts (MoE) model with open weights by Mistral AI.

Readme

The Mixtral-7Bx2 Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model.

Model weights on Hugging Face: https://huggingface.co/ManniX-ITA/Mixtral_7Bx2_MoE-GGUF
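
Since the repository above ships GGUF quantizations, one way to try the model locally is with `llama-cpp-python`. The sketch below is illustrative only: the exact `.gguf` filename and the context/GPU settings are assumptions, so check the repository's file listing and your hardware before running it.

```python
# Hypothetical sketch: download one quantized GGUF file from the repo above
# and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The .gguf filename is an assumption; pick an actual file from the repo.
model_path = hf_hub_download(
    repo_id="ManniX-ITA/Mixtral_7Bx2_MoE-GGUF",
    filename="Mixtral_7Bx2_MoE.Q4_K_M.gguf",
)

# Load the model; n_ctx and n_gpu_layers are illustrative defaults.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=0)

# Simple text completion.
output = llm("Explain what a sparse Mixture of Experts model is.", max_tokens=128)
print(output["choices"][0]["text"])
```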