Credits to the user https://huggingface.co/zhengr for the model. I chose to use this model for my local Ollama setup because of its leaderboard score (shown below).
https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF
https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1
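For reference, here is a minimal sketch of how the GGUF build could be loaded into a local Ollama install. The quantization filename and the `mixtao` model name are assumptions; substitute whatever file you actually download from the GGUF repo above.

```
# Modelfile (sketch) -- point FROM at the downloaded GGUF file
# (filename below is an assumed Q4_K_M quantization)
FROM ./MixTAO-7Bx2-MoE-v8.1.Q4_K_M.gguf

# Optional sampling default; tune to taste
PARAMETER temperature 0.7
```

Then register and run it locally with `ollama create mixtao -f Modelfile` followed by `ollama run mixtao`.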
MixTAO-7Bx2-MoE is a Mixture of Experts (MoE) model. It is mainly used for large language model technology experiments, with the goal that successive iterations will eventually yield a high-quality large language model.
Detailed evaluation results are available on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 77.50 |
| AI2 Reasoning Challenge (25-shot) | 73.81 |
| HellaSwag (10-shot) | 89.22 |
| MMLU (5-shot) | 64.92 |
| TruthfulQA (0-shot) | 78.57 |
| Winogrande (5-shot) | 87.37 |
| GSM8k (5-shot) | 71.11 |