frob/mixtao:7bx2-moe-v8.1-q8_0


ollama run frob/mixtao:7bx2-moe-v8.1-q8_0
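Beyond the interactive CLI, the model can also be queried programmatically through Ollama's local REST API. The sketch below is a minimal example, assuming the Ollama server is running on its default port (11434) and the tag above has already been pulled; it uses only the standard /api/generate endpoint and the Python standard library.

```python
# Minimal sketch (not an official client): query the model through
# Ollama's local REST API using only the Python standard library.
# Assumes `ollama serve` is running on the default port 11434 and the
# tag has already been pulled with `ollama run` or `ollama pull`.
import json
import urllib.request

payload = {
    "model": "frob/mixtao:7bx2-moe-v8.1-q8_0",
    "prompt": "Explain what a Mixture of Experts model is in two sentences.",
    "stream": False,  # return a single JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))

print(result["response"])
```

With "stream" set to False the server returns one JSON object whose "response" field holds the full completion; omitting it yields a stream of partial results instead.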

Details

1 week ago

5900d7826ac7 · 14GB · llama · 12.9B · Q8_0

Template: {{ .System }} ### Instruction: {{ .Prompt }} ### Response:
System: You are a helpful AI assistant
Params: { "stop": [ "### Instruction:", "### Response:", "</s>" ] }

Readme

MixTAO-7Bx2-MoE

MixTAO-7Bx2-MoE is a Mixture of Experts (MoE) model. It is mainly used for experiments with large-model techniques; successive, steadily improving iterations are intended to eventually produce a high-quality large language model.

Source: https://huggingface.co/mixtao/MixTAO-7Bx2-MoE-v8.1

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                               Value
Avg.                                 77.50
AI2 Reasoning Challenge (25-Shot)    73.81
HellaSwag (10-Shot)                  89.22
MMLU (5-Shot)                        64.92
TruthfulQA (0-Shot)                  78.57
Winogrande (5-Shot)                  87.37
GSM8k (5-Shot)                       71.11