
🤗 zhengr/MixTAO-7Bx2-MoE-v8.1


5ce295349de7 · 7.8GB

llama · 12.9B · Q4_K_M

Parameters (truncated preview): { "num_ctx": 32768, "stop": [ "### Response:", "### Instruction:", "

Template (truncated preview): {{ if .System }}### Instruction: {{ .System }}{{ end }} {{ if .Prompt }}### Input: {{ .Prompt }}{{ e
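For anyone recreating this locally, below is a minimal Modelfile sketch based on the truncated parameter and template previews above. The GGUF filename, the `### Response:` tail of the template, and the end of the stop-token list are assumptions, since those parts are cut off on this page.

```
# Hypothetical Modelfile sketch (not necessarily the exact one behind this tag).
# The FROM path is a placeholder for the downloaded Q4_K_M GGUF.
FROM ./mixtao-7bx2-moe-v8.1.Q4_K_M.gguf

# Context window and stop tokens as shown in the (truncated) parameter preview.
PARAMETER num_ctx 32768
PARAMETER stop "### Response:"
PARAMETER stop "### Instruction:"

# Alpaca-style prompt template; the "### Response:" tail is assumed,
# because the preview above is cut off mid-template.
TEMPLATE """{{ if .System }}### Instruction: {{ .System }}{{ end }}
{{ if .Prompt }}### Input: {{ .Prompt }}{{ end }}
### Response:
"""
```

Saving this as `Modelfile` next to the GGUF and running `ollama create mixtao-v8.1 -f Modelfile` would build an equivalent local tag; the name `mixtao-v8.1` is only an example.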

Readme

zhengr/MixTAO-7Bx2-MoE-v8.1

Credit to https://huggingface.co/zhengr for the original model. I chose it for my local Ollama setup because of its Open LLM Leaderboard score (given below).

Original GGUF

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 77.50 |
| AI2 Reasoning Challenge (25-Shot) | 73.81 |
| HellaSwag (10-Shot) | 89.22 |
| MMLU (5-Shot) | 64.92 |
| TruthfulQA (0-shot) | 78.57 |
| Winogrande (5-shot) | 87.37 |
| GSM8k (5-shot) | 71.11 |