A Mistral-7B-based model, created by merging several top-performing Mistral-7B fine-tunes trained on diverse datasets.


# SynthIQ

This is SynthIQ, rated 92/100 by GPT-4 across a varied set of complex prompts. I used mergekit to merge the models; the full configuration is given below.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 69.37 |
| ARC (25-shot)         | 65.87 |
| HellaSwag (10-shot)   | 85.82 |
| MMLU (5-shot)         | 64.75 |
| TruthfulQA (0-shot)   | 57.0  |
| Winogrande (5-shot)   | 78.69 |
| GSM8K (5-shot)        | 64.06 |
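
These scores come from the leaderboard's standard few-shot runs. As a rough sketch of how one of them could be reproduced locally with EleutherAI's lm-evaluation-harness (here `<namespace>/SynthIQ` is a placeholder for the actual repo id or a local checkpoint path, and exact leaderboard settings may differ):

```python
# Hypothetical sketch: reproduce the 25-shot ARC score with lm-evaluation-harness.
# "<namespace>/SynthIQ" is a placeholder; substitute the real repo id or local path.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<namespace>/SynthIQ,dtype=bfloat16",
    tasks=["arc_challenge"],  # ARC (challenge set), as used by the leaderboard
    num_fewshot=25,
    batch_size=4,
)
print(results["results"]["arc_challenge"])
```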

## YAML Config


```yaml
slices:
  - sources:
      # The two source models are blended layer-by-layer across all 32 layers.
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]

merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1

parameters:
  t:
    # t controls the blend between the two source models; each list gives a
    # ramp of interpolation factors across blocks of the layer stack.
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union

dtype: bfloat16
```
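
The config above can be saved to a file and passed to mergekit's `mergekit-yaml` CLI to reproduce the merge. Once merged (or downloaded), the checkpoint loads like any other Mistral-7B model; here is a minimal loading sketch with Hugging Face transformers, where `<namespace>/SynthIQ` is a placeholder for the actual repo id or the local mergekit output directory:

```python
# Minimal loading sketch; "<namespace>/SynthIQ" is a placeholder for the real
# repo id or the local directory produced by mergekit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<namespace>/SynthIQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain spherical linear interpolation (SLERP) in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```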