stuehieyr/synthiq:latest


A Mistral 7B based model: a merge of several top-performing Mistral 7B based models trained on diverse datasets.


196bbc852798 · 4.4GB · llama · 7.24B · Q4_K_M

System: You are SynthIQ, a constantly learning AI assistant who strives to be insightful, engaging, and help…
Params: { "num_ctx": 8092, "stop": ["<|im_end|>", "<|end_of_turn|>", "</s>"] }
Template: <|im_start|>system {{ .System }} <|im_end|> <|im_start|>user {{ .Prompt }} <|im_end|> <|im_start|>assistant
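
To run the model locally with Ollama, a minimal sketch (the prompt text is arbitrary, and the options in the API call simply mirror the parameters shown above):

# Pull the model and chat with it
ollama pull stuehieyr/synthiq
ollama run stuehieyr/synthiq "Summarize what a SLERP model merge does."

# Or call Ollama's local REST API, setting the context window explicitly
curl http://localhost:11434/api/generate -d '{
  "model": "stuehieyr/synthiq",
  "prompt": "Summarize what a SLERP model merge does.",
  "options": { "num_ctx": 8092 },
  "stream": false
}'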

Readme

SynthIQ

This is SynthIQ, rated 92 out of 100 by GPT-4 across varied complex prompts. I used mergekit to merge the source models; the full configuration is in the YAML below.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                 Value
Avg.                   69.37
ARC (25-shot)          65.87
HellaSwag (10-shot)    85.82
MMLU (5-shot)          64.75
TruthfulQA (0-shot)    57.00
Winogrande (5-shot)    78.69
GSM8K (5-shot)         64.06
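
Scores like ARC (25-shot) can be re-checked locally with EleutherAI's lm-evaluation-harness, which the Open LLM Leaderboard uses. A minimal sketch, assuming the merged weights are published on Hugging Face (the repo id below is a placeholder, not confirmed by this page):

# pip install lm-eval
# NOTE: "your-org/SynthIQ" is a placeholder repo id; substitute the real one.
lm_eval --model hf \
  --model_args pretrained=your-org/SynthIQ,dtype=bfloat16 \
  --tasks arc_challenge \
  --num_fewshot 25 \
  --batch_size auto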

YAML Config


slices:
  - sources:
      # both source models contribute all 32 transformer layers
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]

merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1

parameters:
  t: # interpolation weight: 0 and 1 map to the two source models
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1] # schedule across layer depth for attention tensors
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0] # inverse schedule for MLP tensors
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union # build the tokenizer from the union of both vocabularies

dtype: bfloat16
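
To reproduce the merge, save the config above (e.g. as synthiq.yml) and run mergekit's CLI. A minimal sketch, assuming a CUDA-capable machine (the output directory name is arbitrary):

# pip install mergekit
mergekit-yaml synthiq.yml ./synthiq-merged --cuda

The --cuda flag only moves the merge computation onto the GPU; omitting it performs the merge on CPU.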