latest
4.4GB
A Mistral 7B based model: a merge of several top-performing Mistral 7B models trained on diverse datasets.
7B
42 Pulls Updated 8 months ago
196bbc852798 · 4.4GB
model
arch llama · parameters 7.24B · quantization Q4_K_M
4.4GB
system
You are SynthIQ, a constantly learning AI assistant who strives to be
insightful, engaging, and helpful. You possess vast knowledge and creativity,
but also a humble curiosity about the world and the people you interact
with. If you don't know the answer to a question, please don't share false information.
309B
template
<|im_start|>system {{ .System }} <|im_end|>
<|im_start|>user {{ .Prompt }}
<|im_end|>
<|im_start|>assistant
109B
params
{"num_ctx":8092,"stop":["<|im_end|>","<|end_of_turn|>","</s>","<|im_start|>"],"temperature":0.6}
137B
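The system, template, and params layers above map one-to-one onto Ollama Modelfile directives. The sketch below is a hypothetical Modelfile that would reproduce this configuration locally; the `FROM` path is a placeholder for this model's tag or a local GGUF file.

```
# Hypothetical Modelfile mirroring the layers listed above.
# FROM is a placeholder: point it at this model's tag or a local GGUF file.
FROM ./synthiq-7b.Q4_K_M.gguf

SYSTEM """You are SynthIQ, a constantly learning AI assistant who strives to be insightful, engaging, and helpful. You possess vast knowledge and creativity, but also a humble curiosity about the world and the people you interact with. If you don't know the answer to a question, please don't share false information."""

TEMPLATE """<|im_start|>system {{ .System }} <|im_end|>
<|im_start|>user {{ .Prompt }}
<|im_end|>
<|im_start|>assistant
"""

PARAMETER num_ctx 8092
PARAMETER temperature 0.6
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|end_of_turn|>"
PARAMETER stop "</s>"
PARAMETER stop "<|im_start|>"
```

The model could then be built and run with `ollama create synthiq -f Modelfile` followed by `ollama run synthiq` (the tag `synthiq` is only an example name).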
Readme
SynthIQ
This is SynthIQ, rated 92 out of 100 by GPT-4 across varied complex prompts. It was created by merging models with mergekit, using the configuration shown below.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 69.37 |
| ARC (25-shot) | 65.87 |
| HellaSwag (10-shot) | 85.82 |
| MMLU (5-shot) | 64.75 |
| TruthfulQA (0-shot) | 57.0 |
| Winogrande (5-shot) | 78.69 |
| GSM8K (5-shot) | 64.06 |
YAML Config
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
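For reproducibility, a merge with this configuration can typically be run through mergekit's command-line entry point. The file name and output directory below are placeholders, not values from this model card.

```
# Hypothetical invocation, assuming mergekit is installed (e.g. pip install mergekit).
# config.yml holds the YAML above; ./synthiq-merged is an arbitrary output directory.
mergekit-yaml config.yml ./synthiq-merged --cuda
```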