Gonzo-Chat-7B is a merged LLM based on Mistral v0.1 with an 8192-token context length that likes to chat, roleplay, work with agents, do some light programming, and then beat the brakes off you in the back alley…
The BEST Open Source 7B Street Fighting LLM of 2024!!!
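If you want to spar with it yourself, here is a minimal chat sketch using the Hugging Face `transformers` library. It assumes the weights are available as a standard `transformers` checkpoint; the `model_id` below is a placeholder, not an official repo id.

```python
# Minimal chat sketch (assumes the weights are available as a standard
# transformers checkpoint; "model_id" below is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/Gonzo-Chat-7B"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Gimme a pep talk before a street fight."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```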

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 66.63 |
| AI2 Reasoning Challenge (25-Shot) | 65.02 |
| HellaSwag (10-Shot) | 85.40 |
| MMLU (5-Shot) | 63.75 |
| TruthfulQA (0-shot) | 60.23 |
| Winogrande (5-shot) | 77.74 |
| GSM8k (5-shot) | 47.61 |
All contestants fought using the same LLM-Colosseum default settings. Each contestant fought 25 rounds against every other contestant.
https://github.com/OpenGenerativeAI/llm-colosseum
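As a rough sketch of that tournament structure (illustrative pairing logic only, not code from llm-colosseum, and the roster below is a placeholder):

```python
# Round-robin schedule sketch: every contestant fights 25 rounds against
# every other contestant. Illustrative only; not llm-colosseum's code.
from itertools import combinations

contestants = ["gonzo-chat-7b", "contestant-b", "contestant-c"]  # placeholder roster
ROUNDS_PER_MATCHUP = 25

schedule = [
    (a, b, round_no)
    for a, b in combinations(contestants, 2)
    for round_no in range(1, ROUNDS_PER_MATCHUP + 1)
]
print(f"{len(schedule)} total rounds")  # C(n, 2) * 25 = 75 for this roster
```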


This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method using eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO as a base.
The following models were included in the merge:
* Nondzu/Mistral-7B-Instruct-v0.2-code-ft
* NousResearch/Nous-Hermes-2-Mistral-7B-DPO
* cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
    # No parameters necessary for base model
  - model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
    parameters:
      density: 0.53
      weight: 0.4
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    parameters:
      density: 0.53
      weight: 0.3
  - model: Nondzu/Mistral-7B-Instruct-v0.2-code-ft
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
parameters:
  int8_mask: true
dtype: bfloat16
```
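To reproduce the merge, save the configuration above as e.g. `config.yml` and feed it to mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./Gonzo-Chat-7B` (the output directory name is your choice, and this assumes a current mergekit install).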