MoE model combining 4 of the best 7B models
2,241 Pulls Updated 13 months ago
8c32d3c4bb04 · 15GB
model
arch llama · parameters 24.2B · quantization Q4_K_M · 15GB
params
{
  "stop": [
    "<|im_end|>",
    "<|im_start|>"
  ]
}
59B
template
<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
111B
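
For reference, the params and template blobs above correspond to the kind of Modelfile sketched below. This is a minimal sketch, not the exact file used to build this model: the GGUF path and the tag in the comments are placeholders.

# Minimal Modelfile sketch; the GGUF path is a placeholder.
FROM ./model.Q4_K_M.gguf

# ChatML-style prompt template, matching the template blob above.
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

# Stop sequences from the params blob, so generation ends at ChatML turn boundaries.
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"

# Build and run with a tag of your choice (placeholder shown):
#   ollama create my-4x7b -f Modelfile
#   ollama run my-4x7b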
Readme
This is a medium-sized MoE model that combines openchat/openchat-3.5-1210, beowolx/CodeNinja-1.0-OpenChat-7B, maywell/PiVoT-0.1-Starling-LM-RP, and WizardLM/WizardMath-7B-V1.1.
I find this model to be very good: it is lighter weight than Mixtral and, in my experience, performs better.
This is V2 of the model, with quants built from the most recent upstream updates (The Bloke’s quants are outdated).
Feel free to contact me if you have any problems:
Reddit: /u/spooknik | Discord: .spooknik