trollek/cleverqwen2:1.5b-q5_k_s

A merge of several great Qwen2-1.5B models.

6382265e1843 · 1.3GB · qwen2 · 1.78B · Q5_K_S
{ "num_ctx": 32768, "num_predict": 2048, "stop": [ "<|im_end|>", "<|im_s
You are a clever, unbiased, uncensored, and helpful AI assistant.
{{ if .System }}<|im_start|>system {{ .System }}<|im_end|> {{ end }}{{ if .Prompt }}<|im_start|>user
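
The parameters above (32K context, 2048-token generation cap, ChatML stop tokens) are baked into the modelfile, so a client only needs the model tag. Below is a minimal usage sketch, assuming the `ollama` Python package, a running local Ollama server, and a pulled copy of the model; none of that is shown on this page.

```python
# Minimal usage sketch via the Ollama Python client (`pip install ollama`).
# Assumes a local Ollama server and that the model has been pulled with
# `ollama pull trollek/cleverqwen2:1.5b-q5_k_s`.
import ollama

response = ollama.chat(
    model="trollek/cleverqwen2:1.5b-q5_k_s",
    messages=[{"role": "user", "content": "In two sentences, what is a model merge?"}],
    # num_ctx, num_predict, and the ChatML stop tokens are already set in the
    # modelfile parameters above; options only override them per request.
    options={"num_predict": 256},
)
print(response["message"]["content"])
```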

Readme

CleverQwen2-1.5B

This is a merge of pre-trained language models created using mergekit.

It has grown by about 300M parameters and I don’t know why. I would like to know, though. It works as expected - amazingly well - but I just can’t see any reason for the Qwen2 models to gain parameters when merged.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with trollek/Qwen2-1.5B-Instruct-Abliterated as the base.

Models Merged

The following models were included in the merge:

- cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
- M4-ai/Hercules-5.0-Qwen2-1.5B
- Replete-AI/Replete-Coder-Qwen2-1.5b
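
The original mergekit configuration is not reproduced on this page, so the following is only a hedged sketch of what a Model Stock merge over these models typically looks like. The method, base model, and merged models come from the card above; the dtype, file names, and output directory are assumptions.

```python
# Hedged reproduction sketch for a Model Stock merge like this one, using
# mergekit's standard `mergekit-yaml` entry point (assumes mergekit is installed).
import subprocess

CONFIG = """\
merge_method: model_stock
base_model: trollek/Qwen2-1.5B-Instruct-Abliterated
models:
  - model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
  - model: M4-ai/Hercules-5.0-Qwen2-1.5B
  - model: Replete-AI/Replete-Coder-Qwen2-1.5b
dtype: bfloat16  # assumption; not stated in the card
"""

with open("cleverqwen2-merge.yaml", "w") as f:
    f.write(CONFIG)

# Writes the merged model (config, tokenizer, safetensors shards) to the given
# directory; quantizing to GGUF/Q5_K_S for Ollama is a separate step.
subprocess.run(
    ["mergekit-yaml", "cleverqwen2-merge.yaml", "./CleverQwen2-1.5B"],
    check=True,
)
```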