
A new small reasoning model fine-tuned from Qwen2.5-3B-Instruct, provided as I-quant models.


c70c9f60d82d · 2.4GB

qwen2 · 3.4B · Q5_K_S
Template: {{- range $i, $_ := .Messages }} {{- $last := eq (len (slice $.Messages $i)) 1 -}} <|im_start|>{{ .R…
System prompt: You are a helpful assistant.
License: Qwen Research License Agreement (release date: September 19, 2024)

Readme


A new model fine-tuned from the Qwen2.5-3B-Instruct model.

  • Quantized from fp32
  • Calibrated with an importance matrix (i-matrix) using calibration_datav3.txt

SmallThinker is designed for the following use cases:

  • Edge Deployment: Its small size makes it ideal for deployment on resource-constrained devices.
  • Draft Model for QwQ-32B-Preview: SmallThinker can serve as an efficient draft model for the larger QwQ-32B-Preview model, yielding a 70% decoding speedup.
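A draft model speeds up generation through speculative decoding: the small model proposes several tokens cheaply and the large model verifies them. The toy sketch below (all functions invented; a real system verifies the whole proposal in one batched forward pass rather than token by token) shows why the output is guaranteed identical to running the target model alone:

```python
# Minimal greedy speculative decoding over toy "models" (functions that
# map a token list to the next token). The target model's output is
# preserved exactly; the draft only changes how fast it is produced.

def speculative_decode(target, draft, prompt, n_tokens, k=4):
    """Generate n_tokens greedily, checking up to k draft tokens per round."""
    out = list(prompt)
    generated = 0
    while generated < n_tokens:
        # 1. Draft proposes up to k tokens cheaply.
        proposal, ctx = [], list(out)
        for _ in range(min(k, n_tokens - generated)):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target verifies; accept the longest matching prefix.
        #    (Real systems score all k positions in a single forward pass.)
        accepted = 0
        for t in proposal:
            if target(out) == t:
                out.append(t)
                accepted += 1
            else:
                break
        # 3. On a mismatch, fall back to one token from the target.
        if accepted < len(proposal):
            out.append(target(out))
            accepted += 1
        generated += accepted
    return out[len(prompt):]

# Toy models: target counts mod 10; draft agrees except at some contexts.
target = lambda ctx: (ctx[-1] + 1) % 10
draft = lambda ctx: (ctx[-1] + 1) % 10 if len(ctx) % 5 else 0
spec = speculative_decode(target, draft, [1, 2, 3], 8)
```

When the draft agrees often (as a distilled sibling of the target tends to), most rounds accept several tokens per expensive target pass, which is where the reported speedup comes from.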

Achieving reasoning capability requires generating long chains of thought (CoT). Starting from QwQ-32B-Preview, the authors therefore used various synthetic-data techniques (such as PersonaHub) to create the QWQ-LONGCOT-500K dataset. Compared with similar datasets, over 75% of its samples have outputs exceeding 8K tokens. To encourage research in the open-source community, the dataset has also been made publicly available.
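The long-output criterion described above can be sketched as a simple filter. The whitespace token count and all names here are placeholders for illustration, not the actual QWQ-LONGCOT-500K pipeline, which would use the model's tokenizer:

```python
# Hedged sketch: keep samples whose OUTPUT is long enough to hold an
# extended chain of thought. Whitespace splitting stands in for a real
# tokenizer; the 8K threshold mirrors the figure quoted in the text.

MIN_OUTPUT_TOKENS = 8000

def count_tokens(text):
    return len(text.split())  # placeholder for a real tokenizer

def long_cot_fraction(samples, threshold=MIN_OUTPUT_TOKENS):
    """Fraction of samples whose output exceeds the token threshold."""
    long_count = sum(1 for s in samples
                     if count_tokens(s["output"]) > threshold)
    return long_count / len(samples)

samples = [
    {"output": "step " * 9000},   # long chain-of-thought sample
    {"output": "short answer"},   # ordinary short completion
]
frac = long_cot_fraction(samples)
```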

References

Hugging Face