
An attempt to compress Qwen3.5 into 500M and 1.5B parameters.

ollama run reecdev/tiny3.5:500m
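Beyond the CLI, the model can be queried programmatically through Ollama's REST API. A minimal sketch in Python, assuming an Ollama server is running on the default port (11434) and the model has already been pulled:

```python
# Minimal sketch of calling a local Ollama server's /api/generate endpoint.
# Assumes `ollama serve` is running and `reecdev/tiny3.5:500m` is pulled.
import json
import urllib.request


def build_payload(prompt: str, model: str = "reecdev/tiny3.5:500m") -> dict:
    # stream=False asks the server for one JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For example, generate("Write a haiku about small language models.") returns the model's completion as a string once the server is up.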

Applications

Claude Code: ollama launch claude --model reecdev/tiny3.5:500m
OpenClaw: ollama launch openclaw --model reecdev/tiny3.5:500m
Hermes Agent: ollama launch hermes --model reecdev/tiny3.5:500m
Codex: ollama launch codex --model reecdev/tiny3.5:500m
OpenCode: ollama launch opencode --model reecdev/tiny3.5:500m

Tiny3.5

What is this?

Tiny3.5 is a community effort to create tiny, more efficient versions of Qwen3.5. Its strengths include very low inference latency, minimal overthinking, and the ability to run on much weaker hardware. Keep in mind, however, that Tiny3.5 has fewer than 2B parameters, so don't expect near-perfect scores on every benchmark.

How is this better than Qwen3.5?

Tiny3.5 uses several techniques to achieve better efficiency than Qwen3.5 in many scenarios. We use multi-shot distillation to filter out pointless reasoning loops and improve the overall quality of responses.
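The card doesn't spell out how looping traces are detected during distillation, but the idea can be sketched with a simple heuristic. The function names and the repeated-sentence criterion below are illustrative assumptions, not the actual pipeline:

```python
# Hedged sketch: drop distillation samples whose reasoning trace keeps
# repeating itself. The repeated-sentence heuristic and the "reasoning"
# field name are assumptions for illustration only.
def has_reasoning_loop(trace: str, max_repeats: int = 2) -> bool:
    """Return True if any sentence appears more than `max_repeats` times."""
    counts: dict[str, int] = {}
    for sentence in (s.strip() for s in trace.split(".")):
        if not sentence:
            continue
        counts[sentence] = counts.get(sentence, 0) + 1
        if counts[sentence] > max_repeats:
            return True
    return False


def filter_samples(samples: list[dict]) -> list[dict]:
    """Keep only samples whose reasoning trace does not loop."""
    return [s for s in samples if not has_reasoning_loop(s["reasoning"])]
```

A real pipeline would likely combine several signals (n-gram repetition, trace length, answer correctness); this shows only the filtering shape.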

Can I create my own model using the Tiny3.5 dataset?

Absolutely! Our distillation dataset is open source; the code used to create it, along with a copy of the dataset, is available on our GitHub: https://github.com/reecdev/tiny3.5

See Hugging Face and GitHub for more details:

https://github.com/reecdev/tiny3.5/

https://huggingface.co/reecdev/Tiny3.5-Coder-500M/