2,028 Pulls · Updated 4 days ago

Qwen3.5-Claude-4.6-Opus-Reasoning-Distilled-v2 (https://huggingface.co/Jackrong/), with vision properly merged and efficiently quantized.

vision · tools · thinking · 4b · 9b · 27b
ollama run fredrezones55/qwen3.5-opus:27b

Details

914789b72c24 · 17GB · qwen35 · 27.4B · Q4_K_M
Parameters: { "presence_penalty": 1.5, "temperature": 1, "top_k": 20, "top_p": 0.95 }

Template: {{ .Prompt }}
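For reference, the blob, parameters, and template in the Details section above correspond roughly to a Modelfile like the one below. This is a hedged sketch, not the actual build recipe: the GGUF filename is a placeholder, and (as noted in the readme) these models were actually assembled by editing blobs directly rather than via a clean `ollama create`.

```
# Hypothetical Modelfile reconstructing this model from a local GGUF.
# The path below is a placeholder, not the actual blob used.
FROM ./qwen3.5-opus-27b-Q4_K_M.gguf

# Sampling defaults listed in the Details section
PARAMETER presence_penalty 1.5
PARAMETER temperature 1
PARAMETER top_k 20
PARAMETER top_p 0.95

TEMPLATE """{{ .Prompt }}"""
```

With a Modelfile like this, `ollama create <name> -f Modelfile` would register the model locally.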

Readme

Qwen3.5 is a base model with thinking, vision, and tool calling; Qwen3.5 fine-tunes for Ollama should carry those same traits, but until now they haven’t. Building on R&D from porting the qwen3-VL Jan fine-tunes (which required fixing some issues in the pipeline), these Qwen3.5 models have gone through a transformation to properly handle the quirks of Ollama’s “New Engine”. [I have not, however, noticed any structural issues {thinking breaks}; that would be more of a configuration problem anyway.] {I do not have 64GB of RAM (in this economy!?) to run the HF-to-Ollama conversion via ‘ollama create’, so I had to get handsy with the model blobs.}


I still need to work out a better config for the parameters; I’m not sure the defaults are right…
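If the defaults turn out to be off, they can be overridden per request without rebuilding the model, via the `options` field of Ollama’s HTTP API. A minimal sketch in Python (the model name is this page’s 27B tag, the prompt is illustrative, and it assumes an Ollama server at the default localhost:11434):

```python
import json
from urllib import request

# Sampling options mirroring this model's current defaults; tweak these
# to experiment without editing the Modelfile or the blobs.
options = {
    "presence_penalty": 1.5,
    "temperature": 1,
    "top_k": 20,
    "top_p": 0.95,
}

# Request body for POST http://localhost:11434/api/generate
payload = {
    "model": "fredrezones55/qwen3.5-opus:27b",
    "prompt": "Describe the tool-calling format you support.",
    "options": options,
    "stream": False,
}
body = json.dumps(payload).encode("utf-8")

def send(body: bytes) -> request.Request:
    """Build the request; call urlopen on it when a server is running."""
    return request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
```

Inside the `ollama run` REPL, the same overrides can be applied interactively with `/set parameter temperature 1` and friends.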

Models:

4B

9B - for general use

9B-tooling - for coding and precise tooling

27B


Log: [3/21/2026] - I’m shook, there’s also a 27B model to work on and merge. [fun…]

[3/25/2026] - 27B model added; will wait for the 30B MoE model.


Model information and sources: 27B model · 9B model · 4B model