82 Downloads Updated 3 days ago
```shell
ollama run robit/qwen3.5-9b-r7-research-vision:q4km
ollama launch claude --model robit/qwen3.5-9b-r7-research-vision:q4km
ollama launch codex --model robit/qwen3.5-9b-r7-research-vision:q4km
ollama launch opencode --model robit/qwen3.5-9b-r7-research-vision:q4km
ollama launch openclaw --model robit/qwen3.5-9b-r7-research-vision:q4km
```
Fine-tuned Qwen3.5-9B with distilled reasoning and full vision support. 883 tensors (427 text + 441 vision + 15 MTP) — vision tower preserved byte-for-byte from base via llama-export-lora merge.
Features:

- `<think>` blocks
- `tool_calls` via Ollama `/api/chat`

| Benchmark | Score |
|---|---|
| Diverse stochastic eval (38 tests) | 86.8% |
| Vision probe (rendered text) | PASS (reads “42” from image) |
| Tool calling | PASS (structured tool_calls) |
| Thinking | PASS (produces thinking field) |
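Tool calling goes through the same `/api/chat` endpoint used for normal chat. A minimal sketch, assuming a running local Ollama server; the `get_weather` tool name and schema are hypothetical examples, not from this model card:

```shell
# Tool-calling request against Ollama's /api/chat.
# The get_weather tool and its schema are illustrative only.
PAYLOAD='{
  "model": "robit/qwen3.5-9b-r7-research-vision:q4km",
  "stream": false,
  "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}'
# Requires a running server on localhost:11434; when the parser engages,
# the reply message carries a structured tool_calls array instead of text.
curl -s http://localhost:11434/api/chat -d "$PAYLOAD" || true
```

Setting `"stream": false` returns the whole response as one JSON object, which is easier to inspect when checking for `tool_calls`.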
Merged with `llama-export-lora` to preserve vision: `convert_lora_to_gguf` -> `llama-export-lora` into the base Q4_K_M GGUF. All 441 vision tensors + 15 MTP tensors unchanged.

```shell
ollama run robit/qwen3.5-9b-r7-research-vision:q4km
```
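The merge pipeline can be sketched as follows; the adapter and model paths are placeholders, and both tools are assumed to come from a llama.cpp checkout:

```shell
# 1. Convert the trained LoRA adapter to GGUF (paths are placeholders).
python convert_lora_to_gguf.py ./lora-adapter --outfile lora.gguf

# 2. Merge the adapter into the base Q4_K_M GGUF; tensors the adapter
#    does not touch (vision tower, MTP) pass through unchanged.
llama-export-lora -m base-q4_k_m.gguf --lora lora.gguf -o merged-q4_k_m.gguf
```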
```shell
IMG64=$(base64 -w0 path/to/image.jpg)
curl -s http://localhost:11434/api/chat \
  -d '{"model":"robit/qwen3.5-9b-r7-research-vision:q4km","messages":[{"role":"user","content":"Describe this image.","images":["'"$IMG64"'"]}]}'
```
Configuration:

- `RENDERER qwen3.5` + `PARSER qwen3.5` (enables tool calling + vision)
- `num_ctx 262144` (max context)
- `temperature 0.6`, `top_p 0.95`
- `stop "<|im_end|>"`

Derived from Qwen3.5-9B (Apache 2.0). Training data licenses vary by source.
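These settings map onto a Modelfile along these lines; this is a sketch, not the card's actual Modelfile, the `FROM` path is a placeholder, and the `RENDERER`/`PARSER` directives assume a recent Ollama build:

```
FROM ./merged-q4_k_m.gguf
RENDERER qwen3.5
PARSER qwen3.5
PARAMETER num_ctx 262144
PARAMETER temperature 0.6
PARAMETER top_p 0.95
PARAMETER stop "<|im_end|>"
```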