*TEXT-ONLY* Unsloth Quantization of Qwen3.5:9B
504 Pulls 1 Tag Updated 1 month ago
qwen3.5:9b
151 Pulls 1 Tag Updated 1 month ago
Built from vaultbox/qwen3.5-uncensored:9b with Baburao Ganpatrao Apte (Babu Bhaiya) persona (just for fun)
208 Pulls 1 Tag Updated 3 days ago
2 Pulls 1 Tag Updated yesterday
Qwen 3.5 9B quantized by BatiAI. 12.5 t/s on 16GB Mac. Best for tool calling.
1,971 Pulls 3 Tags Updated 1 week ago
No content review, no restrictions - Local model based on qwen3.5 (for learning purposes only)
1,239 Pulls 1 Tag Updated 2 weeks ago
2nd gen OmniCoder, fine-tuned from Qwen3.5-9B. Trained on assistant tokens only (unlike v1): no more repetition loops, and stable tool calling in long agentic sessions. Original: https://hf.co/Tesslate/OmniCoder-2-9B
621 Pulls 1 Tag Updated 2 weeks ago
726 Pulls 1 Tag Updated 1 week ago
456 Pulls 1 Tag Updated 2 weeks ago
Fine-tuned Qwen3.5-9B with distilled reasoning and full vision support. 883 tensors — vision tower preserved byte-for-byte from base. R5 was the first vision-capable distilled model.
301 Pulls 1 Tag Updated 2 weeks ago
Fine-tuned Qwen3.5-9B with distilled reasoning and full vision support. 883 tensors (427 text + 441 vision + 15 MTP) — vision tower preserved byte-for-byte from base via llama-export-lora merge.
234 Pulls 1 Tag Updated 2 weeks ago
26.3.7. Qwen3.5-9B-Q4 Update: This model introduces higher-quality reasoning trajectories across domains such as science, instruction-following, and mathematics.
22.2K Pulls 1 Tag Updated 1 month ago
Fine-tuned Qwen3.5-9B with distilled reasoning from research-backed datasets. Trained via LoRA SFT with an additive data strategy that preserves base model capabilities while improving instruction following and reasoning.
65 Pulls 1 Tag Updated 2 weeks ago
9B coding agent based on Qwen3.5-9B, fine-tuned on 425K real agentic traces from Claude Opus 4.6, GPT-5.4, and Gemini 3.1. Reads before it writes, traces bugs to the root cause, doesn't clobber your existing code.
10K Pulls 3 Tags Updated 1 month ago
11.6K Pulls 1 Tag Updated 1 month ago
5,130 Pulls 1 Tag Updated 1 month ago
26.3.18. Ver.2 update: This iteration is trained on 14,000+ premium Claude 4.6 Opus-style general reasoning samples, focused on large gains in reasoning efficiency while also improving peak accuracy.
5,976 Pulls 1 Tag Updated 1 month ago
2nd gen OmniCoder, fine-tuned from Qwen3.5-9B. Trains on assistant tokens only (unlike v1): no more repetition loops, stable tool-calling in long agentic sessions. Original: https://hf.co/Tesslate/OmniCoder-2-9B
3,119 Pulls 2 Tags Updated 1 month ago
4,333 Pulls 1 Tag Updated 1 month ago
Fine-tuned Qwen3.5-9B with distilled reasoning from research-backed datasets. R5 was the first round to use production-quality data sources (Bespoke-Stratos, Tulu-3, SlimOrca) and achieved 84.2% on diverse eval — surpassing the base model.
20 Pulls 1 Tag Updated 2 weeks ago