Qwen3.6 delivers substantial upgrades in agentic coding and thinking preservation over previous Qwen models.
836K Pulls 22 Tags Updated 1 week ago
Qwen 3.5 is a family of open-source multimodal models that delivers exceptional utility and performance.
8.4M Pulls 58 Tags Updated 1 month ago
Qwen3-Coder-Next is a coding-focused language model from Alibaba's Qwen team, optimized for agentic coding workflows and local development.
1.2M Pulls 4 Tags Updated 2 months ago
The first installment in the Qwen3-Next series, with strong parameter efficiency and inference speed.
536.5K Pulls 10 Tags Updated 4 months ago
The most powerful vision-language model in the Qwen model family to date.
3.6M Pulls 59 Tags Updated 6 months ago
Alibaba's performant long context models for agentic and coding tasks.
5.2M Pulls 10 Tags Updated 7 months ago
Building upon the foundational models of the Qwen3 series, Qwen3 Embedding provides a comprehensive range of text embedding models in various sizes.
1.8M Pulls 12 Tags Updated 7 months ago
The flagship vision-language model of Qwen, and a significant leap from the previous Qwen2-VL.
1.9M Pulls 17 Tags Updated 11 months ago
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
28.3M Pulls 58 Tags Updated 6 months ago
Qwen2.5 models are pretrained on Alibaba's latest large-scale dataset, encompassing up to 18 trillion tokens. The models support context lengths of up to 128K tokens and offer multilingual support.
29.2M Pulls 133 Tags Updated 1 year ago
The latest series of Code-Specific Qwen models, with significant improvements in code generation, code reasoning, and code fixing.
15.1M Pulls 199 Tags Updated 11 months ago
Qwen2 is a new series of large language models from Alibaba Group.
5.8M Pulls 97 Tags Updated 1 year ago
Qwen 1.5 is a series of large language models by Alibaba Cloud, spanning 0.5B to 110B parameters.
6.6M Pulls 379 Tags Updated 2 years ago
Qwen2 Math is a series of specialized math language models built upon the Qwen2 LLMs, which significantly outperform the mathematical capabilities of open-source models and even some closed-source models (e.g., GPT-4o).
1M Pulls 52 Tags Updated 1 year ago
QwQ is the reasoning model of the Qwen series.
2.2M Pulls 8 Tags Updated 1 year ago
A fine-tuned version of DeepSeek-R1-Distill-Qwen-1.5B that surpasses the performance of OpenAI’s o1-preview on popular math evaluations with just 1.5B parameters.
1.2M Pulls 5 Tags Updated 1 year ago
CodeQwen1.5 is a large language model pretrained on a large amount of code data.
1M Pulls 30 Tags Updated 1 year ago
A new small reasoning model fine-tuned from the Qwen 2.5 3B Instruct model.
242.1K Pulls 5 Tags Updated 1 year ago
MutualistLLM is a local model for deconstructing reactionary narratives and capitalist ideology, offering grounded, coherent responses rooted in anarchist and socialist theory.
36 Pulls 1 Tag Updated 3 months ago
Second-generation OmniCoder, fine-tuned from Qwen3.5-9B. Unlike v1, it trains on assistant tokens only: no more repetition loops, and stable tool calling in long agentic sessions, along with a generally improved prompt.
205 Pulls 1 Tag Updated 4 days ago