Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
9.3M Pulls 58 Tags Updated yesterday
Alibaba's performant long-context models for agentic and coding tasks.
450.8K Pulls 10 Tags Updated 1 week ago
Building upon the foundational models of the Qwen3 series, Qwen3 Embedding provides a comprehensive range of text embedding models in various sizes.
15.3K Pulls 12 Tags Updated 1 week ago
A new small reasoning model fine-tuned from the Qwen 2.5 3B Instruct model.
82.5K Pulls 5 Tags Updated 9 months ago
Alibaba's text reranking model. Qwen3-Reranker-8B has the following features: Model Type: Text Reranking. Supported Languages: 100+ languages. Number of Parameters: 8B. Context Length: 32k.
188.8K Pulls 5 Tags Updated 3 months ago
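A reranker of this kind scores each (query, document) pair for relevance, and retrieval candidates are then re-sorted by that score. A minimal sketch of the pattern, using a stand-in scorer (token overlap) in place of the actual model:

```python
# Sketch of the rerank pattern: score every (query, document) pair,
# then sort candidates by descending score. The scorer below is a
# stand-in (token overlap); in practice the reranker model produces
# the relevance score.
def score(query: str, document: str) -> float:
    q_tokens = set(query.lower().split())
    d_tokens = set(document.lower().split())
    if not q_tokens:
        return 0.0
    return len(q_tokens & d_tokens) / len(q_tokens)

def rerank(query: str, documents: list[str]) -> list[tuple[str, float]]:
    scored = [(doc, score(query, doc)) for doc in documents]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

docs = [
    "The capital of France is Paris.",
    "Reranking reorders retrieved passages.",
    "Paris is known for the Eiffel Tower.",
]
ranked = rerank("what is the capital of France", docs)
```

The 32k context length matters here because the query and the candidate document are scored together in one input.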
Qwen3, but Josiefied and uncensored.
65.7K Pulls 47 Tags Updated 3 months ago
61.7K Pulls 74 Tags Updated 1 month ago
Alibaba's text embedding model. Qwen3-Embedding-0.6B has the following features: Model Type: Text Embedding. Supported Languages: 100+ languages. Number of Parameters: 0.6B. Context Length: 32k. Embedding Dimension: up to 1024, supports user-defined output...
25.9K Pulls 2 Tags Updated 3 months ago
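"Supports user-defined output" means the full embedding can be cut down to a smaller dimension and renormalized before computing cosine similarity. A sketch of that truncation step, using made-up vectors in place of real model output:

```python
import math

# Sketch: truncate a full embedding to a user-chosen dimension,
# renormalize it to unit length, then compare with cosine similarity.
# The vectors below are made up; a real embedding model produces them.
def truncate(embedding: list[float], dim: int) -> list[float]:
    head = embedding[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

def cosine(a: list[float], b: list[float]) -> float:
    # Both inputs are unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

full_a = [0.6, 0.8, 0.0, 0.0]
full_b = [0.8, 0.6, 0.0, 0.0]
sim = cosine(truncate(full_a, 2), truncate(full_b, 2))
```

Renormalizing after truncation keeps cosine scores comparable across output dimensions, at the cost of some accuracy relative to the full vector.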
This is the abliterated version of Qwen3, which is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
22.8K Pulls 6 Tags Updated 5 months ago
Alibaba's text embedding model. Qwen3-Embedding-8B has the following features: Model Type: Text Embedding. Supported Languages: 100+ languages. Number of Parameters: 8B. Context Length: 32k. Embedding Dimension: up to 4096, supports user-defined output...
22.2K Pulls 4 Tags Updated 3 months ago
Alibaba's text embedding model. Qwen3-Embedding-4B has the following features: Model Type: Text Embedding. Supported Languages: 100+ languages. Number of Parameters: 4B. Context Length: 32k. Embedding Dimension: up to 2560, supports user-defined output...
12.1K Pulls 4 Tags Updated 3 months ago
qwen3:4b_4bit non-thinking version: no compute wasted on thinking, it answers directly.
9,434 Pulls 4 Tags Updated 5 months ago
Qwen3-30B-A3B-Instruct-2507 has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 30.5B in total and 3.3B activated - Number of Parameters (Non-Embedding): 29.9B - Number of Layers
9,415 Pulls 1 Tag Updated 2 months ago
Alibaba's text reranking model. Qwen3-Reranker-0.6B has the following features: Model Type: Text Reranking. Supported Languages: 100+ languages. Number of Parameters: 0.6B. Context Length: 32k.
8,382 Pulls 2 Tags Updated 3 months ago
Quantized versions of Qwen3 models (4B, 8B, 14B, 32B, 30B-MoE) optimized for tool use in Cline / Roo Code and for solving complex problems.
7,810 Pulls 8 Tags Updated 5 months ago
Alibaba's text reranking model. Qwen3-Reranker-4B has the following features: Model Type: Text Reranking. Supported Languages: 100+ languages. Number of Parameters: 4B. Context Length: 32k...
4,801 Pulls 3 Tags Updated 3 months ago
3,235 Pulls 4 Tags Updated 5 months ago
Unsloth Dynamic 2.0 quants achieve a 1M-token context with superior accuracy and state-of-the-art quantization performance. Select UD-IQ3_XXS for 16 GB VRAM, UD-Q4_K_XL for 24 GB VRAM, or UD-Q5_K_XL/UD-Q6_K_XL for 32 GB VRAM.
3,044 Pulls 5 Tags Updated 1 month ago
DeepSeek-R1-0528-Qwen3-8B-IQ4_NL
2,556 Pulls 1 Tag Updated 4 months ago
Qwen3-Coder features the following key enhancements: significant performance, long-context capabilities, and agentic coding.
2,425 Pulls 9 Tags Updated 1 month ago