A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
3M Pulls 5 Tags Updated 11 months ago
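The "37B activated for each token" figure comes from top-k expert routing: the router selects a few experts per token and only those run, so the active parameter count is far below the total. Below is a minimal, hypothetical sketch of top-k gating in Python/NumPy; the expert count, k, and dimensions are illustrative assumptions, not DeepSeek-V3's actual configuration.

```python
# Minimal sketch of Mixture-of-Experts top-k routing (illustrative only;
# expert count, k, and dimensions are NOT DeepSeek-V3's real configuration).
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route a single token vector x to its top-k experts.

    x       : (d,) token hidden state
    gate_w  : (d, n_experts) router weights
    experts : list of callables, one per expert FFN
    """
    logits = x @ gate_w                      # router score per expert
    topk = np.argsort(logits)[-k:]           # indices of the k best experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()                 # softmax over the selected experts
    # Only the chosen experts run; all other expert parameters stay inactive,
    # which is why the "activated" parameter count is far below the total.
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

# Toy usage: 8 small random experts, route with k=2.
rng = np.random.default_rng(0)
d, n_experts = 16, 8
experts = [lambda v, W=rng.normal(size=(d, d)) * 0.01: v @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
out = moe_forward(rng.normal(size=d), gate_w, experts)
```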
DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode.
207K Pulls 8 Tags Updated 2 months ago
DeepSeek-V3.2, a model that harmonizes high computational efficiency with superior reasoning and agent performance.
4,665 Pulls 1 Tag Updated 1 week ago
7,126 Pulls 2 Tags Updated 11 months ago
2,835 Pulls 5 Tags Updated 8 months ago
(Unsloth Dynamic Quants) A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
2,019 Pulls 3 Tags Updated 10 months ago
Merged Unsloth's Dynamic Quantization
1,356 Pulls 1 Tag Updated 8 months ago
DeepSeek-V3-Pruned-Coder-411B is a pruned version of DeepSeek-V3, reduced from 256 experts to 160 experts. The pruned model is mainly used for code generation.
1,273 Pulls 5 Tags Updated 8 months ago
deepseek-v3-0324-Quants. Q2_K is the lowest-precision quant here; quantized = round((original - zero_point) / scale).
1,041 Pulls 1 Tag Updated 8 months ago
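The entry above quotes the basic affine quantization formula. Below is a minimal sketch of that quantize/dequantize round trip, assuming a simple per-tensor scale and zero point; it is not llama.cpp's actual K-quant block format (Q2_K and friends), just an illustration of the quoted formula.

```python
# Minimal sketch of affine quantization matching the formula quoted above:
#   quantized = round((original - zero_point) / scale)
# NOT llama.cpp's K-quant block format, just the basic idea at low bit width.
import numpy as np

def quantize(x, n_bits=2):
    qmin, qmax = 0, 2 ** n_bits - 1
    zero_point = x.min()
    scale = (x.max() - x.min()) / (qmax - qmin)
    if scale == 0:
        scale = 1.0                      # avoid division by zero on constant tensors
    q = np.clip(np.round((x - zero_point) / scale), qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Inverse mapping: original ≈ quantized * scale + zero_point
    return q.astype(np.float32) * scale + zero_point

weights = np.random.default_rng(0).normal(size=8).astype(np.float32)
q, s, z = quantize(weights, n_bits=2)
approx = dequantize(q, s, z)   # lossy: fewer bits -> larger reconstruction error
```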
902 Pulls 2 Tags Updated 10 months ago
This model is a distilled version of Qwen/Qwen3-30B-A3B-Instruct designed to inherit the reasoning and behavioral characteristics of its much larger teacher model, deepseek-ai/DeepSeek-V3.1.
858 Pulls 2 Tags Updated 3 months ago
DeepSeek-R1-0528 still uses the DeepSeek V3 Base model released in December 2024 as its foundation, but invests more compute in post-training, significantly improving the model's depth of thinking and reasoning ability. This 8B distilled version has outstanding coding performance!
829 Pulls 1 Tag Updated 6 months ago
This model has been developed based on DistilQwen2.5-DS3-0324-Series.
820 Pulls 7 Tags Updated 7 months ago
ollama run deepseek-v3
805 Pulls 1 Tag Updated 10 months ago
625 Pulls 1 Tag Updated 11 months ago
DeepSeek V3 from March 2025, merged from Unsloth's HF - 671B params - Q8_0/713 GB & Q4_K_M/404 GB
606 Pulls 4 Tags Updated 8 months ago
Dynamic quants from Unsloth, merged.
291 Pulls 1 Tag Updated 8 months ago
Latest DeepSeek_V3 model, Q4 quant.
229 Pulls 1 Tag Updated 8 months ago
Single-file version (Dynamic Quants) of a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
90 Pulls 4 Tags Updated 9 months ago
This is not the ablation version. DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode.
88 Pulls 3 Tags Updated 3 months ago