DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode.
76.7K Pulls 8 Tags Updated yesterday
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token (see the sketch after this entry).
2.3M Pulls 5 Tags Updated 8 months ago
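A minimal sketch of top-k expert routing, the mechanism behind "671B total parameters, 37B activated per token": every expert contributes to the total parameter count, but each token only runs through the few experts the router selects. Dimensions, expert count, and k below are illustrative, not DeepSeek-V3's actual configuration.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Toy top-k Mixture-of-Experts layer: all experts exist (total params),
    but each token is routed to only k of them (activated params)."""

    def __init__(self, dim=64, num_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.k = k

    def forward(self, x):  # x: (tokens, dim)
        scores = self.router(x).softmax(dim=-1)      # routing probabilities
        weights, idx = scores.topk(self.k, dim=-1)   # pick k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

moe = TopKMoE()
tokens = torch.randn(4, 64)
print(moe(tokens).shape)  # torch.Size([4, 64]) -- only 2 of 8 experts ran per token
```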
DeepSeek-V3 from Hugging Face: a powerful solution for handling complex requests and advanced coding tasks. Enhance your development workflow with state-of-the-art code assistance and intelligent problem-solving capabilities.
19K Pulls 1 Tag Updated 9 months ago
7,099 Pulls 2 Tags Updated 8 months ago
2,080 Pulls 5 Tags Updated 5 months ago
(Unsloth Dynamic Quants) A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
2,005 Pulls 3 Tags Updated 7 months ago
Merged Unsloth's Dynamic Quantization
1,322 Pulls 1 Tag Updated 6 months ago
DeepSeek-V3-Pruned-Coder-411B is a pruned version of DeepSeek-V3, reduced from 256 experts to 160 experts. The pruned model is mainly intended for code generation.
1,250 Pulls 5 Tags Updated 6 months ago
deepseek-v3-0324 quants. Q2_K is the lowest offered here. Quantization follows quantized = round((original - zero_point) / scale) (see the sketch after this entry).
975 Pulls 1 Tag Updated 6 months ago
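A minimal sketch of the affine quantization roundtrip stated in the entry above, with dequantization as the inverse mapping. The scale and zero_point values are illustrative only, not taken from any real GGUF block; lower-bit formats such as Q2_K trade accuracy for size by using fewer bits per weight.

```python
import numpy as np

# Roundtrip for the formula in the entry above:
#   quantized   = round((original - zero_point) / scale)
#   dequantized = quantized * scale + zero_point
# scale / zero_point values are illustrative, not from a real model file.
original = np.array([-0.31, 0.02, 0.47, 1.20], dtype=np.float32)
scale, zero_point = 0.05, 0.0

quantized = np.round((original - zero_point) / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale + zero_point

print(quantized)                       # [-6  0  9 24]
print(np.abs(original - dequantized))  # rounding error, bounded by scale / 2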
895 Pulls 2 Tags Updated 8 months ago
ollama run deepseek-v3
792 Pulls 1 Tag Updated 7 months ago
DeepSeek-R1-0528 still uses the DeepSeek V3 Base model released in December 2024 as its foundation, but invests more compute in post-training, significantly improving the model's depth of thinking and reasoning ability. This 8B distilled version also delivers outstanding coding performance!
752 Pulls 1 Tag Updated 4 months ago
This model has been developed based on DistilQwen2.5-DS3-0324-Series.
678 Pulls 7 Tags Updated 4 months ago
598 Pulls 1 Tag Updated 8 months ago
DeepSeek V3 from March 2025, merged from Unsloth's HF release. 671B params; Q8_0 at 713 GB and Q4_K_M at 404 GB.
586 Pulls 4 Tags Updated 6 months ago
Dynamic quants from Unsloth, merged.
290 Pulls 1 Tag Updated 6 months ago
Latest DeepSeek-V3 model, Q4 quantization.
224 Pulls 1 Tag Updated 5 months ago
This model is a distilled version of Qwen/Qwen3-30B-A3B-Instruct, designed to inherit the reasoning and behavioral characteristics of its much larger teacher model, deepseek-ai/DeepSeek-V3.1 (see the sketch after this entry).
215 Pulls 2 Tags Updated 2 weeks ago
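A minimal sketch of logit distillation, one common way a student "inherits" a teacher's behavior: the student is trained to match the teacher's softened output distribution via a KL-divergence loss. Vocabulary size, batch size, and temperature below are illustrative, and this is not the actual training recipe used for the model above.

```python
import torch
import torch.nn.functional as F

# Student matches the teacher's soft targets; sizes and temperature are illustrative.
vocab, batch, T = 1000, 4, 2.0
teacher_logits = torch.randn(batch, vocab)           # stand-in for teacher outputs
student_logits = torch.randn(batch, vocab, requires_grad=True)

loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),       # student's softened predictions
    F.softmax(teacher_logits / T, dim=-1),           # teacher's soft targets
    reduction="batchmean",
) * (T * T)                                          # standard temperature scaling

loss.backward()
print(loss.item())
```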
Single-file version with Dynamic Quants. A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
85 Pulls 4 Tags Updated 7 months ago
This is a distilled model trained on a recent Tieba dataset, using about 8k examples and chain-of-thought data from DeepSeek-V3.
47 Pulls 1 Tag Updated 6 months ago