DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as OpenAI's o3 and Gemini 2.5 Pro.
61.9M Pulls 35 Tags Updated 2 months ago
DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode.
45.8K Pulls 4 Tags Updated 2 weeks ago
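Since the entry above is defined by its two inference modes, here is a minimal sketch of toggling them through a local Ollama server's /api/chat endpoint. The `think` request field and the `deepseek-v3.1` model tag are assumptions based on how Ollama exposes thinking-capable models; check the model page for the exact tag.

```python
# Minimal sketch: toggling thinking mode against a local Ollama server.
# Assumes Ollama is running on the default port and that this model
# honors the "think" flag; "deepseek-v3.1" is an assumed tag.
import requests

def ask(prompt: str, think: bool) -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "deepseek-v3.1",
            "messages": [{"role": "user", "content": prompt}],
            "think": think,   # True = thinking mode, False = non-thinking mode
            "stream": False,
        },
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

print(ask("What is 17 * 24?", think=True))   # reasons before answering
print(ask("What is 17 * 24?", think=False))  # answers directly
```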
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
2.3M Pulls 5 Tags Updated 8 months ago
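To make the sparse-activation numbers in the entry above concrete, a quick back-of-the-envelope check in Python (figures taken directly from the description):

```python
# Back-of-the-envelope check of the MoE sparsity described above:
# 671B total parameters, 37B activated per token.
total_params = 671e9
active_params = 37e9

fraction_active = active_params / total_params
print(f"Active per token: {fraction_active:.1%}")  # ~5.5%

# Per-token compute scales with the active parameters, so a forward pass
# costs roughly what a ~37B dense model would, despite 671B total weights.
```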
DeepSeek Coder is a capable coding model trained on two trillion code and natural language tokens.
1.2M Pulls 102 Tags Updated 1 year ago
An open-source Mixture-of-Experts code language model that achieves performance comparable to GPT-4 Turbo in code-specific tasks.
1.1M Pulls 64 Tags Updated 1 year ago
An advanced language model crafted with 2 trillion bilingual tokens.
202.8K Pulls 64 Tags Updated 1 year ago
A strong, economical, and efficient Mixture-of-Experts language model.
185.8K Pulls 34 Tags Updated 1 year ago
An upgraded version of DeepSeek-V2 that integrates the general and coding abilities of both DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.
66.2K Pulls 7 Tags Updated 1 year ago
A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.
597K Pulls 15 Tags Updated 5 months ago
A fine-tuned version of DeepSeek-R1-Distill-Qwen-1.5B that surpasses the performance of OpenAI's o1-preview with just 1.5B parameters on popular math evaluations.
327.6K Pulls 5 Tags Updated 7 months ago
A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.
106.6K Pulls 9 Tags Updated 6 months ago
DeepSeek's first-generation reasoning models, with performance comparable to OpenAI o1.
564.1K Pulls 55 Tags Updated 3 months ago
This is a modified model that adds support for autonomous coding agents such as Cline.
554.6K Pulls 6 Tags Updated 6 months ago
Unsloth's DeepSeek-R1, merged and uploaded here: the full 671B model. MoE bits: 1.58-bit; type: UD-IQ1_S; disk size: 131GB; accuracy: fair. Details: MoE layers all at 1.56-bit, with down_proj in the MoE a mixture of 2.06-bit and 1.56-bit.
170.9K Pulls 2 Tags Updated 7 months ago
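The quoted 131GB disk size follows almost directly from the average bit width; a quick sanity check, which also covers the 1.73-bit and 2.51-bit variants listed further down (rough figures, since some tensors are kept at higher precision):

```python
# Rough disk-size estimate: 671B parameters at an average bit width.
# Real GGUF files differ slightly because some tensors stay at higher precision.
PARAMS = 671e9

for bits in (1.58, 1.73, 2.51):
    size_gb = PARAMS * bits / 8 / 1e9
    print(f"{bits:.2f}-bit: ~{size_gb:.0f} GB")
# 1.58-bit: ~133 GB (close to the quoted 131GB)
# 1.73-bit: ~145 GB
# 2.51-bit: ~211 GB
```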
Unsloth's DeepSeek-R1 1.58-bit, merged and uploaded here: the full 671B model, dynamically quantized to 1.58 bits.
101K Pulls 1 Tag Updated 7 months ago
A merged GGUF of Unsloth's DeepSeek-R1 671B 2.51-bit dynamic quant.
60.4K Pulls 1 Tag Updated 7 months ago
A merged GGUF of Unsloth's DeepSeek-R1 671B 1.73-bit dynamic quant.
26.7K Pulls 1 Tag Updated 7 months ago