DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro.
74.2M Pulls 35 Tags Updated 5 months ago
DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode.
198.9K Pulls 8 Tags Updated 2 months ago
DeepSeek Coder is a capable coding model trained on two trillion code and natural language tokens.
2.2M Pulls 102 Tags Updated 1 year ago
A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.
627K Pulls 15 Tags Updated 8 months ago
A version of the DeepSeek-R1 model post-trained by Perplexity to provide unbiased, accurate, and factual information.
152.2K Pulls 9 Tags Updated 9 months ago
An upgraded version of DeepSeek-V2 that integrates the general and coding abilities of both DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.
89K Pulls 7 Tags Updated 1 year ago
DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.
57.6K Pulls 3 Tags Updated 3 weeks ago
DeepSeek-V3.2 is a model that combines high computational efficiency with strong reasoning and agent performance.
2,881 Pulls 1 Tag Updated 5 days ago
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
2.9M Pulls 5 Tags Updated 11 months ago
5.4M Pulls 1 Tag Updated 10 months ago
DeepSeek's first-generation reasoning models, with performance comparable to OpenAI o1.
606.5K Pulls 55 Tags Updated 6 months ago
Unsloth's DeepSeek-R1, merged and uploaded here. This is the full 671B model. MoE bits: 1.58-bit; type: UD-IQ1_S; disk size: 131GB; accuracy: fair. Details: MoE layers all 1.56-bit, with down_proj in the MoE a mixture of 2.06/1.56-bit.
170.9K Pulls 2 Tags Updated 10 months ago
DeepSeek-R1-Distill models are fine-tuned from open-source models using samples generated by DeepSeek-R1. We slightly changed their configs and tokenizers. Please use our settings to run these models.
118.7K Pulls 2 Tags Updated 10 months ago
Unsloth's DeepSeek-R1 1.58-bit, merged and uploaded here. This is the full 671B model, dynamically quantized to 1.58 bits.
101.4K Pulls 1 Tag Updated 10 months ago
Merged GGUF of Unsloth's DeepSeek-R1 671B 2.51-bit dynamic quant.
60.5K Pulls 1 Tag Updated 10 months ago
28.9K Pulls 4 Tags Updated 7 months ago
Merged GGUF of Unsloth's DeepSeek-R1 671B 1.73-bit dynamic quant.
26.7K Pulls 1 Tag Updated 10 months ago
DeepSeek's first generation of reasoning models, with performance comparable to OpenAI o1, including six dense models distilled from DeepSeek-R1 based on Llama and Qwen, with tool calling support.
26.2K Pulls 26 Tags Updated 10 months ago
Merged GGUF of Unsloth's DeepSeek-R1 671B 2.22-bit dynamic quant.
5,867 Pulls 1 Tag Updated 10 months ago
5,659 Pulls 1 Tag Updated 10 months ago