DeepSeek-V4-Flash is a preview of the DeepSeek-V4 series, a Mixture-of-Experts model with 284B total parameters and 13B activated, built for efficient reasoning across a 1M-token context window.
44.3K Pulls 1 Tag Updated 1 week ago
DeepSeek-V4-Pro is a frontier Mixture-of-Experts model with a 1M-token context window and three reasoning modes.
35.2K Pulls 1 Tag Updated 1 week ago
DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.
436.1K Pulls 3 Tags Updated 5 months ago
DeepSeek-V3.2 is a model that harmonizes high computational efficiency with superior reasoning and agent performance.
156.7K Pulls 1 Tag Updated 4 months ago
DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode.
668.4K Pulls 8 Tags Updated 7 months ago
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models such as OpenAI o3 and Gemini 2.5 Pro.
84.6M Pulls 35 Tags Updated 10 months ago
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
3.8M Pulls 5 Tags Updated 1 year ago
DeepSeek Coder is a capable coding model trained on two trillion code and natural language tokens.
4.1M Pulls 102 Tags Updated 2 years ago
An open-source Mixture-of-Experts code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks.
2.5M Pulls 64 Tags Updated 1 year ago
A strong, economical, and efficient Mixture-of-Experts language model.
1.1M Pulls 34 Tags Updated 1 year ago
An advanced language model crafted with 2 trillion bilingual tokens.
1.1M Pulls 64 Tags Updated 2 years ago
An upgraded version of DeepSeek-V2 that integrates the general and coding abilities of both DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.
271.9K Pulls 7 Tags Updated 1 year ago
A fine-tuned version of DeepSeek-R1-Distill-Qwen-1.5B that surpasses the performance of OpenAI's o1-preview on popular math evaluations with just 1.5B parameters.
1.2M Pulls 5 Tags Updated 1 year ago
A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.
1.1M Pulls 15 Tags Updated 1 year ago
A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.
399.7K Pulls 9 Tags Updated 1 year ago
DeepSeek-R1-0528-Qwen3-8B is a distilled model that transfers the reasoning of DeepSeek-R1-0528 into the Qwen3-8B base model.
19 Pulls 1 Tag Updated yesterday