DeepSeek-V4-Flash is a preview of the DeepSeek-V4 series, a Mixture-of-Experts model with 284B total parameters and 13B activated, built for efficient reasoning across a 1M-token context window.
59K Pulls 1 Tag Updated 2 weeks ago
DeepSeek-V4-Pro is a frontier Mixture-of-Experts model with a 1M-token context window and three reasoning modes.
49.2K Pulls 1 Tag Updated 2 weeks ago
DeepSeek-V3.2, a model that harmonizes high computational efficiency with superior reasoning and agent performance.
735K Pulls 1 Tag Updated 4 months ago
DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.
441.2K Pulls 3 Tags Updated 5 months ago
DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode.
675.4K Pulls 8 Tags Updated 7 months ago
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as OpenAI o3 and Gemini 2.5 Pro.
85.1M Pulls 35 Tags Updated 10 months ago
DeepSeek Coder is a capable coding model trained on two trillion code and natural language tokens.
4.2M Pulls 102 Tags Updated 2 years ago
A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.
1.1M Pulls 15 Tags Updated 1 year ago
A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.
402K Pulls 9 Tags Updated 1 year ago
An upgraded version of DeepSeek-V2 that integrates the general and coding abilities of both DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.
273.8K Pulls 7 Tags Updated 1 year ago
A strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token.
3.8M Pulls 5 Tags Updated 1 year ago
DeepSeek-R1-0528-Qwen3-8B
96 Pulls 1 Tag Updated 1 week ago
NovaForge AI – DeepSeek Coder 6.7B Pro is a professional-grade coding AI built for production-level development.
2,094 Pulls 1 Tag Updated 4 months ago
Hugging Face link: https://huggingface.co/iradukunda-dev/law-finetuned-DeepSeek-R1-Distill-Qwen-7B
360 Pulls 1 Tag Updated 4 months ago
SmallCoder is a compact reasoning-focused coding model, fine-tuned from DeepSeek-R1 1.5B using a code dataset that includes step-by-step reasoning.
227 Pulls 1 Tag Updated 2 months ago
Based on DeepSeek-R1, since OpenCode verifies tool compatibility against the registry.
190 Pulls 1 Tag Updated 2 months ago
Senior Go & SpecKit engineering agent powered by DeepSeek-v3.1 671B, optimized for idiomatic development and deterministic BDD testing.
58 Pulls 1 Tag Updated 1 month ago
A new Mixture-of-Experts (MoE) model from DeepSeek, specializing in coding instructions (quantized IQ4_XS).
11.3K Pulls 3 Tags Updated 3 months ago
46 Pulls 2 Tags Updated 2 months ago
DeepSeek-R1-0528-Qwen3-8B-IQ4_NL
3,724 Pulls 1 Tag Updated 11 months ago