DeepSeek's first-generation reasoning models, with performance comparable to OpenAI-o1.
676.6K Pulls · 55 Tags · Updated 10 months ago
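For orientation, querying this first entry from a local Ollama install looks roughly like the minimal sketch below. It uses the official ollama Python client (pip install ollama) and assumes this entry is the deepseek-r1 model with its default tag already pulled; the tag name is an assumption, not something the listing states.

```python
# Minimal sketch, assuming a local Ollama server and that the listed
# reasoning model is the "deepseek-r1" tag (an assumption) already pulled.
import ollama

response = ollama.chat(
    model="deepseek-r1",  # assumed tag for the entry above
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.message.content)
```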
Many quantized GGUF versions of DeepSeek-R1 abliterated (uncensored), with tool support.
5,676 Pulls · 8 Tags · Updated 1 year ago
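Since that entry advertises tool support, a hedged sketch of tool calling through the same ollama client follows. The model tag is a placeholder for one of these abliterated GGUF builds, and the get_weather helper is hypothetical.

```python
# Hedged sketch of Ollama tool calling; the model tag is a placeholder and
# get_weather is a hypothetical helper, not part of any listed model.
import ollama

def get_weather(city: str) -> str:
    """Toy tool the model may choose to call."""
    return f"It is sunny in {city}."

response = ollama.chat(
    model="deepseek-r1",  # placeholder; substitute a tool-capable tag
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=[get_weather],  # the client derives a JSON schema from the hints
)
for call in response.message.tool_calls or []:
    # Dispatch each requested call back to the local Python function.
    print(get_weather(**call.function.arguments))
```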
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
3,674 Pulls · 5 Tags · Updated 1 year ago
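The headline numbers (37B of 671B parameters active per token) come from MoE routing: a router scores every expert, but only the top-k experts actually run for each token. The toy numpy sketch below illustrates that routing step; the sizes are made up and unrelated to the real model.

```python
import numpy as np

# Toy illustration of MoE routing: the router scores all experts per token,
# but only the top-k experts run, so the active parameter count per token
# is a fraction of the total. Sizes here are arbitrary toy values.
rng = np.random.default_rng(0)

n_experts, top_k, d_model = 8, 2, 16   # toy sizes, not the real model's
router = rng.standard_normal((d_model, n_experts))
experts = rng.standard_normal((n_experts, d_model, d_model))

def moe_forward(x):
    """Route a single token vector x through its top-k experts."""
    logits = x @ router                       # one score per expert
    chosen = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                  # softmax over chosen experts
    # Only the chosen experts' parameters are touched for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)               # (16,)
```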
The mradermacher build of the Llama-distilled DeepSeek-R1 with abliteration applied.
2,774 Pulls · 1 Tag · Updated 1 year ago
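The abliterated entries above refer to directional ablation, which removes an estimated "refusal direction" from the model's activations. The numpy sketch below illustrates only the projection step on synthetic data; real abliteration estimates the direction from contrasting prompt sets and edits weight matrices across layers.

```python
import numpy as np

# Toy sketch of abliteration's core step: estimate a "refusal direction" as
# the difference between mean hidden activations on refused vs. answered
# prompts, then project that direction out of a hidden state. All data
# here is synthetic; this is an illustration, not the actual procedure.
rng = np.random.default_rng(0)
d = 64                                        # toy hidden size

acts_refused = rng.standard_normal((100, d)) + 1.5   # synthetic activations
acts_answered = rng.standard_normal((100, d))

direction = acts_refused.mean(axis=0) - acts_answered.mean(axis=0)
direction /= np.linalg.norm(direction)        # unit refusal direction

def ablate(h):
    """Remove the refusal component from a hidden state h."""
    return h - np.dot(h, direction) * direction

h = rng.standard_normal(d)
print(np.dot(ablate(h), direction))           # ~0: component removed
```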