DeepSeek's first-generation reasoning models, with performance comparable to OpenAI-o1.
602.2K Pulls 55 Tags Updated 6 months ago
Multiple quantized GGUF versions of the abliterated (uncensored) DeepSeek-R1, with tool support.
4,536 Pulls 8 Tags Updated 10 months ago
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated per token.
2,798 Pulls 5 Tags Updated 8 months ago
mradermacher's build of the DeepSeek-R1 Llama distill, with abliteration applied.
2,418 Pulls 1 Tag Updated 10 months ago