DeepSeek's first-generation reasoning models, with performance comparable to OpenAI-o1.
577.1K Pulls · 55 Tags · Updated 4 months ago
Quantized GGUF versions of DeepSeek-R1, abliterated (uncensored), with tool support.
3,823 Pulls · 8 Tags · Updated 8 months ago
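A minimal sketch of exercising the advertised tool support through the Ollama Python client. The model tag below is a hypothetical placeholder (this listing does not give exact tags), and the example assumes the pulled model actually supports tool calling:

```python
# Sketch: tool calling against a locally pulled quantized GGUF model via Ollama.
# Assumes the `ollama` Python package is installed and the Ollama server is running.
# The model tag is a placeholder, not a real tag from this listing.
import ollama

def get_weather(city: str) -> str:
    """Toy tool the model may choose to call."""
    return f"Sunny in {city}"

response = ollama.chat(
    model="deepseek-r1-abliterated:q4_K_M",  # hypothetical tag
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=[get_weather],  # the client derives a tool schema from the function
)

# If the model emitted a tool call, execute it and print the result.
for call in (response.message.tool_calls or []):
    if call.function.name == "get_weather":
        print(get_weather(**call.function.arguments))
```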
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
2,425 Pulls · 5 Tags · Updated 6 months ago
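To make the 671B-total/37B-active split concrete, here is a toy sketch of top-k expert routing, the mechanism that lets an MoE model hold far more parameters than any single token touches. The sizes and k below are illustrative, not DeepSeek's actual configuration:

```python
# Toy top-k Mixture-of-Experts routing: all experts' weights exist,
# but each token only runs through the k experts its router selects,
# so active parameters per token << total parameters.
# Sizes are illustrative, not DeepSeek-V3/R1's real configuration.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, k = 64, 16, 2

router = rng.normal(size=(d_model, n_experts))            # routing weights
experts = rng.normal(size=(n_experts, d_model, d_model))  # one toy matrix per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router                # score each expert for this token
    top = np.argsort(logits)[-k:]      # keep the k highest-scoring experts
    gates = np.exp(logits[top])
    gates /= gates.sum()               # softmax over the selected experts only
    # Only the chosen experts' parameters are touched for this token.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)        # (64,)

total = n_experts * d_model * d_model
active = k * d_model * d_model
print(f"active fraction per token: {active / total:.2%}")  # 12.50% here; ~5.5% for 37B of 671B
```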
mradermacher's build of the Llama-distilled DeepSeek-R1 with abliteration applied.
2,369 Pulls · 1 Tag · Updated 8 months ago
181 Pulls · 3 Tags · Updated 6 months ago