DeepSeek's first-generation reasoning models, with performance comparable to OpenAI-o1.
608.9K Pulls · 55 Tags · Updated 6 months ago
Quantized GGUF versions of DeepSeek R1, abliterated (uncensored), with tool support.
4,613 Pulls · 8 Tags · Updated 10 months ago
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
2,835 Pulls · 5 Tags · Updated 8 months ago
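The entry above advertises 671B total parameters with only 37B active per token. The sketch below is a toy illustration of how top-k expert routing produces that kind of sparsity; the dimensions, expert count, and weights are arbitrary small values, not DeepSeek-V3's actual configuration.

```python
# Toy MoE routing sketch (illustrative only, not DeepSeek-V3's design):
# a router scores experts per token and only the top-k experts run,
# so most expert parameters stay idle for any given token.
import numpy as np

rng = np.random.default_rng(0)

n_experts, top_k, d_model = 8, 2, 16  # toy sizes, chosen arbitrarily
router = rng.normal(size=(d_model, n_experts))            # gating weights
experts = rng.normal(size=(n_experts, d_model, d_model))  # one (simplified) FFN matrix per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]                           # indices of the k highest-scoring experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the selected experts only
    # Only top_k of the n_experts weight matrices are touched for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)  # (16,)
# Active fraction here: top_k / n_experts = 25%.
# DeepSeek-V3's ratio is roughly 37B / 671B ≈ 5.5% of total parameters per token.
```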
mradermacher's build of the DeepSeek-R1 Llama distill with abliteration applied.
2,428 Pulls · 1 Tag · Updated 11 months ago
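Every model in this listing is pulled and served the same way through Ollama. The snippet below is a minimal sketch that queries Ollama's REST API, assuming a local server on the default port 11434 and that the model has already been pulled under the official deepseek-r1 tag from the first entry; any other tag from this listing can be substituted.

```python
# Minimal sketch: query a locally pulled model through Ollama's REST API.
# Assumes `ollama pull deepseek-r1` has already been run and the server
# is listening on the default port 11434. The tag is from the first
# entry above; swap in any other tag from this listing.
import json
import urllib.request

payload = {
    "model": "deepseek-r1",           # or a quantized/abliterated tag from above
    "prompt": "Why is the sky blue?",
    "stream": False,                  # return one JSON object instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```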