
(Unsloth Dynamic Quants) DeepSeek's first generation of reasoning models, with performance comparable to OpenAI o1. This is the full 671B MoE model, not one of the dense distilled models.


a2138b47f53d · 140GB

Architecture: deepseek2 · Parameters: 671B · Quantization: IQ1_S

Params (excerpt): { "stop": [ "<|begin▁of▁sentence|>", "<|end▁of▁sentence|>",

License (excerpt): MIT License Copyright (c) 2023 DeepSeek Permission is hereby granted, free of charge, to any person

Template (excerpt): {{- if .System }}{{ .System }}{{ end }} {{- range $i, $_ := .Messages }} {{- $last := eq (len (slice

Readme

Note: this model requires Ollama 0.5.5 or later.

https://unsloth.ai/blog/deepseekr1-dynamic

https://huggingface.co/unsloth/DeepSeek-R1-GGUF
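For local use, the GGUF weights from the Hugging Face repo above can be registered with Ollama via a Modelfile. A minimal sketch, assuming the split GGUF shards have been merged into a single file (the filename and model name are illustrative); the stop tokens are taken from this model's parameters:

```
# Modelfile (sketch) — the merged GGUF filename is an assumption
FROM ./DeepSeek-R1-UD-IQ1_S.gguf

# Stop tokens as listed in this model's parameters
PARAMETER stop "<|begin▁of▁sentence|>"
PARAMETER stop "<|end▁of▁sentence|>"
```

Then build and run with `ollama create deepseek-r1-iq1s -f Modelfile` followed by `ollama run deepseek-r1-iq1s` (the model name is illustrative).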

Quant computation was sponsored by Fujian Xiaowei Technology Co., Ltd. / 12301.cc