A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.
148 Pulls · Updated 4 weeks ago
154dbc8fc88c · 244GB

model · 244GB
arch deepseek2 · parameters 671B · quantization Q2_K
params · 160B
{
  "num_gpu": 1,
  "stop": [
    "<|begin▁of▁sentence|>",
    "<|end▁of▁s…
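These parameters can be overridden locally in an Ollama Modelfile. The sketch below is an assumption-laden example, not the model's actual Modelfile: the `FROM` tag is hypothetical, and the second stop token is assumed to be the end-of-sentence marker matching the truncated entry above.

```
# Hypothetical tag; substitute the tag you actually pulled
FROM r1-1776:671b-q2_k

# Values shown in the params layer above
PARAMETER num_gpu 1
PARAMETER stop "<|begin▁of▁sentence|>"
# Assumed full form of the truncated stop token
PARAMETER stop "<|end▁of▁sentence|>"
```

A custom model can then be created from this file with `ollama create`.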
template · 387B
{{- if .System }}{{ .System }}{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice…
license · 1.0kB
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy
of this so…
Readme
This model was converted from r1-1776 to GGUF; even GPUs with as little as 8 GB of memory can try it.
The Q2_K, Q3_K_M, Q4_K_M, and Q8_0 GGUF quantizations are all supported.
For specific usage instructions, please refer to the Reference link.
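As a rough sanity check of the numbers on this page, the 244 GB Q2_K file size and the 671B parameter count imply an effective bit width of about 2.9 bits per weight, which is consistent with a 2-bit-class quantization. This sketch assumes GB here means 10^9 bytes, which is how the page most likely reports sizes:

```python
# Estimate effective bits per weight from the listed file size and parameter count.
# Assumption: "244GB" means 244 * 10^9 bytes.
file_size_bytes = 244e9
num_parameters = 671e9

bits_per_weight = file_size_bytes * 8 / num_parameters
print(f"{bits_per_weight:.2f} bits/weight")  # ≈ 2.91
```

The same arithmetic applied to the other quantizations (Q4_K_M, Q8_0, etc.) gives a quick way to estimate whether a given file will fit in your GPU memory before pulling it.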