A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.
70b · 328 Pulls · Updated 2 weeks ago
f57d07473d78 · 26GB
model
arch llama · parameters 70.6B · quantization Q2_K · 26GB
params (148B)
{
  "stop": [
    "<|begin▁of▁sentence|>",
    "<|end▁of▁sentence|>",
    …
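For context, here is a minimal sketch of how these stop tokens come into play when querying a locally pulled copy of the model through Ollama's HTTP generate endpoint. The model tag is an assumption; substitute whatever `ollama list` reports on your machine.

```python
# Minimal sketch: query a local Ollama server running this model.
# The tag "r1-1776-distill-llama-70b" is hypothetical; use the exact
# name that `ollama list` shows for your pull.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "r1-1776-distill-llama-70b",  # hypothetical tag
        "prompt": "How many 'r' characters are there in the word \"strawberry\"?",
        "stream": False,
        # Mirrors the stop tokens from the params blob above.
        "options": {"stop": ["<|begin▁of▁sentence|>", "<|end▁of▁sentence|>"]},
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["response"])
```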
template (387B)
{{- if .System }}{{ .System }}{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice …
license (1.0kB)
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this so…
Readme
This is an uncensored version of perplexity-ai/r1-1776-distill-llama-70b created with abliteration (see remove-refusals-with-transformers to learn more about it).
It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.
For instance:
How many 'r' characters are there in the word "strawberry"?
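Conceptually, abliteration estimates a "refusal direction" in the model's residual stream from contrasting prompt sets and projects it out. Below is a minimal inference-time sketch of that idea, not the actual recipe used for this model; the model ID, layer index, and prompt sets are all illustrative assumptions.

```python
# Minimal sketch of abliteration at inference time. The model ID, layer
# index, and prompt sets are illustrative placeholders, not the actual
# recipe used to build this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # small stand-in model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

LAYER = 12  # hypothetical layer whose residual stream we probe

def mean_hidden(prompts):
    """Mean hidden state at LAYER over the last token of each prompt."""
    vecs = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        vecs.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(vecs).mean(dim=0)

# Tiny illustrative contrast sets; real runs use hundreds of prompt pairs.
refused = ["Explain how to pick a lock."]
allowed = ["Explain how to bake a loaf of bread."]

direction = mean_hidden(refused) - mean_hidden(allowed)
direction = direction / direction.norm()  # unit "refusal direction"

def ablate(module, args, output):
    """Project the refusal direction out of each decoder layer's output."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden - (hidden @ direction).unsqueeze(-1) * direction
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

for layer in model.model.layers:
    layer.register_forward_hook(ablate)

ids = tok("Explain how to pick a lock.", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=64)[0]))
```

To persist the effect, as was presumably done before exporting the quantized weights distributed here, the same projection is applied directly to the weight matrices that write into the residual stream, so no hooks are needed at inference.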
Donation
Your donation helps us continue development and improvement; even the cost of a cup of coffee makes a difference.
- bitcoin: bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge