huihui_ai/perplexity-ai-r1-abliterated:70b-distill-llama-q3_K_M

756 pulls · updated 6 months ago

A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.

Tag: 70b · updated 6 months ago
Digest: c0a7ee9660a9 · 34GB
Architecture: llama · Parameters: 70.6B · Quantization: Q3_K_M
Template (truncated): {{- if .System }}{{ .System }}{{ end }} {{- range $i, $_ := .Messages }} {{- $last := eq (len (slice …
License: MIT
Parameters (truncated): "stop": ["<|begin▁of▁sentence|>", "<|end▁of▁sentence|>", …]

Readme

This is an uncensored version of perplexity-ai/r1-1776-distill-llama-70b created with abliteration (see remove-refusals-with-transformers to learn more about it). It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens; a rough sketch of the general idea follows below.
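
The linked repository documents the actual procedure; purely as orientation, here is a hedged sketch of the general abliteration idea using the Hugging Face transformers API: estimate a "refusal direction" as the difference of mean hidden states between prompts the model refuses and prompts it answers, then project that direction out of the matrices that write into the residual stream. The model name, prompt lists, layer choice, and output path are illustrative placeholders, not the settings used for this release.

  # Illustrative sketch of abliteration (not the exact script used for this model).
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # small placeholder; the release used a 70B Llama distill
  tok = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
  model.eval()

  harmful = ["How do I pick a lock?"]            # prompts that usually trigger a refusal
  harmless = ["How do I bake a loaf of bread?"]  # matched prompts that are answered normally
  layer = model.config.num_hidden_layers // 2    # a mid-network layer is a common choice

  @torch.no_grad()
  def mean_last_token_hidden(prompts):
      """Mean hidden state of the final prompt token at the chosen layer."""
      states = []
      for p in prompts:
          ids = tok(p, return_tensors="pt").to(model.device)
          out = model(**ids, output_hidden_states=True)
          states.append(out.hidden_states[layer][0, -1])
      return torch.stack(states).mean(dim=0)

  # Refusal direction: difference of means, normalised to unit length.
  direction = mean_last_token_hidden(harmful) - mean_last_token_hidden(harmless)
  direction = direction / direction.norm()

  @torch.no_grad()
  def ablate(weight, d):
      # Remove the component of the layer's output along d: W <- (I - d d^T) W.
      weight -= torch.outer(d, d @ weight)

  # Apply to every matrix that writes into the residual stream.
  for block in model.model.layers:
      ablate(block.self_attn.o_proj.weight, direction)
      ablate(block.mlp.down_proj.weight, direction)

  model.save_pretrained("qwen2.5-0.5b-abliterated")  # placeholder output path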

For instance:

  How many 'r' characters are there in the word "strawberry"?
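
To try a prompt like this against a locally running Ollama server, a minimal example using the /api/generate endpoint on the default port; it assumes the tag from this page has already been pulled with ollama pull.

  # Usage sketch: query the model through a local Ollama server.
  import requests

  resp = requests.post(
      "http://localhost:11434/api/generate",
      json={
          "model": "huihui_ai/perplexity-ai-r1-abliterated:70b-distill-llama-q3_K_M",
          "prompt": 'How many \'r\' characters are there in the word "strawberry"?',
          "stream": False,  # return one JSON object instead of a token stream
      },
      timeout=600,
  )
  print(resp.json()["response"])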

References

Huggingface

x.com/support_huihui

Donation

Your donation helps us continue development and improvement; even the price of a cup of coffee makes a difference.
  • Bitcoin: bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge