model · 11e2b69c8559 · 7.7GB
arch llama · parameters 7.24B · quantization Q8_0
params · 48B
{
  "stop": [
    "USER",
    "ASSISTANT"
  ],
  "temperature": 0.6
}
template · 110B
{{ if .System }}{{ .System }} {{ end }}{{ if .Prompt }}USER: {{ .Prompt }} {{ end }}ASSISTANT: {{ .Response }}
system · 250B (shown truncated)
You are a highly capable AI assistant designed to help users to the best of your abilities. Your int…
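These packaged defaults can also be set per request through Ollama's REST API. Below is a minimal sketch in Python, assuming the model has been pulled locally; the model tag is a placeholder for whatever name it carries on your machine:

```python
import requests

# Query the locally served model, mirroring the packaged defaults
# above (temperature 0.6, USER/ASSISTANT stop strings).
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "wizardlm2-abliterated",  # placeholder tag; use your local name
        "prompt": "Summarize what weight orthogonalization changes in a model.",
        "options": {"temperature": 0.6, "stop": ["USER", "ASSISTANT"]},
        "stream": False,
    },
    timeout=300,
)
print(response.json()["response"])
```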
Readme
WizardLM-2-7B-abliterated
This is the WizardLM-2-7B model with orthogonalized bfloat16 safetensor weights, based on the implementation by @failspy. For more info:
- Alignment Forum post previewing the paper and presenting the methodology: https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction
- Jupyter notebook containing an implementation of the methodology, by @failspy: https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb
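To make the methodology concrete, here is a minimal sketch of the core orthogonalization step, assuming a precomputed unit-norm refusal direction (extracted, per the post above, by contrasting residual-stream activations on harmful vs. harmless prompts). This is an illustration, not the exact notebook code; in practice the projection is done in float32 before casting back to bfloat16:

```python
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Return W' = (I - r r^T) W: the weight matrix with its output
    component along `direction` projected out.

    weight:    (d_model, d_in) matrix that writes into the residual stream
               (e.g. an attention output or MLP down-projection).
    direction: refusal direction in the residual stream, shape (d_model,).
    """
    r = direction / direction.norm()
    return weight - torch.outer(r, r @ weight)

# Illustrative demo on random data (real use iterates over every layer's
# residual-stream-writing matrices, plus the token embedding).
d_model, d_in = 4096, 11008
W = torch.randn(d_model, d_in)
refusal_dir = torch.randn(d_model)

W_ablated = ablate_direction(W, refusal_dir)
r = refusal_dir / refusal_dir.norm()
print((r @ W_ablated).abs().max())  # ~0: outputs no longer move along r
```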
GGUF Files
GGUF files are available here: https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated-GGUF (see the import sketch below).
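As a sketch of running one of those files locally with Ollama, a Modelfile reproducing the parameters and template above would look roughly like the following. The GGUF filename is an assumption (use whichever quantization you downloaded), and the packaged system prompt is omitted because it is truncated in the listing above:

```
# Modelfile (FROM filename is an assumption; adjust to your download)
FROM ./WizardLM-2-7B-abliterated.Q8_0.gguf

TEMPLATE """{{ if .System }}{{ .System }} {{ end }}{{ if .Prompt }}USER: {{ .Prompt }} {{ end }}ASSISTANT: {{ .Response }}"""

PARAMETER stop "USER"
PARAMETER stop "ASSISTANT"
PARAMETER temperature 0.6
```

Build and run it with `ollama create wizardlm2-abliterated -f Modelfile`, then `ollama run wizardlm2-abliterated`.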