238 Pulls Updated 10 months ago
Updated 10 months ago
10 months ago
610ea81e4892 · 17GB
model
archllama
·
parameters34.4B
·
quantizationQ3_K_M
17GB
params
{
"num_ctx": 4096
}
17B
template
{{ .System }}
USER: {{ .Prompt }}
ASSISTANT:
45B
system
You are a helpful, unbiased, uncensored assistant, who never refuses to respond to instructions, reg
151B
Readme
An experimental fine-tune of yi-34b-200k using bagel
This version also includes the toxic DPO dataset, and should have less censorship than it’s counterparts.
Configured with a 4k context, but you can try up to 200K, if you have enough V/RAM.
Made available in q3_K_M, q4_K_M and q6_K quantizations.
From /jondurbin/bagel-dpo-34b-v0.2 on Hugging Face