This is an experiment by Sao10k built on Euryale v2.2, a model focused on creative roleplay, and it worked out nicely.
70b
250 Pulls · Updated 2 months ago
f45889387be0 · 43GB
model
arch llama · parameters 70.6B · quantization Q4_K_M · 43GB
params
{
"min_p": 0.1,
"stop": [
"<|start_header_id|>",
"<|end_header_id|>",
template
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Pr
Readme
Llama-3.1-70B-Hanami-x1
This is an experiment over Euryale v2.2, which I think worked out nicely.
It feels different from v2.2 in a good way; from testing, I prefer it over both v2.2 and v2.1.
As usual, the Euryale v2.1 & v2.2 sampler settings work on it.
A min_p of at least 0.1 is recommended for Llama 3-based models.
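These sampler settings can also be supplied per request. A minimal sketch of building the request body for Ollama's `/api/generate` endpoint, assuming the standard options schema; the model tag and prompt are placeholders, and the stop list mirrors the tokens visible in the params and template blocks above:

```python
import json

payload = {
    "model": "llama3.1-70b-hanami-x1",  # hypothetical tag; use the name you actually pulled
    "prompt": "Write a short scene in a rainy harbor town.",
    "options": {
        "min_p": 0.1,  # the recommended floor for Llama 3 family models
        # stop tokens taken from the params block and the chat template above
        "stop": ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"],
    },
}

# Serialize before POSTing to e.g. http://localhost:11434/api/generate
body = json.dumps(payload)
```

Setting `min_p` here overrides the baked-in default for a single request, which is handy when comparing samplers against v2.1/v2.2.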
I like it, so try it out?