Nous-Capybara-34B V1.9 in select quantizations
34b · 927 Pulls · Updated 12 months ago
model · 717defcb0d4b · 17GB
arch llama · parameters 34.4B · quantization Q3_K_M
template
USER: {{ .Prompt }}
ASSISTANT:
params
{
  "num_ctx": 10240,
  "stop": [
    "</s>"
  ]
}
Readme
Nous-Capybara-34B V1.9
This is a fine-tune of the Yi-34B model (200K context length), trained for 3 epochs on the Capybara dataset!
I’m providing q4_K_M and q3_K_M quantizations. The models are set up for a 10k context window. You can try a larger context with /set parameter num_ctx XXXXXXX at the >>> prompt in the CLI, or by creating a custom model, as sketched below.
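
A minimal sketch of the custom-model route, using Ollama's Modelfile format. The source tag (nous-capybara-34b:q3_K_M), the new model name, and the 32768 context size are assumptions; substitute whichever tag you actually pulled and a context your RAM/VRAM can hold.

# Modelfile — derive a new model with a larger context window
# hypothetical source tag; substitute the one you pulled
FROM nous-capybara-34b:q3_K_M
# raise the context window from the 10240 default set above
PARAMETER num_ctx 32768
# keep the stop token from the original params
PARAMETER stop "</s>"

Build and run it with:

ollama create capybara-32k -f Modelfile
ollama run capybara-32k

Note that raising num_ctx grows the KV cache, so memory use climbs quickly at 34B parameters; if Ollama runs out of memory, step the value back down.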