
Nous-Capybara-34B V1.9 in select quantizations

34b · 1 year ago · b977635d75a7 · 21GB

llama · 34.4B · Q4_K_M

Template:
USER: {{ .Prompt }} ASSISTANT:

Parameters:
{ "num_ctx": 10240, "stop": [ "</s>" ] }

Readme

Nous-Capybara-34B V1.9

This model is trained on Yi-34B (200K context length) for 3 epochs on the Capybara dataset.

I’m providing q4_K_M and q3_K_M quantizations. The models are set up for 10k context. You can try larger contexts by running /set parameter num_ctx XXXXXXX at the >>> prompt in the CLI, or by creating a custom model.
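For the custom-model route, a minimal Modelfile sketch might look like this (the model tag, custom name, and context size below are illustrative, not prescribed by this page):

```
# Modelfile — base image tag is an assumption; use the tag you pulled
FROM nous-capybara:34b

# Raise the context window from the default 10k
PARAMETER num_ctx 16384
```

Then build and run it with something like `ollama create capybara-16k -f Modelfile` followed by `ollama run capybara-16k`. Note that larger contexts increase memory use, so pick a value your hardware can hold.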

NousResearch/Nous-Capybara-34B on Hugging Face