Capybara is an un-aligned model for general use, leveraging Amplify-Instruct and novel quality-curation techniques, trained on a dataset of fewer than 20K examples.


Nous-Capybara-7B V1.9

https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9

This is currently the best 7B version of Capybara to use.
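A minimal usage sketch with Hugging Face transformers is below. The USER:/ASSISTANT: prompt format shown is an assumption based on common Capybara usage, not something stated on this page, so check the model card for the canonical template.

```python
# Minimal sketch: load Nous-Capybara-7B-V1.9 from the Hugging Face repo
# linked above and run a single prompt. The prompt template here is an
# assumption; verify it against the model card before relying on it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Capybara-7B-V1.9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "USER: Explain what a capybara is in one sentence.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```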

What’s new compared to V1? V1.9 leverages novel unalignment techniques that lead to more consistent and dynamic control, enhanced quality curation for the training data, and a significantly better foundation model (Mistral)!

The Capybara series is the first Nous collection of datasets and models made by fine-tuning mostly on data created in-house by Nous.

We leverage our novel data synthesis technique called Amplify-Instruct (paper coming soon). Its seed distribution and synthesis method combine top-performing existing data synthesis techniques and distributions used for SOTA models such as Airoboros, Evol-Instruct (WizardLM), Orca, Vicuna, Know_Logic, Lamini, FLASK, and others into one lean, holistically formed methodology for the dataset and model. The seed instructions used to start the synthesized conversations are largely drawn from high-quality datasets like Airoboros, Know_Logic, EverythingLM, and GPTeacher, along with entirely new seed instructions derived from posts on the website LessWrong, and are supplemented with certain in-house multi-turn datasets like Dove (a successor to Puffin).
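Since the Amplify-Instruct paper is not yet published, the exact procedure is not public. Purely as an illustration of the general idea of growing a seed instruction into a synthetic multi-turn conversation, a loop might look like the hypothetical sketch below, where `llm` stands in for any chat-capable model endpoint and every name is invented for this example.

```python
# Hypothetical sketch in the spirit of Amplify-Instruct: expand one seed
# instruction into a multi-turn conversation. `llm` is a placeholder for
# any chat-completion function; the real method, seed distributions, and
# quality filters are not public yet.
from typing import Callable, Dict, List

Message = Dict[str, str]

def amplify(seed_instruction: str,
            llm: Callable[[List[Message]], str],
            turns: int = 3) -> List[Message]:
    """Grow a seed instruction into a synthetic multi-turn conversation."""
    conversation: List[Message] = [{"role": "user", "content": seed_instruction}]
    for _ in range(turns):
        # Have the model answer the latest user message.
        conversation.append({"role": "assistant", "content": llm(conversation)})
        # Have the model play the user and pose a natural follow-up question.
        follow_up = llm(conversation + [{
            "role": "user",
            "content": "Write the next question a curious user would ask.",
        }])
        conversation.append({"role": "user", "content": follow_up})
    return conversation
```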

While the model performs well in its current state, the dataset used for fine-tuning is entirely contained within 20K training examples, roughly 10 times smaller than those of many similarly performing current models. This is significant for the scaling implications of our next generation of models, once we scale our novel synthesis methods to significantly more examples.