This model was trained with Self-Play Preference Optimization (SPPO) at iteration 3, using google/gemma-2-9b-it as the starting checkpoint.

9B


params · 1f0c17ce1cdb · 118B
{ "num_ctx": 4096, "num_predict": 4096, "repeat_penalty": 1, "stop": [ "<start_of_turn>", "<end_of_turn>" ] }