Mistral-ORPO is a fine-tuned version of mistralai/Mistral-7B-v0.1 using odds ratio preference optimization (ORPO). With ORPO, the model learns preferences directly, without a separate supervised fine-tuning (SFT) warm-up phase.
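
For reference, the ORPO objective (Hong et al., 2024) augments the standard SFT loss with an odds-ratio term over the chosen and rejected responses. Below is a minimal PyTorch sketch of that term; the tensor names and the weighting value `lam` are illustrative, not taken from this model's training configuration.

```python
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen, logp_rejected, nll_chosen, lam=0.1):
    """logp_*: mean per-token log-probabilities of the chosen / rejected
    responses under the policy; nll_chosen: the usual SFT loss on the
    chosen response. `lam` weights the odds-ratio term (illustrative)."""
    # log odds(y|x) = log p(y|x) - log(1 - p(y|x)), computed from log-probs
    log_odds = (logp_chosen - logp_rejected) - (
        torch.log1p(-torch.exp(logp_chosen))
        - torch.log1p(-torch.exp(logp_rejected))
    )
    # Odds-ratio term: -log sigmoid(log odds ratio of chosen vs. rejected)
    ratio_loss = -F.logsigmoid(log_odds)
    # Single-stage objective: SFT loss plus the preference term, no warm-up
    return nll_chosen + lam * ratio_loss.mean()
```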
Mistral-ORPO-Capybara-7k was fine-tuned for 2.5 hours on four A100 GPUs, exclusively on the 7k instances of argilla/distilabel-capybara-dpo-7k-binarized, Argilla's distilled, paired multi-turn conversation (Capybara) dataset.
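
A minimal inference sketch with 🤗 Transformers follows, assuming the checkpoint is published on the Hugging Face Hub as kaist-ai/mistral-orpo-capybara-7k (adjust the ID to wherever the weights actually live):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaist-ai/mistral-orpo-capybara-7k"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format a (multi-turn) conversation with the tokenizer's chat template
messages = [{"role": "user", "content": "Briefly explain what ORPO is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```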
Chat benchmark results (MT-Bench score out of 10; AlpacaEval 2.0 length-controlled win rate, %):

| Model Name | Size | Align | MT-Bench | AlpacaEval 2.0 (LC) |
|---|---|---|---|---|
| Mistral-ORPO-Capybara-7k | 7B | ORPO | 7.44 | 15.9 | 
| Mistral-ORPO-β | 7B | ORPO | 7.32 | 14.7 | 
| Zephyr β | 7B | DPO | 7.34 | 13.2 | 
| TULU-2-DPO | 13B | DPO | 7.00 | 11.6 | 
| Llama-2-Chat | 7B | RLHF | 6.27 | 5.4 | 
| Llama-2-Chat | 13B | RLHF | 6.65 | 8.4 | 

Instruction-following accuracy on IFEval (prompt- and instruction-level, strict and loose):

| Model Type | Prompt-Strict | Prompt-Loose | Inst-Strict | Inst-Loose |
|---|---|---|---|---|
| Mistral-ORPO-Capybara-7k | 0.5083 | 0.5083 | 0.5827 | 0.6127 | 
| Mistral-ORPO-α | 0.5009 | 0.5083 | 0.5995 | 0.6163 |
| Mistral-ORPO-β | 0.5287 | 0.5564 | 0.6355 | 0.6619 | 
