This model is good for writing stories.

UltraMerge-7B

This model is an experimental DPO fine-tune of automerger/YamShadow-7B on the following datasets:

  • mlabonne/truthy-dpo-v0.1
  • mlabonne/distilabel-intel-orca-dpo-pairs
  • mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
  • mlabonne/ultrafeedback-binarized-preferences-cleaned
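These are preference datasets: each record typically pairs a prompt with a chosen (preferred) and a rejected response, which is what DPO optimizes against. A minimal sketch of that record shape (field names follow the common `prompt`/`chosen`/`rejected` convention and are not verified against every dataset above):

```python
# Sketch of a DPO preference record, using the common field-name
# convention (prompt / chosen / rejected). The example content is
# illustrative, not taken from the datasets listed above.
record = {
    "prompt": "What is the capital of France?",
    "chosen": "The capital of France is Paris.",        # preferred answer
    "rejected": "The capital of France is Marseille.",  # dispreferred answer
}

def is_valid_dpo_record(r: dict) -> bool:
    """Check that a record has the three string fields DPO training expects."""
    return all(isinstance(r.get(k), str) for k in ("prompt", "chosen", "rejected"))

print(is_valid_dpo_record(record))  # True
```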

I'm not sure which chat template works best; probably Mistral-Instruct or ChatML.
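Since the right template is uncertain, both candidates can be tried. A sketch of the two prompt formats below follows the publicly documented ChatML and Mistral-Instruct conventions; verify against the tokenizer's own chat template (e.g. `tokenizer.apply_chat_template`) before relying on it:

```python
def format_chatml(messages):
    """Format messages in the ChatML convention: <|im_start|>role ... <|im_end|>."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

def format_mistral_instruct(messages):
    """Format messages in the Mistral-Instruct convention: [INST] ... [/INST].
    Note: this simple sketch has no dedicated system-role handling."""
    out = "<s>"
    for m in messages:
        if m["role"] == "user":
            out += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            out += f" {m['content']}</s>"
    return out

msgs = [{"role": "user", "content": "Write a short story about a lighthouse."}]
print(format_chatml(msgs))
print(format_mistral_instruct(msgs))
```

If generations degrade with one format (e.g. the model emits stray special tokens), switching to the other is the quickest diagnostic.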

Source: https://huggingface.co/mlabonne/UltraMerge-7B

https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
[Screenshot: Open LLM Leaderboard results, 2024-06-01]