8823e5a7da58 · 7.8GB · llama · 7.24B parameters · Q8_0 quantization
The packaged Modelfile ships a friendly-assistant system prompt and a Llama-3-style template with `<|start_header_id|>`, `<|end_header_id|>`, and `<|eot_id|>` as stop tokens.

Readme

This model is good for writing stories.

UltraMerge-7B

This model is an experimental DPO fine-tune of automerger/YamShadow-7B on the following datasets:

  • mlabonne/truthy-dpo-v0.1
  • mlabonne/distilabel-intel-orca-dpo-pairs
  • mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
  • mlabonne/ultrafeedback-binarized-preferences-cleaned
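All four datasets above are DPO preference datasets: each example pairs a prompt with a preferred ("chosen") and a dispreferred ("rejected") completion, and training pushes the model's likelihood toward the chosen answer. A minimal sketch of that record shape, assuming the common `prompt`/`chosen`/`rejected` column names (exact column names vary between the listed datasets):

```python
# Sketch of a single DPO preference example (hypothetical content,
# illustrating the record shape these datasets share).
example = {
    "prompt": "What is the capital of France?",
    "chosen": "The capital of France is Paris.",       # preferred completion
    "rejected": "I think it might be Lyon.",           # dispreferred completion
}

def is_valid_dpo_pair(ex):
    # A usable DPO record needs all three string fields present.
    return all(isinstance(ex.get(k), str) for k in ("prompt", "chosen", "rejected"))

print(is_valid_dpo_pair(example))
```

DPO then optimizes the model so that, relative to the reference model, the log-probability of `chosen` rises and that of `rejected` falls for the same prompt.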

I have no idea what the best chat template is. Probably Mistral-Instruct or ChatML.
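If you want to try the ChatML option, a minimal sketch of how a ChatML prompt is assembled by hand (the role names and `<|im_start|>`/`<|im_end|>` delimiters are the standard ChatML convention; whether this model actually responds best to it is untested, per the note above):

```python
def format_chatml(messages):
    """Build a ChatML prompt string from a list of {role, content} dicts."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|>
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Trailing open assistant turn cues the model to generate its reply
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short story opening."},
])
print(prompt)
```

For the Mistral-Instruct alternative, the equivalent change would be wrapping the user turn in `[INST] ... [/INST]` instead.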

Source: https://huggingface.co/mlabonne/UltraMerge-7B

Evaluation: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard (Open LLM Leaderboard screenshot, 2024-06-01)