Starling-LM-10.7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF)



This is Starling-LM-10.7B-beta, a depth-upscaled version of Nexusflow/Starling-LM-7B-beta.

This model is intended as a drop-in upgrade for the original 7-billion-parameter model.

We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model, Nexusflow/Starling-RM-34B, and the policy optimization method PPO, following Fine-Tuning Language Models from Human Preferences. Harnessing the ranking dataset berkeley-nest/Nectar, the upgraded reward model Starling-RM-34B, and the new reward-training and policy-tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 on MT-Bench with GPT-4 as a judge.

Important: the model output can be verbose in rare cases. Setting temperature = 0 reduces this behavior. The default temperature is 0.1.
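For reference, here is a minimal sketch of querying the model with a lowered temperature through the Ollama Python client (`pip install ollama`). The model tag used below is an assumption; substitute whatever tag you pulled this model under.

```python
# Minimal sketch: chat with the model at temperature 0 via the Ollama Python client.
# The tag "starling-lm-10.7b-beta" is an assumption -- use your local tag.
import ollama

response = ollama.chat(
    model="starling-lm-10.7b-beta",  # assumed tag for this model
    messages=[{"role": "user", "content": "Summarize RLAIF in two sentences."}],
    options={"temperature": 0},      # temperature = 0 curbs verbose output
)
print(response["message"]["content"])
```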

HuggingFace: https://huggingface.co/bartowski/Starling-LM-10.7B-beta-GGUF
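As a rough sketch, one of the GGUF quantizations from the repository above can be run locally with llama-cpp-python (`pip install llama-cpp-python`). The quantization filename and context size below are assumptions, not part of the release; pick any .gguf file from the repo.

```python
# Minimal sketch: load a downloaded GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Starling-LM-10.7B-beta-Q4_K_M.gguf",  # assumed local filename
    n_ctx=8192,                                       # context window; adjust as needed
)

# create_chat_completion applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is depth upscaling?"}],
    temperature=0,  # as recommended above to reduce verbose output
)
print(out["choices"][0]["message"]["content"])
```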