Phi-3-mini-4K-instruct with CPO-SimPO


  • Quantizations with i-matrix calibration (calibration_datav3.txt)
  • Safetensors converted to fp32

This repository contains the Phi-3-mini-4K-instruct model enhanced with the CPO-SimPO technique. CPO-SimPO combines Contrastive Preference Optimization (CPO) and Simple Preference Optimization (SimPO).

Introduction

Phi-3-mini-4K-instruct is a lightweight model optimized for instruction-following tasks. Fine-tuning it with CPO-SimPO has demonstrated notable improvements on key benchmarks, pushing the boundaries of AI preference learning.

What is CPO-SimPO?

CPO-SimPO is a novel technique that combines elements from CPO and SimPO:

  • Contrastive Preference Optimization (CPO): Adds a behavior cloning regularizer to ensure the model remains close to the preferred data distribution.
  • Simple Preference Optimization (SimPO): Incorporates length normalization and target reward margins to prevent the generation of long but low-quality sequences.
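The combined objective can be sketched as follows. This is a minimal illustration, not the repository's training code: the function name, default hyperparameters (`beta`, `gamma`, `bc_weight`), and the use of a length-normalized NLL for the behavior cloning term are assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F

def cpo_simpo_loss(chosen_logps, rejected_logps,
                   chosen_lens, rejected_lens,
                   beta=2.0, gamma=0.5, bc_weight=1.0):
    """Sketch of a CPO-SimPO objective.

    chosen_logps / rejected_logps: summed token log-probs of the
    preferred / dispreferred responses under the policy (per example).
    chosen_lens / rejected_lens: response lengths in tokens.
    """
    # SimPO: length-normalized log-likelihood as the implicit reward
    r_chosen = beta * chosen_logps / chosen_lens
    r_rejected = beta * rejected_logps / rejected_lens
    # Preference loss with a target reward margin gamma
    pref_loss = -F.logsigmoid(r_chosen - r_rejected - gamma)
    # CPO-style behavior cloning regularizer: NLL on the chosen
    # responses, keeping the policy close to the preferred data
    bc_loss = -chosen_logps / chosen_lens
    return (pref_loss + bc_weight * bc_loss).mean()
```

In this sketch, the margin term `gamma` forces the preferred response's normalized reward to exceed the dispreferred one by a fixed amount, while the `bc_weight`-scaled NLL term anchors the policy to the preferred data distribution.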

GitHub

CPO-SimPO

Model Performance

COMING SOON!

Key Improvements:

  • Enhanced Model Performance: Significant score improvements, particularly on GSM8K (up by 8.49 points) and TruthfulQA (up by 2.07 points).
  • Quality Control: Length normalization and target reward margins discourage long but low-quality generations.
  • Balanced Optimization: The behavior cloning (BC) regularizer maintains the integrity of learned preferences without deviating from the preferred data distribution.