Phi-3-mini-4K-instruct with CPO-SimPO
36 Pulls · Updated 4 months ago
cdbb9b2b7e66 · 2.4GB
model
arch phi3 · parameters 3.82B · quantization Q4_1
2.4GB
params
{
"stop": [
"<|end|>",
"<|user|>",
"<|assistant|>"
]
}
78B
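These stop strings end generation at Phi-3's chat-format delimiters. As a minimal sketch (the model tag below is a placeholder for whatever name this model is pulled under, and the endpoint assumes a default local Ollama server), the same stop list can also be passed per request through Ollama's REST API:

```python
import json
import urllib.request

# Placeholder tag; substitute the name you pulled this model under.
payload = {
    "model": "phi3-mini-4k-cpo-simpo",
    "prompt": "Summarize length normalization in one sentence.",
    "stream": False,
    "options": {"stop": ["<|end|>", "<|user|>", "<|assistant|>"]},
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```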
template
{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end
148B
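The template preview above is cut off. Assuming the standard Phi-3 chat format (optional system turn, user turn, then an assistant turn for the model to complete), it renders roughly as in this sketch:

```python
def render_phi3_prompt(prompt: str, system: str | None = None) -> str:
    """Rough Python equivalent of the Go template above.

    The trailing assistant turn is assumed from Phi-3's standard chat
    format, since the template preview is truncated.
    """
    parts = []
    if system:
        parts.append(f"<|system|>\n{system}<|end|>\n")
    parts.append(f"<|user|>\n{prompt}<|end|>\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

print(render_phi3_prompt("What does CPO-SimPO change?", system="You are a helpful assistant."))
```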
license
Microsoft.
Copyright (c) Microsoft Corporation.
MIT License
Permission is hereby granted, free of
1.1kB
Readme
Phi-3-mini-4K-instruct with CPO-SimPO
- Quantizations with i-matrix (calibration_datav3.txt)
- Safetensors converted to fp32
This repository contains the Phi-3-mini-4K-instruct model enhanced with the CPO-SimPO technique. CPO-SimPO combines Contrastive Preference Optimization (CPO) and Simple Preference Optimization (SimPO).
Introduction
Phi-3-mini-4K-instruct is a model optimized for instruction-following tasks. Applying CPO-SimPO to it has demonstrated notable improvements on key benchmarks, pushing the boundaries of AI preference learning.
What is CPO-SimPO?
CPO-SimPO is a novel technique that combines elements from CPO and SimPO (a rough sketch of the combined objective follows the list below):
- Contrastive Preference Optimization (CPO): Adds a behavior cloning regularizer to ensure the model remains close to the preferred data distribution.
- Simple Preference Optimization (SimPO): Incorporates length normalization and target reward margins to prevent the generation of long but low-quality sequences.
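For illustration, here is a minimal sketch of what such a combined objective can look like; the function name, hyperparameter values, and normalization details are illustrative assumptions, not the exact recipe used for this model. The SimPO preference term uses length-normalized log-probabilities with a target reward margin, and the CPO-style behavior-cloning term adds a negative log-likelihood penalty on the chosen response.

```python
import torch
import torch.nn.functional as F

def cpo_simpo_loss(chosen_logps, rejected_logps, chosen_lens, rejected_lens,
                   beta=2.0, gamma=1.0, lam=0.1):
    """Sketch of a CPO-SimPO-style objective.

    chosen_logps / rejected_logps: summed policy log-probs of the preferred
    and dispreferred responses; *_lens: their token counts.
    beta, gamma, lam are illustrative hyperparameters.
    """
    # SimPO: reference-free, length-normalized implicit rewards.
    r_chosen = beta * chosen_logps / chosen_lens
    r_rejected = beta * rejected_logps / rejected_lens

    # Preference term with target reward margin gamma.
    preference = -F.logsigmoid(r_chosen - r_rejected - gamma)

    # CPO-style behavior-cloning (NLL) regularizer on the chosen response;
    # implementations differ on whether this term is length-normalized.
    bc = -chosen_logps / chosen_lens

    return (preference + lam * bc).mean()

# Toy usage with dummy batch statistics.
loss = cpo_simpo_loss(
    chosen_logps=torch.tensor([-120.0, -90.0]),
    rejected_logps=torch.tensor([-150.0, -140.0]),
    chosen_lens=torch.tensor([60.0, 45.0]),
    rejected_lens=torch.tensor([70.0, 80.0]),
)
print(loss.item())
```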
GitHub
Model Performance
COMING SOON!
Key Improvements:
- Enhanced Model Performance: Significant score improvements, particularly in GSM8K (up by 8.49 points!) and TruthfulQA (up by 2.07 points).
- Quality Control: Improved generation of high-quality sequences through length normalization and reward margins.
- Balanced Optimization: The BC regularizer helps maintain the integrity of learned preferences without deviating from the preferred data distribution.