Phi-3-mini-4K-instruct with CPO-SimPO
35 Pulls · Updated 4 months ago
043cb9583fae · 2.1GB

model
- arch: phi3
- parameters: 3.82B
- quantization: Q3_K_L
- size: 2.1GB

params
- stop: ["<|end|>", "<|user|>", "<|assistant|>"]

template
- Phi-3 chat format with <|system|>, <|user|>, and <|assistant|> role tags and <|end|> turn terminators

license
- MIT License, Copyright (c) Microsoft Corporation
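As a quick usage sketch (assuming the model has already been pulled locally; `phi3-cpo-simpo` below is a hypothetical tag, substitute the tag shown on this page), the quantized build can be queried through the Ollama Python client:

```python
import ollama  # pip install ollama

# "phi3-cpo-simpo" is a hypothetical tag; replace it with this model's actual tag.
response = ollama.chat(
    model="phi3-cpo-simpo",
    messages=[{"role": "user", "content": "Summarize what CPO-SimPO changes in one sentence."}],
)
print(response["message"]["content"])
```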
Readme
- Quantizations with i-matrix (importance matrix) computed from calibration_datav3.txt
- Safetensors converted to fp32
This repository contains the Phi-3-mini-4K-instruct model enhanced with the CPO-SimPO technique. CPO-SimPO combines Contrastive Preference Optimization (CPO) and Simple Preference Optimization (SimPO).
Introduction
Phi-3-mini-4K-instruct is a model optimized for instruction-following tasks. Applying CPO-SimPO to it has produced notable improvements on key benchmarks, pushing the boundaries of AI preference learning.
What is CPO-SimPO?
CPO-SimPO is a novel technique that combines elements of CPO and SimPO (a simplified loss sketch follows the list below):
- Contrastive Preference Optimization (CPO): Adds a behavior cloning regularizer to ensure the model remains close to the preferred data distribution.
- Simple Preference Optimization (SimPO): Incorporates length normalization and target reward margins to prevent the generation of long but low-quality sequences.
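Read together, the combination amounts to a SimPO-style, length-normalized preference loss with a target reward margin, plus CPO's behavior-cloning (negative log-likelihood) term on the preferred responses. The sketch below is a simplified PyTorch illustration under that reading; the hyperparameter values (`beta`, `gamma`, `bc_lambda`) and the exact normalization of the NLL term are assumptions, not values taken from this model's training run.

```python
import torch
import torch.nn.functional as F

def cpo_simpo_loss(chosen_logps, rejected_logps,
                   chosen_lengths, rejected_lengths,
                   beta=2.0, gamma=0.5, bc_lambda=1.0):
    """Simplified CPO-SimPO objective (illustrative; hyperparameters are assumed).

    chosen_logps / rejected_logps: summed token log-probs of the preferred /
        dispreferred responses under the policy, shape (batch,).
    chosen_lengths / rejected_lengths: response lengths in tokens, shape (batch,).
    """
    # SimPO: length-normalized implicit rewards, no reference model needed.
    chosen_reward = beta * chosen_logps / chosen_lengths
    rejected_reward = beta * rejected_logps / rejected_lengths

    # SimPO: require the chosen reward to beat the rejected one by a margin gamma,
    # which discourages long but low-quality generations.
    preference_loss = -F.logsigmoid(chosen_reward - rejected_reward - gamma)

    # CPO: behavior-cloning (NLL) regularizer on the preferred responses keeps
    # the policy close to the preferred data distribution.
    bc_loss = -(chosen_logps / chosen_lengths)

    return (preference_loss + bc_lambda * bc_loss).mean()


# Example with dummy batch statistics (not real model outputs):
chosen_logps = torch.tensor([-42.0, -35.5])
rejected_logps = torch.tensor([-80.0, -60.2])
chosen_lengths = torch.tensor([30.0, 25.0])
rejected_lengths = torch.tensor([55.0, 40.0])
print(cpo_simpo_loss(chosen_logps, rejected_logps, chosen_lengths, rejected_lengths))
```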
GitHub
Model Performance
COMING SOON!
Key Improvements:
- Enhanced Model Performance: Significant score improvements, particularly in GSM8K (up by 8.49 points!) and TruthfulQA (up by 2.07 points).
- Quality Control: Improved generation of high-quality sequences through length normalization and reward margins.
- Balanced Optimization: The behavior cloning (BC) regularizer keeps the model close to the preferred data distribution while maintaining the integrity of the learned preferences.