23 Downloads · Updated 1 year ago
lhk-dpo:v1-q5_K_M
9.1GB · 32K context window · Text · 1 year ago
LHK_DPO_v1 was trained via Direct Preference Optimization (DPO) on top of https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B.
The original model is available at https://huggingface.co/HanNayeoniee/LHK_DPO_v1.
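Since the card only names the technique, here is a minimal sketch of the standard DPO objective for one preference pair: the loss is `-log sigmoid(beta * ((log-prob margin of the policy on the chosen response) - (margin on the rejected response)))`, each margin taken relative to a frozen reference model. The function name, the example log-probabilities, and the `beta=0.1` default are illustrative assumptions, not details of how LHK_DPO_v1 was actually trained.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss (illustrative sketch, not the authors' code).

    Inputs are summed token log-probabilities of the chosen/rejected
    responses under the trainable policy and the frozen reference model.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp      # log pi/pi_ref on chosen
    rejected_margin = policy_rejected_logp - ref_rejected_logp  # log pi/pi_ref on rejected
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(logits)); small when the policy prefers the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# Hypothetical log-probs: the policy favors the chosen response more than
# the reference does, so the loss falls below log(2) ~= 0.693.
print(dpo_loss(-10.0, -14.0, -11.0, -13.0))
```

When the policy and reference assign identical log-probabilities, the logits are zero and the loss equals log 2, which is the starting point of training when the policy is initialized from the reference.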