
Phi-3-medium

Not working yet.

GGUF conversion of Phi-3-medium, instruct 4K-context version: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct
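To load a GGUF conversion like this into Ollama, a minimal Modelfile can point at the local file. This is a sketch, not the Modelfile used for this upload: the GGUF filename and quantization suffix below are placeholders, and the chat template follows the Phi-3 instruct format documented in the upstream model card.

```
# Minimal Modelfile sketch — filename/quantization are placeholders
FROM ./phi-3-medium-4k-instruct-q4_k_m.gguf

# Phi-3 instruct chat format (per the upstream model card)
TEMPLATE """<|user|>
{{ .Prompt }}<|end|>
<|assistant|>
"""
PARAMETER stop "<|end|>"
PARAMETER num_ctx 4096
```

With a file like this, `ollama create phi3-medium -f Modelfile` builds the model and `ollama run phi3-medium` starts a session.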


Phi-3-Medium-4K-Instruct is a 14B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Medium version comes in two variants, 4K and 128K, which is the context length (in tokens) it can support.

The model underwent a post-training process incorporating both supervised fine-tuning and direct preference optimization for instruction following and safety. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Medium-4K-Instruct showed robust, state-of-the-art performance among models of the same size and the next size up.

Resources and Technical Documentation:

- Phi-3 Microsoft Blog
- Phi-3 Technical Report
- Phi-3 on Azure AI Studio
- Phi-3 Cookbook