Phi-3 128K version of Q5


Converted from PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed, using Q5_K_8_4 quantization.

Its multilingual capabilities are noticeably better than those of the Q4 quantized version.