12.6K Downloads Updated 1 year ago
1 model

| Name | Size | Context | Input |
|------|------|---------|-------|
| phi3-128k:latest | 2.7GB | 128K | Text |
Converted from PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed, using Q5_K_8_4 quantization.
Its multilingual capabilities are clearly better than those of the Q4-quantized version.
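A minimal sketch of reusing this tag as a base in an Ollama Modelfile (the parameter value and system prompt are illustrative assumptions, not part of this listing):

```
# Build on the quantized Phi-3 tag published above
FROM phi3-128k:latest

# Assumed example: request the full 128K context window
PARAMETER num_ctx 131072

# Hypothetical system prompt for illustration only
SYSTEM You are a concise multilingual assistant.
```

With the Modelfile saved locally, `ollama create my-phi3 -f Modelfile` builds the derived model and `ollama run my-phi3` starts an interactive session; `ollama run phi3-128k:latest` also works directly without a Modelfile.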