Tool-enabled Q2_K quantization of Phi4:14b

This is my first quantization and template for Phi4 on Ollama. Function calling seems to work, with low VRAM usage.

I recommend running it with these environment variables set: OLLAMA_FLASH_ATTENTION=true OLLAMA_KV_CACHE_TYPE=q8_0
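A minimal sketch of how to apply those variables before starting the Ollama server (the model tag below is a placeholder; substitute the tag this quantization is published under):

```shell
# Enable flash attention and quantize the KV cache to q8_0
# to reduce VRAM usage (these are standard Ollama server settings).
export OLLAMA_FLASH_ATTENTION=true
export OLLAMA_KV_CACHE_TYPE=q8_0

# Start the server with those settings, then run the model.
# "your-user/phi4-q2k-tools" is a hypothetical tag — use the real one.
ollama serve &
ollama run your-user/phi4-q2k-tools
```

The variables must be set in the environment of the `ollama serve` process, not the client, for them to take effect.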

It works reasonably well on my old laptop with 4 GB of VRAM :)