
Gemma 3 12B quantized to Q2_K and Q3_K_S for GPUs with 8 GB of VRAM or less. The vision module is fully working.

