```shell
ollama run batiai/gemma4-e2b:q4
ollama launch claude --model batiai/gemma4-e2b:q4
ollama launch codex --model batiai/gemma4-e2b:q4
ollama launch opencode --model batiai/gemma4-e2b:q4
ollama launch openclaw --model batiai/gemma4-e2b:q4
```
Quantized from official Google weights. Verified on real Mac hardware.
| Tag | Size | VRAM | 16GB Mac mini M4 | M4 Max (128GB) | Use Case |
|---|---|---|---|---|---|
| q4 (latest) | 3.2GB | 7.1GB | 107.8 t/s ✅ | 132.5 t/s | Recommended for 16GB Macs |
| q6 | 3.6GB | 7.5GB | 45.5 t/s ✅ | 117.5 t/s | Higher quality, fits 16GB |
```shell
ollama run batiai/gemma4-e2b
```
| Model | Size | VRAM | 16GB Mac mini M4 |
|---|---|---|---|
| batiai/gemma4-e2b:q4 | 3.2GB | 7.1GB | 107.8 t/s |
| batiai/gemma4-e4b:q4 | 5.0GB | 10GB | 57.1 t/s |
| batiai/qwen3.5-9b:q4 | 5.6GB | — | 12.5 t/s |
Gemma 4 E2B is the smallest and fastest model we ship, making it ideal for quick responses and low memory usage. For better tool-calling accuracy, use E4B.
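Tool calling can be exercised through Ollama's local `/api/chat` endpoint, which accepts OpenAI-style tool schemas. A minimal sketch of building such a request in Python; the `get_weather` tool is a hypothetical example, and sending it assumes a local Ollama server on the default port with the model pulled:

```python
import json

# Hypothetical tool schema in the OpenAI-style format accepted by
# Ollama's /api/chat endpoint; swap in your own function definitions.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "batiai/gemma4-e4b:q4",  # E4B: better tool-calling accuracy
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    "tools": tools,
    "stream": False,
}

body = json.dumps(payload)
# To run it: POST `body` to http://localhost:11434/api/chat
# (requires a running Ollama server). A tool call, if the model emits one,
# appears under message.tool_calls in the JSON response.
```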
Free, on-device AI automation for Mac. 5MB app, 100% local, unlimited.