Run the model directly, or launch it behind a supported coding agent:

```shell
ollama run batiai/gemma4-e4b:q4
ollama launch claude --model batiai/gemma4-e4b:q4
ollama launch codex --model batiai/gemma4-e4b:q4
ollama launch opencode --model batiai/gemma4-e4b:q4
ollama launch openclaw --model batiai/gemma4-e4b:q4
```
Quantized from the official Google weights and benchmarked on real Mac hardware.
| Tag | Size | VRAM | Mac mini M4 (16GB) | M4 Max (128GB) | Use case |
|---|---|---|---|---|---|
| q4 (latest) | 5.0GB | 10GB | 57.1 t/s ✅ | 84.0 t/s | Recommended for 16GB Macs |
| q6 | 5.8GB | 11GB | 45.0 t/s ✅ | 77.4 t/s | Higher quality on 16GB Macs |
The default tag points to q4:

```shell
ollama run batiai/gemma4-e4b
```
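The throughput figures above are tokens per second. With a local Ollama server they can be reproduced from the `eval_count` (tokens generated) and `eval_duration` (nanoseconds) fields that Ollama's `/api/generate` endpoint returns — a minimal sketch with hypothetical sample numbers:

```python
# Ollama's /api/generate response includes eval_count (tokens generated)
# and eval_duration (time spent generating, in nanoseconds).
# Throughput in tokens/s is simply their ratio.
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama timing fields into a tokens/s figure."""
    return eval_count / (eval_duration_ns / 1e9)

# Hypothetical sample: 571 tokens generated in 10 s of eval time.
print(round(tokens_per_second(571, 10_000_000_000), 1))  # -> 57.1
```

Run a prompt with `"stream": false`, read the two fields from the JSON response, and apply this formula to reproduce the table on your own hardware.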
| Model | Size | VRAM | Mac mini M4 (16GB) | Tool calling |
|---|---|---|---|---|
| batiai/gemma4-e2b:q4 | 3.2GB | 7.1GB | 107.8 t/s | ⚠️ |
| batiai/gemma4-e4b:q4 | 5.0GB | 10GB | 57.1 t/s | ✅ |
| batiai/qwen3.5-9b:q4 | 5.6GB | — | 12.5 t/s | ✅ |
| gemma4:e4b (official) | 9.6GB | — | 27.7 t/s | ✅ |
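"Tool calling" refers to Ollama's `tools` parameter on the `/api/chat` endpoint. A minimal sketch of such a request payload, using a hypothetical `get_weather` tool (the tool name and schema are illustrative, not part of this model):

```python
import json

# Hypothetical tool definition in Ollama's /api/chat "tools" format.
payload = {
    "model": "batiai/gemma4-e4b:q4",
    "messages": [{"role": "user", "content": "What's the weather in Istanbul?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative example tool
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": False,
}

# POST this JSON to http://localhost:11434/api/chat; models marked
# with a checkmark above should answer with a tool_calls entry
# rather than plain text.
print(json.dumps(payload, indent=2))
```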
At roughly half the size of the official gemma4:e4b, BatiAI E4B Q4 runs about 2x faster on a 16GB Mac mini M4 while retaining tool-calling support.
Free, on-device AI automation for Mac. 5MB app, 100% local, unlimited.