```shell
ollama run batiai/minimax-m2.7:q4
```
Quantized from official MiniMax weights. Verified on real Mac hardware.
| Tag | Size | VRAM | M4 Max (128GB) | Use Case |
|---|---|---|---|---|
| iq3 | 82GB | 104GB | 36.7 t/s | 128GB Mac |
```shell
ollama run batiai/minimax-m2.7:iq3
```
| Metric | IQ3_XXS |
|---|---|
| Token gen (short) | 22.1 t/s |
| Token gen (long) | 36.7 t/s |
| Prompt eval | 14.8 t/s |
| VRAM | 104 GB (97% GPU) |
| Cold start | 42 seconds |
| Korean output | ✅ |
| Tool call JSON | ✅ |
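As a rough sanity check, the long-context generation rate above converts to wall-clock time like this (a back-of-the-envelope sketch; the 2000-token answer length is illustrative):

```shell
# Time to generate a 2000-token answer at the benchmarked 36.7 t/s.
# POSIX shell arithmetic is integer-only, so the rate is scaled by 10 (36.7 -> 367).
tokens=2000
rate_x10=367
secs=$(( tokens * 10 / rate_x10 ))
echo "~${secs}s to generate ${tokens} tokens"   # ~54s
```

The same arithmetic against the 14.8 t/s prompt-eval rate explains why long prompts dominate latency on this quant.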
| Your Mac RAM | Runs IQ3_XXS (82GB)? |
|---|---|
| 16GB | ❌ |
| 32GB | ❌ |
| 48GB | ❌ |
| 64GB | ❌ |
| 96GB | ⚠️ Heavy swap |
| 128GB | ✅ 36.7 t/s (104GB VRAM) |
| 192GB+ | ✅ Fast, with headroom |
This model requires 128GB+ of unified memory. There is no workaround: a 229B dense model needs real RAM.
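Before pulling 82GB, you can check whether a machine clears the bar. A minimal sketch assuming macOS's `sysctl hw.memsize` (it falls back to 0 on other platforms):

```shell
# Check unified memory before pulling the 82GB iq3 quant (~104GB resident).
# hw.memsize is macOS-specific; on non-macOS systems this falls back to 0.
mem_bytes=$(sysctl -n hw.memsize 2>/dev/null || echo 0)
mem_gb=$(( mem_bytes / 1073741824 ))
if [ "$mem_gb" -ge 128 ]; then
  echo "${mem_gb}GB: enough for iq3"
else
  echo "${mem_gb}GB: pick a smaller model instead"
fi
```

On a 128GB M4 Max this prints the "enough for iq3" branch; anything smaller should fall through to the alternatives table.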
| Your Mac | Recommended Model |
|---|---|
| 16GB | batiai/gemma4-e4b:q4 (57 t/s) |
| 24GB | batiai/gemma4-26b:iq4 (85 t/s) |
| 48GB | batiai/gemma4-26b:iq4 or batiai/qwen3.5-35b:iq4 |
| 128GB | batiai/minimax-m2.7:iq3 (this model) |
Free, on-device AI automation for Mac. 5MB app, 100% local, unlimited.