Powered by xAI
This is a quantized version of the Grok 2 model, provided in GGUF format for compatibility with llama.cpp and Ollama.
The quantized GGUF weights were merged from the sharded release provided by Unsloth using the official llama.cpp utility:

llama-gguf-split --merge

This produces a single GGUF file suitable for use with Ollama.
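For reference, a typical invocation is sketched below. The shard and output filenames are illustrative placeholders; substitute the actual names from the Unsloth release:

llama-gguf-split --merge grok-2-Q4_K_M-00001-of-00009.gguf grok-2-Q4_K_M.gguf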
After installing Ollama, you can run the model locally with:
ollama run MichelRosselli/grok-2
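Alternatively, if you have merged the GGUF file yourself, you can import it into Ollama with a minimal Modelfile. The file path and model name below are placeholders:

FROM ./grok-2-Q4_K_M.gguf

Then create and run the local model:

ollama create grok-2-local -f Modelfile
ollama run grok-2-local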
The weights are licensed under the Grok 2 Community License Agreement.
This product includes materials licensed under the xAI Community License. Copyright © 2025 xAI. All rights reserved.