364 Downloads · Updated 1 month ago

8 models

| Name | Size | Context window | Input | Updated |
|------|------|----------------|-------|---------|
| grok-2:latest | 164GB | 128K | Text | 1 month ago |
| grok-2:Q2_K | 100GB | 128K | Text | 1 month ago |
| grok-2:Q3_K_M | 130GB | 128K | Text | 1 month ago |
| grok-2:Q4_K_M | 164GB | 128K | Text | 1 month ago |
| grok-2:Q5_K_M | 192GB | 128K | Text | 1 month ago |
| grok-2:Q6_K | 221GB | 128K | Text | 1 month ago |
| grok-2:IQ1_M | 95GB | 128K | Text | 1 month ago |
| grok-2:TQ1_0 | 82GB | 128K | Text | 1 month ago |
Powered by xAI
This is a quantized version of the Grok 2 model, provided in GGUF format for compatibility with llama.cpp and Ollama.

The quantized GGUF weights were merged from the sharded release provided by Unsloth using the official llama.cpp utility `llama-gguf-split --merge`, producing a single GGUF file suitable for use with Ollama.
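As a sketch, the merge step looks like the following; the shard filenames here are hypothetical placeholders, so substitute the first shard of the actual downloaded release:

```shell
# Merge sharded GGUF files into a single file using the llama.cpp utility.
# Point it at the first shard; it locates the remaining shards automatically.
# Filenames below are illustrative, not the actual release names.
llama-gguf-split --merge grok-2-Q4_K_M-00001-of-00004.gguf grok-2-Q4_K_M.gguf
```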
After installing Ollama, you can run the model locally with:
`ollama run MichelRosselli/grok-2`
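To use a specific quantization instead of the default tag, append one of the tag names listed above. This assumes your machine has enough memory for the chosen file size:

```shell
# Pull and run the smallest K-quant variant explicitly.
ollama pull MichelRosselli/grok-2:Q2_K
ollama run MichelRosselli/grok-2:Q2_K
```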
The weights are licensed under the Grok 2 Community License Agreement.
This product includes materials licensed under the xAI Community License. Copyright © 2025 xAI. All rights reserved.