
Grok 2 (Quantized GGUF)

Powered by xAI


Model Details

  • Original Model: grok-2
  • Created by: xAI
  • Release year: 2024
  • Quantization: Q6_K (single GGUF file, approx. 221 GB)

This is a quantized version of the Grok 2 model, provided in GGUF format for compatibility with llama.cpp and Ollama.
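
For a quick sanity check directly with llama.cpp, a run along these lines should work (the GGUF file name and prompt are illustrative):

llama-cli -m grok-2-Q6_K.gguf -p "Hello, Grok" -n 64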


Sources

The original Grok 2 weights were released by xAI; the sharded GGUF conversion used for this packaging was provided by Unsloth.

Packaging

The quantized GGUF weights were merged from the sharded release provided by Unsloth using the official llama.cpp llama-gguf-split utility (shard and output file names below are illustrative):

llama-gguf-split --merge grok-2-Q6_K-00001-of-00009.gguf grok-2-Q6_K.gguf

This produces a single GGUF file suitable for use with Ollama.
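
If you want to import a merged GGUF into Ollama yourself rather than pulling this published model, a minimal Modelfile sketch would be (file and model names are illustrative; the stop token matches the packaged parameters, and a chat template would still need to be added for chat-style prompting):

FROM ./grok-2-Q6_K.gguf
PARAMETER stop "<|separator|>"

It can then be registered and run with:

ollama create grok-2-local -f Modelfile
ollama run grok-2-local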


Usage

After installing Ollama, you can run the model locally with:

ollama run MichelRosselli/grok-2
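
Once pulled, the model can also be queried through Ollama's local HTTP API; a minimal request (prompt text is illustrative) is:

curl http://localhost:11434/api/generate -d '{
  "model": "MichelRosselli/grok-2",
  "prompt": "Explain GGUF quantization in one sentence."
}'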

License

The weights are licensed under the Grok 2 Community License Agreement.

This product includes materials licensed under the xAI Community License. Copyright © 2025 xAI. All rights reserved.