glogwa68: https://huggingface.co/glogwa68/granite-4.0-h-350m-DISTILL-gemini-think

Run with Ollama:

ollama run jewelzufo/granite4-350m-h-Distill-Gemini-Thinking

Applications

  • Claude Code: ollama launch claude --model jewelzufo/granite4-350m-h-Distill-Gemini-Thinking
  • Codex: ollama launch codex --model jewelzufo/granite4-350m-h-Distill-Gemini-Thinking
  • OpenCode: ollama launch opencode --model jewelzufo/granite4-350m-h-Distill-Gemini-Thinking
  • OpenClaw: ollama launch openclaw --model jewelzufo/granite4-350m-h-Distill-Gemini-Thinking

granite-4.0-h-350m-DISTILL-gemini-think

This model is a fine-tuned version of ibm-granite/granite-4.0-h-350m trained on high-reasoning conversational data from Gemini 3 Pro.

Model Details

  • Base Model: ibm-granite/granite-4.0-h-350m
  • Fine-tuning Dataset: TeichAI/gemini-3-pro-preview-high-reasoning-1000x
  • Context Length: 1,048,576 tokens (1M)
  • Special Feature: Thinking/Reasoning with <think> tags
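Because the model wraps its reasoning in <think> tags, downstream code typically wants to separate the reasoning from the final answer. A minimal sketch (it assumes the reasoning, when present, appears as a single <think>...</think> span before the answer; the helper name is hypothetical):

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split generated text into (reasoning, answer).

    Assumes reasoning, if present, is wrapped in one
    <think>...</think> span preceding the final answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning span: the whole output is the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

sample = "<think>The user greeted me; respond politely.</think>Hello! I'm doing well."
reasoning, answer = split_thinking(sample)
```

This lets an application log or hide the reasoning while showing only the answer to users.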

Quantized Versions (GGUF)

🔗 GGUF versions available here: granite-4.0-h-350m-DISTILL-gemini-think-GGUF

Format   Relative Size  Use Case
Q2_K     Smallest       Low memory, reduced quality
Q4_K_M   Medium         Best balance (recommended)
Q5_K_M   Larger         Higher quality
Q8_0     Large          Near lossless
F16      Largest        Original precision
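The relative sizes track bits per weight. A rough estimator for a ~350M-parameter model (the bits-per-weight figures are approximate community numbers for llama.cpp quantization types, not values published for this repo):

```python
# Approximate bits per weight for common llama.cpp quantizations.
# Ballpark figures only; actual GGUF sizes also include metadata.
BITS_PER_WEIGHT = {
    "Q2_K": 2.6,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
    "F16": 16.0,
}

def estimated_size_mb(params: float, fmt: str) -> float:
    """Estimate quantized file size in MB for a given parameter count."""
    return params * BITS_PER_WEIGHT[fmt] / 8 / 1e6

# ~350M parameters, as in this model
for fmt in BITS_PER_WEIGHT:
    print(f"{fmt:7s} ~{estimated_size_mb(350e6, fmt):.0f} MB")
```

Even at F16 (~700 MB by this estimate), the model fits comfortably on CPU-only machines, which is the point of the smaller quants.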

Usage

Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("glogwa68/granite-4.0-h-350m-DISTILL-gemini-think")
tokenizer = AutoTokenizer.from_pretrained("glogwa68/granite-4.0-h-350m-DISTILL-gemini-think")

# Build the prompt with the model's chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)

# Generate; the model may emit its reasoning inside <think>...</think> tags
outputs = model.generate(inputs.to(model.device), max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Ollama (GGUF)

ollama run hf.co/glogwa68/granite-4.0-h-350m-DISTILL-gemini-think-GGUF:Q4_K_M

llama.cpp

llama-cli --hf-repo glogwa68/granite-4.0-h-350m-DISTILL-gemini-think-GGUF --hf-file granite-4.0-h-350m-distill-gemini-think-q4_k_m.gguf -p "Hello"

Training Details

  • Epochs: 3
  • Learning Rate: 2e-5
  • Batch Size: 1 (with gradient accumulation)
  • Precision: FP16
  • Hardware: Multi-GPU with DeepSpeed ZeRO-3
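The hyperparameters above correspond roughly to a DeepSpeed ZeRO-3 config like the following. This is a sketch reconstructed from the listed settings, not the published training config; the gradient accumulation step count and the AdamW optimizer choice are assumptions:

```json
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 8,
  "fp16": { "enabled": true },
  "optimizer": {
    "type": "AdamW",
    "params": { "lr": 2e-5 }
  },
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": true,
    "contiguous_gradients": true
  }
}
```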

License

Apache 2.0