223 pulls · Updated 3 weeks ago

Gemma 4 26B MoE (Google DeepMind) with thinking mode enabled. A Mixture-of-Experts model with 25.2B total / 3.8B active parameters and a 256K context window. Supports text and image input. Knowledge cutoff: January 2025.

Capabilities: vision · tools · thinking
ollama run bjoernb/gemma4-26b-think
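Besides the CLI, a locally pulled model can be queried through Ollama's HTTP API (default `http://localhost:11434/api/chat`). A minimal sketch of a chat request payload, assuming an Ollama build that exposes the `think` option for thinking-capable models:

```python
import json

# Chat request for the local Ollama server (POST to http://localhost:11434/api/chat).
# The "think" flag is an assumption: it requires an Ollama version that
# supports thinking-mode control for this model.
payload = {
    "model": "bjoernb/gemma4-26b-think",
    "messages": [
        {"role": "user", "content": "Summarize mixture-of-experts in one sentence."}
    ],
    "think": True,    # enable the model's thinking mode
    "stream": False,  # return a single JSON response instead of a token stream
}

body = json.dumps(payload)
```

The serialized `body` can then be sent with any HTTP client, e.g. `curl http://localhost:11434/api/chat -d "$BODY"`.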

Applications

Claude Code: ollama launch claude --model bjoernb/gemma4-26b-think
OpenClaw: ollama launch openclaw --model bjoernb/gemma4-26b-think
Hermes Agent: ollama launch hermes --model bjoernb/gemma4-26b-think
Codex: ollama launch codex --model bjoernb/gemma4-26b-think
OpenCode: ollama launch opencode --model bjoernb/gemma4-26b-think
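Since the model accepts image input, a vision request over the API attaches base64-encoded image bytes in a message's "images" field. A sketch with placeholder bytes standing in for a real file (the request shape follows the Ollama chat API; actual vision support depends on the server build):

```python
import base64
import json

# Placeholder bytes standing in for a real image file; in practice, read the
# file with open("photo.png", "rb").read().
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
encoded = base64.b64encode(fake_png).decode("ascii")

# The Ollama chat API takes images as a list of base64 strings on the message.
request = {
    "model": "bjoernb/gemma4-26b-think",
    "messages": [
        {"role": "user", "content": "What is in this image?", "images": [encoded]}
    ],
    "stream": False,
}

body = json.dumps(request)
```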

