rjmalagon/gemma-3:27b-it-q8_0

162 pulls · 6 months ago

Bfloat16 version of the Gemma 3 12B and 27B models

vision

4abd8d0ad6f6 · 30GB

gemma3 · 27B · Q8_0
clip · 423M · F32
License: Gemma Terms of Use (last modified: February 21, 2024)

Params: { "stop": [ "<end_of_turn>" ] }

Template (truncated preview): {{- range $i, $_ := .Messages }} {{- $last := eq (len (slice $.Messages $i)) 1 }} {{- if or (eq .Rol
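The params set `<end_of_turn>` (Gemma's end-of-turn marker) as a stop sequence, so generation halts before the marker is emitted. As a minimal sketch of what a stop sequence does — this is an illustration, not Ollama's actual implementation — a runner truncates the output at the earliest stop match:

```python
def apply_stop(text: str, stops: list[str]) -> str:
    """Truncate generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)  # keep only text before the first stop match
    return text[:cut]

# The stop marker and everything after it are dropped from the reply.
print(apply_stop("The answer is 42.<end_of_turn>leftover tokens", ["<end_of_turn>"]))
```

In practice the runtime checks the stop list against the token stream as it decodes, but the observable effect is the same truncation.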

Readme

Bfloat16 version of the Gemma 3 12B and 27B models. Requires Ollama > v0.6.0 and a GPU that supports BF16 (either natively, or via llama.cpp's on-the-fly BF16-to-FP32 conversion).
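A sketch of checking the Ollama version requirement before pulling this tag. The `min_version` helper and the `RUN_PULL` opt-in guard are conventions of this sketch (the weights are ~30GB, so the download is off by default); `ollama --version`, `ollama pull`, and `ollama run` are the standard CLI commands.

```shell
# Succeeds when version $1 >= $2 (relies on `sort -V`, i.e. GNU coreutils).
min_version() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if command -v ollama >/dev/null 2>&1; then
  v=$(ollama --version | grep -oE '[0-9]+\.[0-9]+(\.[0-9]+)?' | head -n1)
  # The readme asks for strictly greater than v0.6.0; this is a >= check,
  # so v0.6.0 itself would also pass here.
  if min_version "$v" "0.6.0"; then
    echo "Ollama $v meets the version requirement"
    if [ "${RUN_PULL:-0}" = "1" ]; then   # set RUN_PULL=1 to download (~30GB)
      ollama pull rjmalagon/gemma-3:27b-it-q8_0
      ollama run rjmalagon/gemma-3:27b-it-q8_0 "Summarize bfloat16 in one sentence."
    fi
  else
    echo "Ollama $v is too old for this model; upgrade past v0.6.0"
  fi
else
  echo "ollama not found on PATH"
fi
```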

Includes the vision component.

Full model info at https://ollama.com/library/gemma3, https://huggingface.co/google/gemma-3-27b-it, and https://huggingface.co/google/gemma-3-12b-it.