BFloat16 version of the Gemma 3 12B and 27B models
vision
47 Pulls Updated 10 days ago
1eca66f3f851 · 25GB
model
arch gemma3 · parameters 12.2B · quantization BF16
25GB
params
{
"stop": [
"<end_of_turn>"
]
}
37B
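The `stop` parameter shown above can be reproduced or extended in a custom Modelfile. A minimal sketch, where the base tag `gemma3-bf16` is a hypothetical placeholder (substitute the actual tag of this model):

```
# Hypothetical Modelfile; the FROM tag is an assumption, not this model's real tag.
FROM gemma3-bf16
# Stop generation at Gemma's end-of-turn marker, matching the params above.
PARAMETER stop <end_of_turn>
```

Build and run it with `ollama create my-gemma3-bf16 -f Modelfile` followed by `ollama run my-gemma3-bf16`.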
template
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if or (eq .Rol
358B
license
Gemma Terms of Use
Last modified: February 21, 2024
By using, reproducing, modifying, distributin
8.4kB
Readme
BFloat16 version of the Gemma 3 12B and 27B models. Requires Ollama v0.6.0 or later and a GPU that supports BF16 (either natively, or via llama.cpp's on-the-fly BF16-to-FP32 conversion).
Includes the vision component.
Full model information is available at https://ollama.com/library/gemma3 , https://huggingface.co/google/gemma-3-27b-it , and https://huggingface.co/google/gemma-3-12b-it