6-bit (Q6_K) GGUF port of MedGemma-1.5-4B-it, a 4B multimodal instruction-tuned model based on the Gemma 3 architecture, offering improved medical reasoning and expanded imaging/document understanding.

ollama run shokrydev/anoner-medgemma



This model is an import of unsloth/medgemma-1.5-4b-it-Q6_K.gguf from Hugging Face: a 6-bit quantized (Q6_K) port of MedGemma 1.5 4B IT, a 4B-parameter multimodal instruction-tuned model built on the Gemma 3 decoder-only transformer architecture and described as the first open-weights reasoning model published by Google.

MedGemma 1.5 4B is an updated version of MedGemma 1 4B that retains the same architecture while improving performance on medical text reasoning and expanding support for medical imaging and document-understanding tasks, as reflected in the published benchmarking results.

At 3.19 GB, the quantized model is compact enough to load on a local GPU alongside other models, as in the Anoner project.
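Once the model is pulled with the `ollama run` command above, it can also be queried programmatically through Ollama's local REST API (by default at `http://localhost:11434/api/generate`). The sketch below builds such a request; the prompt is purely illustrative, and actually sending the request assumes an Ollama server is running locally.

```python
import json
import urllib.request

def build_generate_request(prompt: str,
                           model: str = "shokrydev/anoner-medgemma",
                           host: str = "http://localhost:11434"):
    """Build a request for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        # Return one JSON object instead of a token-by-token stream.
        "stream": False,
    }
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending the request requires a running Ollama server, e.g.:
# with urllib.request.urlopen(build_generate_request("Summarize this radiology report: ...")) as resp:
#     print(json.loads(resp.read())["response"])
```

The same payload shape works for any locally pulled Ollama model; only the `model` tag changes.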
