This repository provides the GGUF-quantized versions of nidum-gemma-3-27B-Instruct-Uncensored
for use with Ollama and other GGUF-compatible backends.
The following quantized versions are available:
| Model Variant | Size | Context Window |
|---|---|---|
| q8_0 | ~28GB | 128K |
| q6_k | ~21GB | 128K |
| q5_k_m | ~18GB | 128K |
| q3_k_m | ~12GB | 128K |
If you haven’t installed Ollama, do so using:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
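To confirm the installation succeeded, you can check the CLI version:

```bash
# Prints the installed Ollama version; fails if the CLI is not on PATH.
ollama --version
```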
After installing Ollama, run the model directly:
```bash
ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0
```
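For scripted, non-interactive use, you can also pass a prompt as an argument instead of entering the interactive session (the prompt text below is just an example):

```bash
# One-shot generation: prints the model's response and exits.
ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0 "Explain GGUF quantization in two sentences."
```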
To use another quantization, replace `q8_0` with `q6_k`, `q5_k_m`, or `q3_k_m`:

```bash
ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k
```
If you prefer to download and store the models locally:
```bash
ollama pull nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k
```
This will store the model on your system for offline use.
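You can verify the download by listing the models stored locally:

```bash
# Lists all locally stored models with their tags and sizes.
ollama list

# Shows details (parameters, template, license) for a specific tag.
ollama show nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k
```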
You can adjust temperature and stopping conditions from inside an interactive session with the `/set parameter` command:

```
ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0
>>> /set parameter temperature 0.7
>>> /set parameter stop "<end_of_turn>"
```
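To make these settings persistent, one option is to bake them into a derived model with a Modelfile. A minimal sketch; the tag `gemma-uncensored-tuned` is just an example name, not part of this repository:

```bash
# Write a Modelfile that layers sampling parameters on top of the base model.
cat > Modelfile <<'EOF'
FROM nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0
PARAMETER temperature 0.7
PARAMETER stop "<end_of_turn>"
EOF

# Create the derived model ("gemma-uncensored-tuned" is an example name) and run it.
ollama create gemma-uncensored-tuned -f Modelfile
ollama run gemma-uncensored-tuned
```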