This repository provides the GGUF-quantized versions of nidum-gemma-3-4B-it-uncensored
for use with Ollama and other GGUF-compatible backends.
The following quantized versions are available:
| Model Variant | Size |
|---|---|
| q8_0 | ~7GB |
| q6_k | ~5GB |
| q5_k_m | ~4GB |
| q3_k_m | ~3GB |
If you haven't installed Ollama, do so using:
curl -fsSL https://ollama.com/install.sh | sh
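To confirm the installation succeeded, print the client version:
ollama --version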
After installing Ollama, run the model directly:
ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q8_0
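The first run downloads the weights and then drops you into an interactive chat session. Two built-in session commands worth knowing:
/? (list the available in-session commands)
/bye (exit the session)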
To use another quantization, replace q8_0 with q6_k, q5_k_m, or q3_k_m:
ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q6_k
If you prefer to download and store the models locally:
ollama pull nidumai/nidum-gemma-3-4b-it-uncensored:q6_k
This will store the model on your system for offline use.
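To check which quantizations are stored locally, or to remove one you no longer need, the standard Ollama management commands apply:
ollama list
ollama rm nidumai/nidum-gemma-3-4b-it-uncensored:q6_k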
You can adjust the temperature and stop sequences from inside an interactive session with /set parameter:
ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q8_0
>>> /set parameter temperature 0.7
>>> /set parameter stop "<end_of_turn>"
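To make these settings persist across sessions, you can bake them into a derived model with a Modelfile. A minimal sketch, assuming you name the derived model custom-gemma (an arbitrary example): create a file named Modelfile containing:
FROM nidumai/nidum-gemma-3-4b-it-uncensored:q8_0
PARAMETER temperature 0.7
PARAMETER stop "<end_of_turn>"
Then build and run it:
ollama create custom-gemma -f Modelfile
ollama run custom-gemma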
Connect with us and stay updated:
- GitHub: NidumAI
- Hugging Face: Nidum
- LinkedIn: Nidum AI
- X (Twitter): @ainidum
- Telegram: bitsCrunch
- Discord: bitsCrunch Community
For issues or support, please open an issue on the repository or contact the maintainers.