# nidum-gemma-3-27B-Instruct-Uncensored (GGUF)
This repository provides the GGUF-quantized versions of nidum-gemma-3-27B-Instruct-Uncensored
for use with Ollama and other GGUF-compatible backends.
## Available Quantized Models
The following quantized versions are available:
| Model Variant | Size |
|---------------|-------|
| q8_0          | ~28GB |
| q6_k          | ~21GB |
| q5_k_m        | ~18GB |
| q3_k_m        | ~12GB |
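The model file must fit in RAM (or VRAM, if offloading layers to a GPU), with a few GB of headroom for the context cache. On Linux, a quick way to check available memory before picking a variant:

```bash
# Show total and available system memory in human-readable units.
free -h
```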
## Installation
### Step 1: Install Ollama
If you haven't installed Ollama yet, do so with:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```
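To confirm the install succeeded, check that the CLI responds:

```bash
# Print the installed Ollama version; on most Linux systems the
# installer also sets up and starts the background server.
ollama --version
```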
## Usage

### Run a Quantized Model
After installing Ollama, run the model directly:

```bash
ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0
```
To use another quantization, replace `q8_0` with `q6_k`, `q5_k_m`, or `q3_k_m`:

```bash
ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k
```
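`ollama run` also accepts a prompt as an argument for non-interactive, one-shot generation instead of opening the chat REPL:

```bash
# One-shot generation: the response is printed to stdout and the
# command exits, which is convenient for scripting.
ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k \
  "Summarize the GGUF format in two sentences."
```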
## Downloading the Model Locally
If you prefer to download and store the models locally:

```bash
ollama pull nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k
```
This will store the model on your system for offline use.
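Once pulled, you can manage local copies with the standard Ollama commands:

```bash
# List locally stored models with their tags and sizes.
ollama list

# Remove a quantization you no longer need to reclaim disk space.
ollama rm nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k
```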
## Model Details
- **Base Model:** Gemma-3-27B
- **Training Data:** Multi-turn conversational AI
- **Uncensored:** Designed for unrestricted responses
- **Inference Engine:** Optimized for GGUF-compatible backends
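Ollama also serves the model over a local HTTP API (port 11434 by default), so it can be called programmatically. A minimal sketch, assuming the `q6_k` tag has already been pulled:

```bash
# Query the local Ollama server's /api/generate endpoint.
# "stream": false returns one JSON object instead of a token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k",
  "prompt": "Hello!",
  "stream": false
}'
```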
## Advanced Configuration
You can adjust sampling parameters such as temperature and stop sequences from inside an interactive session. Note that `ollama run` does not accept `--temperature` or `--stop` flags; use the REPL's `/set parameter` command instead:

```bash
ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0
```

Then, at the `>>>` prompt:

```
/set parameter temperature 0.7
/set parameter stop "<end_of_turn>"
```
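For settings you want to persist across sessions, the same parameters can be baked into a derived model via a Modelfile (the tag name `my-gemma-uncensored` below is just an example):

```
# Modelfile: derive a variant with fixed sampling parameters.
FROM nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0
PARAMETER temperature 0.7
PARAMETER stop "<end_of_turn>"
```

```bash
# Build the derived model from the Modelfile, then run it.
ollama create my-gemma-uncensored -f Modelfile
ollama run my-gemma-uncensored
```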