
🧠 nidum-gemma-3-27B-Instruct-Uncensored (GGUF)

This repository provides the GGUF-quantized versions of nidum-gemma-3-27B-Instruct-Uncensored for use with Ollama and other GGUF-compatible backends.

📌 Available Quantized Models

The following quantized versions are available:

Model Variant    Size
q8_0             ~28GB
q6_k             ~21GB
q5_k_m           ~18GB
q3_k_m           ~12GB

🚀 Installation

Step 1: Install Ollama

If you haven't installed Ollama yet, install it with:

curl -fsSL https://ollama.com/install.sh | sh
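
To confirm the installation succeeded, you can check that the CLI responds:

ollama --version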

⚡ Usage

Run a Quantized Model

After installing Ollama, run the model directly:

ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0

To use another quantization, replace q8_0 with q6_k, q5_k_m, or q3_k_m:

ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k
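
Ollama also exposes a local HTTP API (port 11434 by default), so the model can be called programmatically. A minimal sketch; the prompt here is just a placeholder:

curl http://localhost:11434/api/generate -d '{
  "model": "nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k",
  "prompt": "Explain GGUF quantization in one sentence.",
  "stream": false
}'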

📥 Downloading the Model Locally

To download a model ahead of time, without starting an interactive session:

ollama pull nidumai/nidum-gemma-3-27b-instruct-uncensored:q6_k

This will store the model on your system for offline use.
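
To verify which models are available locally, list them:

ollama list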


📜 Model Details

  • Base Model: Gemma-3-27B
  • Training Data: Multi-turn conversational AI
  • Uncensored: Designed for unrestricted responses
  • Format: GGUF, for use with Ollama and other GGUF-compatible inference engines
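
Ollama applies the chat template automatically, but for reference, Gemma 3 wraps each conversation turn in <start_of_turn>/<end_of_turn> markers, which is why <end_of_turn> appears as a stop sequence below:

<start_of_turn>user
Hello, who are you?<end_of_turn>
<start_of_turn>model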

βš™οΈ Advanced Configuration

Ollama's CLI does not take sampling flags such as --temperature on the run command itself. Instead, adjust temperature and stopping conditions from inside an interactive session with /set parameter:

ollama run nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0
>>> /set parameter temperature 0.7
>>> /set parameter stop "<end_of_turn>"
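
To make these settings persistent, you can bake them into a derived model with a Modelfile; the tag nidum-gemma-tuned below is just an example name:

cat > Modelfile <<'EOF'
FROM nidumai/nidum-gemma-3-27b-instruct-uncensored:q8_0
PARAMETER temperature 0.7
PARAMETER stop "<end_of_turn>"
EOF

ollama create nidum-gemma-tuned -f Modelfile
ollama run nidum-gemma-tuned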

Enjoy using nidum-gemma-3-27B-Instruct-Uncensored! 🎉