
🧠 nidum-gemma-3-4B-it-uncensored (GGUF)

This repository provides the GGUF-quantized versions of nidum-gemma-3-4B-it-uncensored for use with Ollama and other GGUF-compatible backends.

📌 Available Quantized Models

The following quantized versions are available:

Model Variant    Size
q8_0             ~7GB
q6_k             ~5GB
q5_k_m           ~4GB
q3_k_m           ~3GB

🚀 Installation

Step 1: Install Ollama

If you haven’t installed Ollama, do so using:

curl -fsSL https://ollama.com/install.sh | sh
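
You can confirm the installation succeeded by checking the CLI version:

ollama --version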

⚡ Usage

Run a Quantized Model

After installing Ollama, run the model directly:

ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q8_0

To use another quantization, replace q8_0 with q6_k, q5_k_m, or q3_k_m:

ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q6_k
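
Besides the CLI, Ollama serves a local REST API (on port 11434 by default), which is handy for scripting. A minimal sketch against the /api/generate endpoint (the prompt text is just an example):

curl http://localhost:11434/api/generate -d '{
  "model": "nidumai/nidum-gemma-3-4b-it-uncensored:q6_k",
  "prompt": "Explain GGUF quantization in one sentence.",
  "stream": false
}'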

📥 Downloading the Model Locally

If you prefer to download and store the models locally:

ollama pull nidumai/nidum-gemma-3-4b-it-uncensored:q6_k

This will store the model on your system for offline use.
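
You can verify the download, and later reclaim disk space, with:

ollama list
ollama rm nidumai/nidum-gemma-3-4b-it-uncensored:q6_k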


📜 Model Details

  • Base Model: Gemma-3-4B
  • Training Data: Multi-turn conversational AI
  • Uncensored: Designed for unrestricted responses
  • Inference Engine: Optimized for GGUF-compatible backends
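
Because the model is trained on multi-turn conversational data, Ollama's /api/chat endpoint is a natural fit. A minimal sketch (the message contents are just examples):

curl http://localhost:11434/api/chat -d '{
  "model": "nidumai/nidum-gemma-3-4b-it-uncensored:q8_0",
  "stream": false,
  "messages": [
    {"role": "user", "content": "Give me a two-sentence summary of GGUF."},
    {"role": "assistant", "content": "GGUF is a binary file format for storing quantized LLM weights..."},
    {"role": "user", "content": "Now explain it to a beginner."}
  ]
}'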

⚙️ Advanced Configuration

The ollama run command does not accept --temperature or --stop flags; instead, adjust these parameters from inside an interactive session:

ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q8_0
/set parameter temperature 0.7
/set parameter stop "<end_of_turn>"
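
To make these settings persistent, they can be baked into a Modelfile; a minimal sketch (the name nidum-custom is just an example):

FROM nidumai/nidum-gemma-3-4b-it-uncensored:q8_0
PARAMETER temperature 0.7
PARAMETER stop "<end_of_turn>"

Build and run the customized model:

ollama create nidum-custom -f Modelfile
ollama run nidum-custom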

🌍 Join the Community

Connect with us and stay updated:

  • GitHub: NidumAI
  • Hugging Face: Nidum
  • LinkedIn: Nidum AI
  • X (Twitter): @ainidum
  • Telegram: bitsCrunch
  • Discord: bitsCrunch Community


📞 Support & Issues

For issues or support, please open an issue on the repository or contact the maintainers. 🚀

Enjoy using nidum-gemma-3-4B-it-uncensored! 🎉