🧠 nidum-gemma-3-4B-it-uncensored (GGUF)

This repository provides the GGUF-quantized versions of nidum-gemma-3-4B-it-uncensored for use with Ollama and other GGUF-compatible backends.
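These GGUF files also work with other backends that support the format. As a minimal sketch using llama.cpp (the filename below is hypothetical; substitute whichever quantized file you actually obtained):

# Filename is hypothetical; point -m at the GGUF file you downloaded
llama-cli -m nidum-gemma-3-4B-it-uncensored.q6_k.gguf -p "Hello, who are you?" -n 128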

πŸ“Œ Available Quantized Models

The following quantized versions are available:

Model Variant    Size
q8_0             ~7GB
q6_k             ~5GB
q5_k_m           ~4GB
q3_k_m           ~3GB

πŸš€ Installation

Step 1: Install Ollama

If you haven’t installed Ollama, do so using:

curl -fsSL https://ollama.com/install.sh | sh
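To confirm the installation, check that the CLI responds (the exact version string will vary by release):

ollama --version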

⚑ Usage

Run a Quantized Model

After installing Ollama, run the model directly:

ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q8_0
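The run command also accepts a prompt as a final argument for a one-off, non-interactive response; the prompt below is just a placeholder:

ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q8_0 "Summarize the rules of chess in three sentences."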

To use another quantization, replace q8_0 with q6_k, q5_k_m, or q3_k_m:

ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q6_k

πŸ“₯ Downloading the Model Locally

If you prefer to download and store the models locally:

ollama pull nidumai/nidum-gemma-3-4b-it-uncensored:q6_k

This will store the model on your system for offline use.
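You can confirm which models and tags are stored locally with:

ollama list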


πŸ“œ Model Details

  • Base Model: Gemma-3-4B
  • Training Data: Multi-turn conversational data
  • Uncensored: Designed for unrestricted responses
  • Inference Engine: Optimized for GGUF-compatible backends

βš™οΈ Advanced Configuration

You can adjust temperature and stopping conditions at runtime. The ollama run command does not accept sampling flags directly; instead, set them from inside an interactive session with /set parameter (typed at the >>> prompt):

ollama run nidumai/nidum-gemma-3-4b-it-uncensored:q8_0
/set parameter temperature 0.7
/set parameter stop "<end_of_turn>"
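To make these settings persistent, you can also bake them into a new model via a Modelfile (the name my-nidum below is just an example):

# Modelfile
FROM nidumai/nidum-gemma-3-4b-it-uncensored:q8_0
PARAMETER temperature 0.7
PARAMETER stop "<end_of_turn>"

Then create and run the customized model:

ollama create my-nidum -f Modelfile
ollama run my-nidum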

🌍 Join the Community

Connect with us and stay updated:

  • GitHub: NidumAI
  • Hugging Face: Nidum
  • LinkedIn: Nidum AI
  • X (Twitter): @ainidum
  • Telegram: bitsCrunch
  • Discord: bitsCrunch Community


πŸ“ž Support & Issues

For issues or support, please open an issue on the repository or contact the maintainers. πŸš€

Enjoy using nidum-gemma-3-4B-it-uncensored! πŸŽ‰