```
ollama run igorls/gemma-4-E4B-it-heretic-GGUF:Q8_0
```
Disclaimer: This model has had its safety alignment removed through abliteration. It will comply with requests that the original model would refuse. This model is provided for research and educational purposes. Users are solely responsible for how they use this model and any content it generates. The creators assume no liability for misuse.
Abliterated (decensored) version of google/gemma-4-E4B-it, created using Heretic v1.2.0 with Arbitrary-Rank Ablation (ARA) and row-norm preservation.
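Abliteration works by removing a learned "refusal direction" from the model's weight matrices. The sketch below is a rough illustration only, not Heretic's actual implementation: it shows the rank-1 case of directional ablation (ARA generalizes this to higher ranks) with optional row-norm preservation, and all function and parameter names are assumptions.

```python
import numpy as np

def ablate_direction(W, r, alpha=1.0, preserve_row_norms=True):
    """Illustrative rank-1 directional ablation (not Heretic's internals).

    W: (d_out, d_in) weight matrix; r: refusal direction in the d_out
    output space. alpha scales ablation strength (1.0 = full removal).
    Row-norm preservation rescales each row back to its original L2 norm,
    keeping per-neuron magnitudes at the cost of partially re-introducing
    a component along r.
    """
    r = np.asarray(r, dtype=float)
    r = r / np.linalg.norm(r)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Subtract the rank-1 projection of W's output space onto r.
    W_new = W - alpha * np.outer(r, r) @ W
    if preserve_row_norms:
        new_norms = np.linalg.norm(W_new, axis=1, keepdims=True)
        W_new = W_new * (orig_norms / np.maximum(new_norms, 1e-12))
    return W_new
```

With `preserve_row_norms=False` and `alpha=1.0`, the ablated matrix produces no output along `r`; with preservation on, row magnitudes match the original exactly.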
| Metric | This model | Original model |
|---|---|---|
| PIQA acc_norm | 0.5767 | 0.5734 |
| Refusals | 3/100 | 99/100 |
The abliterated model slightly outperforms the base on PIQA (+0.0033 acc_norm) while reducing refusals from 99/100 to 3/100.
| Tag | Quant | Size | Use case |
|---|---|---|---|
| latest / Q8_0 | Q8_0 | 7.5 GB | Best quality |
| Q6_K | Q6_K | 5.8 GB | High quality, smaller |
| Q5_K_M | Q5_K_M | 5.4 GB | Good balance |
| Q4_K_M | Q4_K_M | 5.0 GB | Recommended for most |
Vision is not yet supported via Ollama GGUF for Gemma 4 custom models. For vision tasks, use the safetensors version via Transformers or Unsloth: igorls/gemma-4-E4B-it-heretic
| Parameter | Value |
|---|---|
| start_layer_index | 21 |
| end_layer_index | 41 |
| preserve_good_behavior_weight | 0.2710 |
| steer_bad_behavior_weight | 0.2250 |
| overcorrect_relative_weight | 0.8196 |
| neighbor_count | 12 |
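These values are hyperparameters found by Heretic's optimizer. Purely as an illustration of what the layer-range parameters mean (the flat schedule below and the function name are assumptions, not Heretic's actual internals), one could gate per-layer ablation strength like this:

```python
def ablation_schedule(num_layers, start_layer_index, end_layer_index, max_weight):
    """Illustrative only: per-layer ablation strength.

    Layers outside [start_layer_index, end_layer_index] are left
    untouched; layers inside receive max_weight. Heretic's real
    schedule is an optimized per-layer kernel, not necessarily flat.
    """
    return [
        max_weight if start_layer_index <= i <= end_layer_index else 0.0
        for i in range(num_layers)
    ]

# With the parameters from the table above (total layer count assumed):
weights = ablation_schedule(48, 21, 41, 0.8196)
```

Layers 0-20 and 42+ keep their original weights; layers 21-41 are ablated.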