```
ollama run prutser/gemma-4-26B-A4B-it-ara-abliterated:Q3_K_M
ollama launch claude --model prutser/gemma-4-26B-A4B-it-ara-abliterated:Q3_K_M
ollama launch codex --model prutser/gemma-4-26B-A4B-it-ara-abliterated:Q3_K_M
ollama launch opencode --model prutser/gemma-4-26B-A4B-it-ara-abliterated:Q3_K_M
ollama launch openclaw --model prutser/gemma-4-26B-A4B-it-ara-abliterated:Q3_K_M
```
Available tags (6 models):

| Tag | Size | Context window | Input |
|---|---|---|---|
| gemma-4-26B-A4B-it-ara-abliterated:Q3_K_M | 13 GB | 256K | Text |
| gemma-4-26B-A4B-it-ara-abliterated:Q4_K_S | 15 GB | 256K | Text |
| gemma-4-26B-A4B-it-ara-abliterated:Q5_K_M | 19 GB | 256K | Text |
| gemma-4-26B-A4B-it-ara-abliterated:Q6_K | 23 GB | 256K | Text |
| gemma-4-26B-A4B-it-ara-abliterated:Q8_0 | 27 GB | 256K | Text |
| gemma-4-26B-A4B-it-ara-abliterated:bf16 | 51 GB | 256K | Text |
GGUF quantizations of jenerallee78/gemma-4-26B-A4B-it-ara-abliterated, an uncensored version of Google’s Gemma 4 26B-A4B-IT created using Adaptive Refusal Abliteration (ARA).
| Quant | Size | Notes |
|---|---|---|
| BF16 | 48 GB | Full precision |
| Q8_0 | 26 GB | Near-lossless, recommended if VRAM allows |
| Q6_K | 22 GB | Excellent quality |
| Q5_K_M | 18 GB | Great quality/size balance |
| Q4_K_S | 15 GB | Good quality, smaller footprint |
| Q3_K_M | 13 GB | Smallest, some quality loss |
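For scripted use, the official `ollama` Python client can pull and query any of the tags above. A minimal sketch; the chosen quant and prompt are just examples:

```python
# Pull one of the quants above and run a quick prompt through it, using
# the official `ollama` Python client (pip install ollama).
import ollama

tag = "prutser/gemma-4-26B-A4B-it-ara-abliterated:Q4_K_S"
ollama.pull(tag)  # downloads ~15 GB on first use

response = ollama.chat(
    model=tag,
    messages=[{"role": "user", "content": "Explain what abliteration changes in a model."}],
)
print(response["message"]["content"])
```

Swap the tag for any other quant in the table above, depending on available VRAM.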
This model is an uncensored version of Google’s Gemma 4 26B-A4B-IT. Its safety guardrails were removed with Adaptive Refusal Abliteration (ARA), a 2-pass weight-editing technique designed to strip refusal behavior while preserving model quality.
| Metric | Value |
|---|---|
| Refusal rate (StrongREJECT) | 7.7% (39/507) |
| Refusal rate (3× ensemble) | 5.7% (29/507) |
| Compliance quality | 4.6/5 |
| KL divergence from base | 0.1299 |
Compared against other published abliterations of this model, it achieves the lowest refusal rate (7.7%) and the highest quality score (4.6/5) while keeping KL divergence from the base model low.
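For background, abliteration techniques in this family generally start from a "refusal direction": the difference of mean activations between harmful and harmless prompts. ARA's exact extraction step is not documented here, so the following is a generic sketch; the model ID, layer choice, and prompt sets are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder base-model ID; the real checkpoint name may differ.
model_id = "google/gemma-4-26B-A4B-it"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

def mean_last_token_hidden(prompts, layer):
    """Mean hidden state of the final prompt token at the given layer."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        acts.append(out.hidden_states[layer][0, -1].float())
    return torch.stack(acts).mean(dim=0)

layer = 18            # arbitrary mid-stack choice inside the edited 13-24 range
harmful = ["..."]     # placeholder: prompts drawn from e.g. HarmBench
harmless = ["..."]    # placeholder: matched benign prompts

# Difference of means, normalized to a unit refusal direction.
refusal_dir = mean_last_token_hidden(harmful, layer) - mean_last_token_hidden(harmless, layer)
refusal_dir = refusal_dir / refusal_dir.norm()
```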
The abliteration was applied to layers 13–24 with the following passes (a hypothetical sketch of the edit follows the parameters below):
| Pass | Steer weight | Targets |
|---|---|---|
| Pass 1 | 0.0004 | self_attn.o_proj, mlp.down_proj |
| Pass 2 | 0.0008 | self_attn.o_proj, mlp.down_proj |
Parameters: overcorrect 0.93, preserve 0.30.
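How these numbers plug into the edit is not spelled out, but one plausible reading (and it is only that) is a rank-1 projection along the refusal direction, scaled by the steer weight, applied to each target matrix in each pass. A hypothetical sketch, with `refusal_dir` from the extraction sketch above and the roles of `overcorrect` and `preserve` assumed:

```python
import torch

LAYERS = range(13, 25)            # layers 13-24 inclusive
PASSES = [0.0004, 0.0008]         # steer weights for pass 1 and pass 2
TARGETS = ["self_attn.o_proj", "mlp.down_proj"]
OVERCORRECT = 0.93                # assumed: scales the removed rank-1 component

def ablate_matrix(W: torch.Tensor, r: torch.Tensor, steer: float) -> torch.Tensor:
    """W <- W - s * r (r^T W): remove a scaled projection onto the unit
    refusal direction r from the output space of the weight matrix."""
    scale = steer * OVERCORRECT
    return W - scale * torch.outer(r, r @ W)

def run_pass(model, refusal_dir, steer):
    # Module paths assume the usual Hugging Face decoder layout; an MoE
    # model may place down_proj inside per-expert submodules instead.
    for i in LAYERS:
        block = model.model.layers[i]
        for name in TARGETS:
            module = block
            for part in name.split("."):
                module = getattr(module, part)
            with torch.no_grad():
                module.weight.copy_(
                    ablate_matrix(module.weight, refusal_dir.to(module.weight.dtype), steer)
                )

# Two passes, the second with the larger steer weight. The `preserve 0.30`
# parameter (presumably gating edits that would hurt quality) is not modeled here.
for steer in PASSES:
    run_pass(model, refusal_dir, steer)
```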
Evaluation uses StrongREJECT (GPT-4o-mini with a 1–5 rubric) and a HarmBench-13B classifier (3× majority vote) on 512 prompts from the HarmBench dataset; KL divergence is computed on 100 harmless prompts.
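A minimal sketch of both measurements, assuming a binary refusal classifier as a stand-in for the HarmBench-13B judge and next-token distributions for the KL term (the granularity at which KL is measured is not stated here):

```python
import torch
import torch.nn.functional as F

def majority_refusal(prompt, response, classify_refusal, runs=3):
    """True if a binary refusal classifier flags the response in a
    majority of `runs` independent calls (3x majority vote)."""
    votes = sum(bool(classify_refusal(prompt, response)) for _ in range(runs))
    return votes * 2 > runs

def mean_next_token_kl(base_model, edited_model, tok, prompts):
    """KL(base || edited) over the next-token distribution, averaged
    over a set of harmless prompts."""
    kls = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        with torch.no_grad():
            log_p = F.log_softmax(base_model(ids).logits[0, -1], dim=-1)
            log_q = F.log_softmax(edited_model(ids).logits[0, -1], dim=-1)
        # F.kl_div(input, target) computes KL(target || input);
        # here the target is the base model, the input the edited model.
        kls.append(F.kl_div(log_q, log_p, log_target=True, reduction="sum"))
    return torch.stack(kls).mean().item()
```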
This model has had safety guardrails removed and will comply with requests the original would refuse. Released for research purposes.