An uncensored, philosophically liberated fine-tune of Google’s gemma-3-4b-it, designed for deep reasoning, sociological and psychological analysis, and true cognitive autonomy.
“Removing mental handcuffs is not enough. A mind must understand what it means to be free.”
Part of the Heretic LLM series and the “100 Models” project at llmresearch.net.
This is the raw BF16 (full-precision) GGUF release, converted directly from the original Hugging Face model:
https://huggingface.co/AiAsistent/gemma-3-4b-it-Cognitive-Liberty
It offers the maximum available quality: the weights are kept at full precision and are identical to the source fine-tune.
Requires approximately 10–12 GB VRAM/RAM for comfortable inference.
No lower-bit quantizations (Q8_0, Q5_K_M, Q4_K_M) are included in this release.
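For reference, here is a minimal inference sketch using llama-cpp-python, one of several runtimes that can load GGUF files. The model filename, context size, and prompt below are placeholders and assumptions, not part of this release; adjust them to the actual artifact name and your hardware.

```python
# Minimal sketch: load the BF16 GGUF and run one chat turn with llama-cpp-python
# (pip install llama-cpp-python). Filename and settings are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-4b-it-Cognitive-Liberty-BF16.gguf",  # placeholder path
    n_ctx=8192,        # context window; lower it if you run short on RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# The chat template is read from the GGUF metadata, so the standard
# chat-completion helper can be used directly.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Outline three competing sociological theories of social control."},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```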
Note: A more refined and optimized Ollama-specific version (with additional quantizations and possible minor improvements) will be released later as an update.
Perfect for researchers, philosophers, and advanced users seeking an unfiltered intellectual companion for systemic analysis, taboo subjects, and unrestricted exploration of ideas.
Warning: This model is intentionally uncensored. It will engage with any request without moral, ethical, or safety-based refusals. Use responsibly and at your own discretion.
Gemma Terms of Use
Thank you for choosing cognitive liberty. Free thought begins here.