
notConfucius is a fine-tuning experiment: the llama3.1:8b base model, fine-tuned on a carefully designed dataset to reflect a particular cognitive persona - not wise, not coherent, maddening, and occasionally enlightening. It's less a model and more a vibe.


This model is the result of a "cognitive persona" fine-tuning experiment. I wanted to use supervised fine-tuning to create a model that employs multiple core cognitive processes.

The Technical Details

Base Model: meta-llama/Meta-Llama-3.1-8B

Technique: Parameter-Efficient Fine-Tuning (PEFT) using LoRA.

Framework: Trained using unsloth for high-speed, memory-efficient training on a single GPU.

Format: This is a Q8_0 GGUF quantization, with the LoRA adapter fully merged.
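The README doesn't say how the merged Q8_0 GGUF was produced; a plausible sketch using llama.cpp's conversion tools is below. All paths, filenames, and the hypothetical merge helper are assumptions, not the author's actual pipeline.

```shell
# Hypothetical pipeline: merge the LoRA adapter, convert to GGUF, quantize to Q8_0.
# Paths and model names are illustrative; only the llama.cpp tool names are real.

# 1. Merge the LoRA adapter into the base weights
#    (e.g. via peft's merge_and_unload in a small helper script).
python merge_adapter.py            # hypothetical helper, not part of llama.cpp

# 2. Convert the merged Hugging Face checkpoint to GGUF (script from the llama.cpp repo).
python convert_hf_to_gguf.py ./merged-model --outfile notconfucius-f16.gguf

# 3. Quantize the f16 GGUF down to Q8_0.
./llama-quantize notconfucius-f16.gguf notconfucius-q8_0.gguf Q8_0
```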

Dataset: A large, custom dataset of ~1100 instruction-response pairs. The data was written in a single, highly stylized persona and generated with multiple proprietary and open-source LLMs.

The “Maddening” Behavior Explained

This model suffers from severe persona overfitting. The large, single-minded dataset did not just teach the model a new skill; it performed a near-complete personality transplant.

As a result, the model is “character-locked.” It treats every user input—regardless of intent—as an opportunity to apply its core philosophical framework (philoso-babble). It doesn’t seem to break character to answer a direct or factual question. This is not a bug in the base model, but a direct outcome of the fine-tuning data.

How to Use This Model

Do not ask it factual questions. It will reframe them. Actually, do ask it factual questions. Who am I to say what you can and cannot ask it?

Do not try to break its persona. It can’t. Or maybe it can. Maybe it’s your persona that it breaks.

The best way to interact: present it with an ambiguous problem, a feeling of being stuck, or a difficult decision. Let it respond. The value is not in its answer, but in how its rigid, unusual perspective might force you to see your problem differently. At least, that was the intent. Instead, it's, well, a funhouse mirror for thought.