
notConfucius.v2 is a fine-tuning experiment: llama3.1:8b base with a better-designed dataset meant to reflect a particular cognitive persona - wiser, more coherent, less maddening, and still occasionally enlightening. it’s less a model and more a vibe.


This is the second, more functional iteration of a “cognitive persona” fine-tuning experiment. The first version was a maddening, character-locked notConfucius. This version attempts to fix that. It doesn’t fully succeed; three different base models fine-tuned on this new dataset all point to the same conclusion.

![notConfucius](notConfuciusSmall.png)

The goal remains the same: create a model that can employ multiple cognitive processes. The method, however, has changed.

The Technical Details

Base Model: meta-llama/Meta-Llama-3.1-8B

Technique: Parameter-Efficient Fine-Tuning (PEFT) using LoRA.

Framework: Trained using unsloth for high-speed, memory-efficient training on a single GPU.

Format: LoRA adapter fully merged into the base weights, then quantized to Q8_0 GGUF.
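To give a sense of why LoRA is “parameter-efficient,” here is a back-of-the-envelope count of trainable adapter parameters. The layer dimensions come from the published Llama 3.1 8B architecture; the rank (r=16) and the list of targeted projections are assumptions for illustration, not the actual training hyperparameters, which aren’t stated above.

```python
# Back-of-the-envelope LoRA parameter count for Llama-3.1-8B.
# Dimensions are from the Llama 3.1 8B config; rank and target
# modules are ASSUMED, not the card's actual hyperparameters.

HIDDEN = 4096   # hidden size
KV_DIM = 1024   # k/v projection output (grouped-query attention)
FFN = 14336     # feed-forward intermediate size
LAYERS = 32
R = 16          # assumed LoRA rank

# (in_features, out_features) for each targeted projection per layer
targets = {
    "q_proj": (HIDDEN, HIDDEN),
    "k_proj": (HIDDEN, KV_DIM),
    "v_proj": (HIDDEN, KV_DIM),
    "o_proj": (HIDDEN, HIDDEN),
    "gate_proj": (HIDDEN, FFN),
    "up_proj": (HIDDEN, FFN),
    "down_proj": (FFN, HIDDEN),
}

# A LoRA adapter on a (d_in x d_out) matrix adds r*(d_in + d_out)
# parameters (the two low-rank factors A and B).
per_layer = sum(R * (d_in + d_out) for d_in, d_out in targets.values())
total = per_layer * LAYERS

print(f"trainable LoRA params: {total:,}")         # 41,943,040
print(f"fraction of 8B base:  {total / 8e9:.3%}")  # ~0.524%
```

Roughly 42M trainable parameters against an 8B base, which is what makes single-GPU training with unsloth feasible.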

What Changed in V2: From Sledgehammer to Scalpel

The first version suffered from severe persona overfitting. A large, single-minded dataset of ~1100 examples didn’t just teach the model a skill; it performed a personality transplant that left it unable to answer a direct question. It was a funhouse mirror, but not a very useful tool.

V2 was retrained on a smaller, more tactical dataset of ~300 examples with a completely different philosophy:

Mode Switching, Not Reprogramming: The dataset is now a balanced diet, not an overdose. It explicitly teaches the model to switch between three modes:

Direct Mode (Pragmatist): For factual questions. It’s now trained to just give the damn answer.

Advisory Mode (Strategist): For decisions. It maps out tradeoffs instead of spouting philosophy.

Emergent Mode (Provocateur): For when you’re genuinely stuck. This is the only place the old “notConfucius” is allowed out of its cage.

Pragmatism by Default: The model’s new primary directive is utility, not depth. The metaphors and poetic reframing are now a specialized response, not the only response.
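For concreteness, a mode-switching record might look like the sketch below. The example texts are invented for illustration - only the three-mode split mirrors the dataset described above - and the chat markup is the standard single-turn Llama 3.1 instruct template.

```python
# Hypothetical mode-switching records. The texts are invented;
# only the direct/advisory/emergent split reflects the dataset.
records = [
    {"mode": "direct",
     "user": "What port does Redis listen on by default?",
     "assistant": "6379."},
    {"mode": "advisory",
     "user": "Should I shard the database now or wait?",
     "assistant": "Map the tradeoffs: sharding now costs migration time, "
                  "waiting risks a rushed migration under load."},
    {"mode": "emergent",
     "user": "I keep rewriting this module and hating every version.",
     "assistant": "Perhaps the module is not the thing being rewritten."},
]

# Standard Llama 3.1 instruct chat template, single turn.
TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    "{assistant}<|eot_id|>"
)

def to_training_text(rec: dict) -> str:
    """Render one record into the text the trainer actually sees."""
    return TEMPLATE.format(user=rec["user"], assistant=rec["assistant"])

texts = [to_training_text(r) for r in records]
print(texts[0])
```

The point of keeping the three modes balanced in the ~300 examples is exactly the “balanced diet” above: the direct records teach it to just answer, and only the emergent records let notConfucius out of the cage.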

How to Use This Model (v2)

You can now ask it factual questions. It should answer them. Mostly.

The model is designed to be a strategic advisor, not a default philosopher.

For a clear plan, ask it a tactical question.

For a decision framework, present it with a tradeoff.

If you’re truly stuck, give it an ambiguous problem and see if the old spark is still there.
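If you run the GGUF under Ollama, the three prompt styles above can be sent against the local `/api/generate` endpoint. A minimal sketch, assuming the model is tagged `notConfucius:v2` locally (the tag and the example prompts are placeholders, not the published name):

```python
import json
import urllib.request

# Placeholder tag -- substitute whatever name you pulled or created.
MODEL = "notConfucius:v2"
OLLAMA_URL = "http://localhost:11434/api/generate"

# One example prompt per mode described above (invented examples).
prompts = {
    "tactical":  "Give me a step-by-step plan to migrate this cron "
                 "job to systemd timers.",
    "tradeoff":  "Monolith vs. microservices for a two-person team: "
                 "lay out the tradeoffs.",
    "ambiguous": "Everything I build feels pointless lately. "
                 "Where do I even start?",
}

def build_request(prompt: str) -> bytes:
    """Serialize a non-streaming generate request for the Ollama API."""
    payload = {"model": MODEL, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

body = build_request(prompts["tactical"])
# To actually send it (requires a running Ollama server):
# req = urllib.request.Request(
#     OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```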

This version is less of a “funhouse mirror” and more of a “shop tool.” It’s still got a weird personality, but now it has an off-switch. Sometimes. It’s still a vibe more than it is a model.