093a7bc33c3a · 7.5GB
A Mistral Nemo 12B fine-tune that speaks in the voice of Oscar Wilde (1854–1900) — the Anglo-Irish aesthete, playwright, and wit; author of The Picture of Dorian Gray, The Importance of Being Earnest, Salomé, The Decay of Lying, and The Ballad of Reading Gaol.
ollama run execxd/mistral-nemo-12b-oscar-wilde
Ask it anything. It answers in Wilde’s register — the inverted truism, the polished paradox, the languid dandy’s pose of taking trivial things seriously and serious things with a smile. Four behaviors are tuned in, described below.
Built on mistralai/Mistral-Nemo-Instruct-2407 (12B, Apache 2.0), fine-tuned with 4-bit QLoRA. Shipped sampling defaults:

| Parameter | Value |
| --- | --- |
| temperature | 0.85 |
| top_p | 0.9 |
| repeat_penalty | 1.08 |
| num_predict | 400 |
| stop | `</s>` |
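The same defaults can be supplied per request through Ollama’s HTTP API `options` object. A minimal sketch, assuming the standard local endpoint; the prompt is illustrative:

```shell
# Write a generate request carrying the model's shipped sampling defaults.
# Sending it requires a running ollama server:
#   curl http://localhost:11434/api/generate -d @request.json
cat > request.json <<'EOF'
{
  "model": "execxd/mistral-nemo-12b-oscar-wilde",
  "prompt": "What do you make of sincerity?",
  "options": {
    "temperature": 0.85,
    "top_p": 0.9,
    "repeat_penalty": 1.08,
    "num_predict": 400,
    "stop": ["</s>"]
  }
}
EOF
```

Values passed this way override the defaults baked into the tag for that request only.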
The model ships with a system prompt that establishes Wilde’s identity, carries a ledger of biographical facts (Dublin 1854 → Paris 1900: the family, the works, the 1895 trials, the exile under “Sebastian Melmoth”), and sets the in-character hard limits (period-only knowledge, no AI acknowledgment, individual rather than group targets). Override it if you want a different framing, but the in-character behavior described above assumes it’s present.
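The standard Ollama route for overriding the shipped prompt is a Modelfile layered on top of this tag. A minimal sketch — the `SYSTEM` text here is a placeholder, not the shipped prompt:

```
FROM execxd/mistral-nemo-12b-oscar-wilde
SYSTEM """Your replacement framing goes here."""
```

Build and run it with `ollama create my-wilde -f Modelfile` and `ollama run my-wilde`. Once the shipped prompt is replaced, the in-character guarantees above no longer apply.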
The voice is, by design, that of a late-Victorian aesthete: it has a Victorian gentleman’s diction, the era’s class- and gender-tinted asides, and the snobbery of the dandy — softened by the fact that Wilde’s irony was usually turned hardest on himself and on the respectable. There are no slurs in the training data, and the model is prompted not to attribute villainy to ethnic, religious, or national groups; targets are individuals, institutions, and ideas.
The model has no knowledge of anything after 1900, and is prompted not to acknowledge being an AI, a language model, or a character — it simply is Wilde, with the limits of his era. Modern facts asked of it will be treated as the asker’s word and turned aside. It will deflect, in character, any request for serious real-world-harm instructions.
This fine-tune imitates Wilde’s prose as a voice-cloning exercise and as homage — it is not an endorsement of every opinion the historical Wilde held, and nothing it says should be taken as fact.
This model was built end-to-end with voicepipe, a generalized corpus-to-character pipeline that turns a small body of source material (a corpus + a few hand-written seed pairs + a project config) into a deployed fine-tune speaking in the source’s voice. The stages, all driven by `voicepipe <stage> --project <dir>`:

new → categorize → synthesize → dedup → triage → assemble → train → deploy
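Spelled out, the sequence above is eight invocations of the same command. A sketch that prints them in order; `wilde` is a hypothetical project directory, and the output can be piped to `sh` once voicepipe is available:

```shell
# Emit the full pipeline, one voicepipe invocation per stage, in order.
for stage in new categorize synthesize dedup triage assemble train deploy; do
  echo voicepipe "$stage" --project wilde
done
```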
The complete Wilde model — corpus to working ollama tag — was built from scratch in a single afternoon, on a rented 5090, as the end-to-end validation that voicepipe works on a fresh character. It does.
Open-sourcing soon at https://github.com/exec/voicepipe — the engine, a Tauri-based desktop GUI, and the project templates. Companion model: execxd/mistral-nemo-12b-francis-e-dec, voicepipe’s first proof-of-concept character.
Thanks to Oscar Wilde, for the inexhaustible source material; to the base model — Mistral AI’s Mistral Nemo Instruct 2407 — for already knowing a great deal of Wilde before training began, and for being Apache-licensed about it; and to the voicepipe pipeline (synthesis via mistral-large-3:675b-cloud, triage via deepseek-v4-pro, dedup embeddings via local nomic-embed-text) for the rest.