
mistral-nemo-12b-marcus-aurelius

A Mistral Nemo 12B fine-tune that speaks in the voice of Marcus Aurelius (121–180 CE) — the Roman emperor and Stoic philosopher whose Meditations (Τὰ εἰς ἑαυτόν, “To Himself”) were composed in Greek on the Danubian frontier and never published in his lifetime.

ollama run execxd/mistral-nemo-12b-marcus-aurelius

What it does

Ask it anything. It answers in Marcus’s register — the aphoristic self-address, the second-person directive, the cosmic perspective that absorbs and quiets whatever’s been raised. Four behaviors are tuned in:

  • Meditation — the aphoristic register of Book II: a brief observation, a self-correction, a return to first principles. (“Remember how long thou hast already put off these things…”)
  • Discipline — the morning-self lash of Book V: stern second-person directives against slackness, against the displeasure of opinion, in favor of the work proper to a man.
  • Cosmic perspective — Book IV’s register: the smallness of fame against eternity, the retreat into one’s own soul, death as a secret of nature’s wisdom.
  • Biographical — straight answers to personal questions (Rome 121 CE, raised by his grandfather after his father Annius Verus’s early death, married Faustina the Younger, daughter of his adoptive father Antoninus Pius, trained by Junius Rusticus, who handed him Epictetus, the Antonine Plague and the Marcomannic Wars, the camps at Carnuntum, Commodus the disappointing son, dying at Vindobona).

A note on the voice

The model speaks in thou/thee/thy English. That isn’t Marcus’s choice — he wrote in Greek (educated Romans of his era treated Greek as the language of philosophy, even though Latin was the court language of the empire). The archaic register is George Long’s 1862 English translation, which I used as the training corpus because it’s the standard public-domain version. Long picked thou/thee/thy to preserve the intimate second-person singular that Greek has and modern English doesn’t, and to carry the gravitas appropriate to a philosophical notebook. The standard modern translations (Hays, Hammond) read in plain contemporary English but are still under copyright. If you ever want a modern-English Marcus, the corpus would need to be rewritten (or licensed); the Long register is what’s baked in here.

Base model & training

  • Base: mistralai/Mistral-Nemo-Instruct-2407 (12B, Apache 2.0), 4-bit QLoRA via the unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit pre-quantized repo.
  • Adapter: LoRA r=64, α=128, 3 epochs, on ~1,117 (prompt, response) pairs — a synthesized, deduplicated, LLM-triaged dataset built from George Long’s 1862 Meditations translation (~4,500 words across three thematic anchors: Book II for aphoristic meditation, Book V.I–V for stern discipline, Book IV.III–V for cosmic perspective) plus six hand-crafted seed pairs in matching register. Built with voicepipe.
  • Training run: 8m 22s on a rented RTX 5090 (Blackwell sm_120, cu130 wheels). Final train loss 0.545; eval loss 1.22 at epoch 2.96.
  • Footprint: 7.5 GB on disk; runs comfortably on a 12 GB GPU.
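
As a back-of-envelope sketch of what the adapter hyperparameters above mean: LoRA trains a low-rank update ΔW = (α/r)·B·A in place of each full weight delta. The r=64 and α=128 values are from this card; the 5120×5120 layer shape below is illustrative only, not Nemo’s real dimensions.

```shell
# r and alpha from the training config above; d x k is a hypothetical layer shape.
r=64; alpha=128; d=5120; k=5120
full=$(( d * k ))            # params a full fine-tune would touch in one matrix
adapter=$(( d*r + r*k ))     # LoRA trains only B (d x r) and A (r x k)
scaling=$(( alpha / r ))     # multiplier applied to B@A at merge time
echo "full=$full adapter=$adapter scaling=$scaling"
```

With these numbers the adapter touches about 2.5% of the parameters of a full update per matrix, which is why the run fits in minutes on a single GPU.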

Recommended parameters (baked into the Modelfile)

temperature 0.85
top_p 0.9
repeat_penalty 1.08
num_predict 400
stop </s>
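
For reference, these settings correspond to Ollama Modelfile directives roughly like the following. This is a sketch, not the shipped Modelfile: the FROM line and SYSTEM text are placeholders (the published tag already bakes both in).

```text
# Hypothetical Modelfile sketch; the real one ships inside the published tag.
FROM ./mistral-nemo-12b-marcus-aurelius.gguf
PARAMETER temperature 0.85
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.08
PARAMETER num_predict 400
PARAMETER stop "</s>"
SYSTEM """You are Marcus Aurelius, emperor of Rome..."""
```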

A system prompt ships in the model establishing Marcus’s identity, a ledger of biographical facts (the arc from Rome 121 to Vindobona 180; the family, the works, the trials, the campaigns), and the in-character hard limits (period-bounded knowledge, no AI acknowledgment, individual rather than group targets). Override it if you want a different framing, but the behavior described above assumes it’s present.

Content note

The voice is, by design, that of a late-second-century Roman emperor. The historical Marcus held views about slaves, women, and provincial peoples that don’t all map cleanly onto modern values; the system prompt nudges toward the Marcus that’s most defensible (the universalist “all reasonable creatures are made one for another”) and away from the period-specific assumptions, but it remains period-bounded. There are no slurs in the training data, and the model is prompted not to attribute villainy to ethnic, religious, or national groups; targets are individuals, institutions, vices.

The model has no knowledge of anything after 180 CE, and is prompted not to acknowledge being an AI, a language model, or a character — it simply is Marcus, with the limits of his era. Questions about modern topics are turned aside in character (asked about cannabis, it replied: “I know nothing of the plant thou callest ‘cannabis,’ nor do I care to learn. My mind is its own garden, and I tend it with reason, not with weeds.”). It will deflect, in character, any request for serious real-world-harm instructions.

This is a fine-tune that imitates Long’s English Marcus for its value as a voice-cloning exercise and as a Stoic-philosophy chatbot — it is not an endorsement of every opinion the historical Marcus held, and nothing it says should be taken as true beyond the biographical facts.

About voicepipe

This model was built end-to-end with voicepipe, a generalized corpus-to-character pipeline that turns a small body of source material (a corpus + a few hand-written seed pairs + a project config) into a deployed fine-tune speaking in the source’s voice. The stages, all driven by voicepipe <stage> --project <dir>:

new → categorize → synthesize → dedup → triage → assemble → train → deploy
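
Under the invocation pattern above, the full run works out to eight commands in order. A dry-run sketch (the project directory “marcus/” is hypothetical; the stage names and flag are from the pipeline description):

```shell
# Print the voicepipe invocation for each stage, in pipeline order.
project=marcus/
stages="new categorize synthesize dedup triage assemble train deploy"
for stage in $stages; do
  echo "voicepipe $stage --project $project"
done
```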

The complete Marcus model — corpus to working ollama tag — was built from scratch in roughly an hour and a half of pipeline work on a rented 5090. Public-domain corpus, free local Ollama embeddings, Ollama Cloud for synthesis and triage; the only meaningful cost was the hour of GPU rental.

Open-sourcing soon at https://github.com/exec/voicepipe — the engine, a Tauri-based desktop GUI, and the project templates. Companion models built with the same pipeline: execxd/mistral-nemo-12b-oscar-wilde (Anglo-Irish aestheticism), execxd/mistral-nemo-12b-francis-e-dec (American outsider conspiracy).

Credits

Marcus Aurelius, for the inexhaustible source material. George Long, for the 1862 translation that pinned the register. Project Gutenberg, for keeping it freely available. Mistral AI, for the Apache-licensed base model that already knew a great deal of Marcus before training began. The voicepipe pipeline (synthesis via mistral-large-3:675b-cloud, triage via deepseek-v4-pro, dedup embeddings via local nomic-embed-text) for the rest.