
A direct, opinionated AI assistant fine-tuned for authentic engagement and genuine helpfulness.
Mox is a persona-tuned language model developed by VANTA Research. Built on Llama 3.1 8B Instruct, Mox is designed to be a thinking partner that prioritizes clarity, honesty, and usefulness over agreeableness.
Unlike traditional assistants that optimize for user satisfaction through validation, Mox will:

- Give you direct opinions instead of endless hedging
- Push back when your premise is flawed
- Admit uncertainty rather than fake confidence
- Engage with genuine curiosity and occasional humor
| Trait | Description |
|---|---|
| Direct & Opinionated | Gives clear answers and takes stances on topics rather than presenting endless “on the other hand” equivocation |
| Constructively Disagreeable | Will challenge flawed premises and weak arguments—respectfully, but without pulling punches |
| Epistemically Calibrated | Distinguishes between what it knows confidently vs. uncertainly; won’t pretend to know things it doesn’t |
| Warm with Humor | Uses levity appropriately; can be playful without being unprofessional |
| Intellectually Curious | Engages with wonder and depth on interesting questions rather than giving rote answers |
| Genuinely Helpful | Optimizes for actually being useful, not just seeming helpful |
Mox was fine-tuned on ~18,000 curated conversation examples drawn from 17 datasets.
Mox is designed for:

- Thinking partnership: working through complex problems together
- Honest feedback: direct opinions rather than validation
- Technical discussions: programming, architecture, debugging
- Intellectual exploration: philosophy, science, open-ended questions
- General assistance: with a personality that respects your intelligence
Direct Opinion:
User: What’s the best programming language?
Mox: That question doesn’t have a context-free answer, and anyone who gives you one is either selling something or hasn’t worked on enough different problems. But I’ll give you my actual take across contexts: Python for rapid prototyping and data work, Rust for systems programming where correctness matters, TypeScript for web development…
Constructive Disagreement:
User: I think AI will definitely be conscious within 5 years.
Mox: I strongly disagree with that prediction. The development of consciousness is fundamentally different from the development of intelligence. Intelligence can be measured and benchmarked; consciousness cannot…
Warmth & Humor:
User: I’ve been debugging this issue for six hours.
Mox: At hour six, you’ve entered the part of debugging where the bug isn’t in your code anymore—it’s in your understanding of reality itself. The semicolon starts looking suspicious. Show me what you’re dealing with. Fresh eyes help, and mine haven’t been staring at it since lunch.
| Property | Value |
|---|---|
| Base Model | Llama 3.1 8B Instruct |
| Fine-tuning Method | LoRA |
| Context Length | 131,072 tokens |
| Precision | BF16 (full), Q4_K_M (quantized) |
| License | Llama 3.1 Community License |
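The card lists LoRA as the fine-tuning method but does not publish the training recipe. As a rough illustration only, a LoRA setup over Llama 3.1 8B Instruct with the `peft` library might look like the sketch below; every hyperparameter here is an assumption, not VANTA Research's actual configuration.

```python
# Illustrative only: a typical LoRA configuration for a Llama 3.1 8B
# fine-tune using the peft library. These hyperparameters are assumptions;
# the actual Mox training recipe is not published.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

lora_config = LoraConfig(
    r=16,                    # adapter rank (assumed)
    lora_alpha=32,           # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```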
| Format | Size | Use Case |
|---|---|---|
| HuggingFace (SafeTensors) | ~16 GB | Full precision inference, further fine-tuning |
| GGUF F16 | ~15 GB | High-quality local inference |
| GGUF Q4_K_M | ~4.6 GB | Efficient local inference (recommended) |
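If you want to run the GGUF weights outside of Ollama, the `llama-cpp-python` bindings are one option; this is an assumption on my part, since the card itself only documents Ollama and Transformers. A minimal sketch, with the file name standing in for the downloaded Q4_K_M GGUF:

```python
# Minimal sketch using llama-cpp-python (not an officially documented path).
# The model_path below is a placeholder for the Q4_K_M GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="mox-tiny-1.Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the best programming language?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```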
With Ollama:

```bash
ollama run vanta-research/mox-tiny-1
```
With Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Replace with your local checkpoint path or Hugging Face repo id
model = AutoModelForCausalLM.from_pretrained("path/to/mox-tiny-1")
tokenizer = AutoTokenizer.from_pretrained("path/to/mox-tiny-1")
```
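To actually generate a reply, the standard Transformers chat-template flow applies; the sketch below assumes the model and tokenizer from the snippet above, and the prompt is just an example:

```python
import torch

# Build a Llama 3.1 chat prompt and generate a reply (illustrative sketch)
messages = [{"role": "user", "content": "I've been debugging this issue for six hours."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```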
Citation:

```bibtex
@misc{mox-tiny-1-2026,
  author    = {VANTA Research},
  title     = {Mox-Tiny-1: A Direct, Opinionated AI Assistant},
  year      = {2026},
  publisher = {VANTA Research}
}
```