```bash
ollama run OPEN_AI_REASEARCH/Open-Mythos
ollama launch claude --model OPEN_AI_REASEARCH/Open-Mythos
ollama launch openclaw --model OPEN_AI_REASEARCH/Open-Mythos
ollama launch hermes --model OPEN_AI_REASEARCH/Open-Mythos
ollama launch codex --model OPEN_AI_REASEARCH/Open-Mythos
ollama launch opencode --model OPEN_AI_REASEARCH/Open-Mythos
```
A Powerful Distilled Open-Source Model for Reasoning & Security Research
HOW TO RUN
```bash
# macOS / Linux
curl -fsSL https://ollama.com/install.sh | sh

# Windows — download the installer from https://ollama.com/download

ollama pull OPEN_AI_REASEARCH/Open-Mythos
ollama run OPEN_AI_REASEARCH/Open-Mythos
```
You now have a working local LLM. Chat with it directly in the terminal.
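Besides the interactive CLI, a running Ollama instance also exposes an HTTP API on `localhost:11434`, which is handy for scripting against the model. A minimal sketch: the helper below only builds the JSON body for the `/api/generate` endpoint; the commented `curl` line shows how it would be sent.

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_generate_request(model: str, prompt: str) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON response, not a stream
    })

if __name__ == "__main__":
    body = build_generate_request("OPEN_AI_REASEARCH/Open-Mythos", "Hello, I am")
    print(body)
    # Send it with, e.g.:
    #   curl http://localhost:11434/api/generate -d "$BODY"
```

The response is a JSON object whose `response` field holds the generated text.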
For fine-tuning, research, or loading in Python with full tensor access, you can convert the local Ollama GGUF model back into PyTorch format.
Download and run the official conversion tool:
```bash
# Linux / macOS
curl -sL https://raw.githubusercontent.com/bluerangers/OPEN-MYTHOS/main/gguf_to_pt.py | python3
```
Note: You must have torch installed and have pulled the model via Ollama first.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base architecture, then swap in the converted Mythos weights
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
weights = torch.load("mythos_converted.pt")
model.load_state_dict(weights)

# Run a quick generation to confirm the weights loaded correctly
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
inputs = tokenizer("Hello, I am", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```
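Once the weights are loaded as a regular PyTorch module you have full tensor access. As one quick sanity check, you can count the total number of parameters (`count_params` is a hypothetical helper, not part of the conversion tool; the small `Linear` layer below is just a stand-in):

```python
import torch

def count_params(model: torch.nn.Module) -> int:
    """Total number of scalar parameters across all tensors in a module."""
    return sum(p.numel() for p in model.parameters())

# Stand-in module for illustration: 4*3 weights + 3 biases = 15 parameters.
# On the converted model, count_params(model) should match the advertised
# parameter count of the checkpoint.
layer = torch.nn.Linear(4, 3)
print(count_params(layer))  # 15
```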
```bash
# Install Ollama + pull model + convert to PyTorch — one line
curl -fsSL https://ollama.com/install.sh | sh && \
ollama pull OPEN_AI_REASEARCH/Open-Mythos && \
curl -sL https://raw.githubusercontent.com/bluerangers/OPEN-MYTHOS/main/gguf_to_pt.py | python3 && \
ollama run OPEN_AI_REASEARCH/Open-Mythos
```
| Component | Required | Notes |
|---|---|---|
| Ollama | Yes | Runs the GGUF model |
| Python 3.8+ | Only for .pt weights | Usually pre-installed on macOS/Linux |
| PyTorch | Only for .pt weights | `pip install torch` |
| Property | Value |
|---|---|
| Name | Open-Mythos |
| Base | Qwen2.5 architecture |
| Parameters | 2 Billion |
| Formats | GGUF (Ollama) |
| Ollama | OPEN_AI_REASEARCH/Open-Mythos |
| License | Apache 2.0 |
“torch not found”

```bash
pip install torch
```

Ollama not starting:

```bash
# Linux
sudo systemctl start ollama

# macOS — launch the Ollama app from Applications
```
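When scripting around these failure modes, it can help to check whether the Ollama server is actually reachable before sending requests. A standard-library-only sketch, assuming the default port 11434:

```python
import urllib.request
import urllib.error

def is_ollama_up(base_url: str = "http://localhost:11434",
                 timeout: float = 2.0) -> bool:
    """Return True if the local Ollama server answers on its root endpoint."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200  # the root endpoint replies when running
    except OSError:  # covers connection refused, timeouts, DNS errors
        return False

print(is_ollama_up())
```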
Efficient. Intelligent. Fully Open.
Released April 2026
OPEN-MYTHOS-2B is a high-performance 2-billion-parameter distilled model derived from our larger Mythos series.
Despite its compact size, this distilled model delivers exceptional reasoning, code understanding, and vulnerability detection capabilities — making frontier-level intelligence accessible on laptops, edge devices, and local servers.
MythOS — our AI-native knowledge platform — is available here.
| Benchmark | Description | Mythos-2B Score | Comparison (Best 7B Model) | Notes |
|---|---|---|---|---|
| Zero-Day Simulation | Multi-stage attack planning | 74.8% | 71% | Strong performance for size |
| CVE Discovery Rate | Real-world CVE identification | 82% | 78% | Scanned OpenBSD & Linux modules |
| SWE-Bench Lite | Real GitHub issue resolution | 48.6% | 45% | Competitive with larger models |
| Memory Safety Bugs | Use-after-free, overflows, etc. | 87.3% | 82% | Excellent for a 2B model |
| Cryptographic Flaw Detection | Weak crypto & implementation issues | 89% | 84% | Very capable in audits |
Full benchmark results → benchmarks/
```bash
pip install transformers
```