Allen Institute's latest tiny LLM in the OLMo family

Models

olmo2:1b

1.6GB · 4K context window · Text · 2 months ago

Readme

This 1B-parameter instruct model is light enough for near-instant local responses while remaining competent at math and reasoning, thanks to RLVR fine-tuning.

OLMo 2 1B Instruct (April 2025) is compact, open, and tuned for reasoning-heavy prompts. It is post-trained on Tülu 3, followed by DPO and RLVR-MATH, which makes it stronger at problem-solving than most 1B models (a quick usage sketch follows the list below).
Ideal for:
- Lightweight chat + reasoning on CPU/low‑VRAM setups
- Math and structured Q&A (GSM8K, MATH)
- General instruct tasks where latency matters
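
To make these use cases concrete, here is a minimal sketch of a local chat call against this model using the Ollama Python client (`pip install ollama`). It assumes the Ollama server is running and that `olmo2:1b` has already been pulled; the math prompt is an arbitrary example, not drawn from the model's evaluation sets.

```python
# Minimal sketch: send a short math prompt to a locally served olmo2:1b
# via the Ollama Python client. Assumes `pip install ollama`, a running
# Ollama server, and that the model has been pulled (`ollama pull olmo2:1b`).
import ollama

response = ollama.chat(
    model="olmo2:1b",
    messages=[
        {
            "role": "user",
            "content": (
                "A train travels 60 km in 45 minutes. "
                "What is its average speed in km/h? Show your reasoning."
            ),
        }
    ],
)

# Recent client versions accept both dict-style and attribute access.
print(response["message"]["content"])
```

Because the model file is only 1.6GB, this round trip stays fast even on CPU-only machines; just keep the prompt and expected output within the 4K context window.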


References

HuggingFace