olmo2:7b-1124-instruct-fp16

1.1M pulls · Updated 4 months ago

OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.

Sizes: 7b · 13b


fa483f2d5cc7 · 15GB

olmo2 · 7.3B parameters · F16
System prompt: You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI.
License: Apache License Version 2.0, January 2004
Template (truncated): {{- range $i, $_ := .Messages }} {{- $last := eq (len (slice $.Messages $i)) 1 -}} <|{{ .Role }}|> {
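The Go template above renders each message into a `<|role|>`-delimited turn before it reaches the model. A sketch of exercising it through Ollama's chat API, assuming a local Ollama server on the default port 11434 (the prompt text is just an illustration):

```shell
# Send a single chat turn; Ollama applies the model's template to
# wrap the message in <|role|> markers before generation.
curl http://localhost:11434/api/chat -d '{
  "model": "olmo2:7b-1124-instruct-fp16",
  "messages": [
    {"role": "user", "content": "Hello, who built you?"}
  ],
  "stream": false
}'
```

With `"stream": false` the server returns one JSON object containing the full reply rather than a stream of partial chunks.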

Readme

Note: this model requires Ollama 0.5.5
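To try the model locally, the usual Ollama CLI workflow applies (a sketch; the tag is this page's model tag, and the example prompt is only an illustration):

```shell
# Download the model weights (about 15GB for this F16 variant)
ollama pull olmo2:7b-1124-instruct-fp16

# One-off generation; omit the quoted prompt for an interactive session
ollama run olmo2:7b-1124-instruct-fp16 "Summarize the OLMo 2 release in one sentence."
```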



References

Blog post