olmo2:13b-1124-instruct-fp16

2.6M pulls · 8 months ago

OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.

Tags: 7b, 13b


c5cd17f69ca0 · 27GB

olmo2 · 13.7B · F16
System prompt:

You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI.

Template (truncated):

{{- range $i, $_ := .Messages }} {{- $last := eq (len (slice $.Messages $i)) 1 -}} <|{{ .Role }}|> {

License: Apache License, Version 2.0 (January 2004), http://www.apache.org/licenses/
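The visible fragment of the Go template wraps each message in an `<|role|>` tag. A rough Python approximation of what the (truncated) template appears to do with the message list — the exact stop tokens and end-of-turn handling are cut off above, so this is a sketch, not the real template:

```python
def render(messages):
    """Approximate the visible part of the chat template:
    each message is introduced by an <|role|> tag, with the
    content on the following line (assumption; the rest of the
    Go template is truncated on this page)."""
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}")
    return "\n".join(parts)

prompt = render([
    {"role": "system", "content": "You are OLMo 2."},
    {"role": "user", "content": "Hello!"},
])
```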

Readme

Note: this model requires Ollama 0.5.5
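After pulling the model with `ollama pull olmo2:13b-1124-instruct-fp16`, it can be queried from the command line with `ollama run`, or programmatically through Ollama's REST API. A minimal sketch of building a request body for the `/api/chat` endpoint (assuming a local Ollama server on the default port 11434; only the payload is constructed here, no request is sent):

```python
import json

# Model tag taken from this page.
MODEL = "olmo2:13b-1124-instruct-fp16"

def build_chat_request(prompt: str) -> dict:
    """Build the JSON body for a POST to http://localhost:11434/api/chat."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of a stream
    }

body = json.dumps(build_chat_request("Why is the sky blue?"))
```

Sending `body` with any HTTP client (e.g. `urllib.request` or `curl`) returns a JSON response whose `message.content` field holds the model's reply.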



References

Blog post