A state-of-the-art 12B model with 128k context length, built by Mistral AI in collaboration with NVIDIA.
tools · 12b
752.6K Pulls Updated 4 months ago
2c2b8e074f09 · 7.1GB
model
arch llama · parameters 12.2B · quantization Q4_K_S · 7.1GB
params
{
  "stop": [
    "[INST]",
    "[/INST]"
  ]
}
30B
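The stop sequences above halt generation at Mistral's instruction delimiters. As a minimal sketch of how they could be passed per request through Ollama's `/api/generate` endpoint (only the payload construction is shown; `mistral-nemo` is this model's tag, and actually sending the request assumes a local Ollama server at its default address):

```python
import json

def build_generate_payload(prompt: str) -> dict:
    # Build a request body for Ollama's /api/generate endpoint.
    # The "options" field carries the same stop sequences as the
    # model's params file, reaffirming them for this request.
    return {
        "model": "mistral-nemo",
        "prompt": prompt,
        "stream": False,
        "options": {
            "stop": ["[INST]", "[/INST]"],
        },
    }

payload = build_generate_payload("Why is the sky blue?")
body = json.dumps(payload)
```

POSTing `body` to `http://localhost:11434/api/generate` (Ollama's default listen address) would then return a completion that stops before emitting either delimiter.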
template
{{- range $i, $_ := .Messages }}
{{- if eq .Role "user" }}
{{- if and $.Tools (le (len (slice $.Mess
688B
license
Apache License
Version 2.0, January 2004
11kB
Readme
Mistral NeMo is a 12B model built in collaboration with NVIDIA. It offers a large context window of up to 128k tokens, and its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. Because it relies on a standard architecture, Mistral NeMo is easy to use and is a drop-in replacement in any system using Mistral 7B.
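The stop tokens and template above reflect Mistral's `[INST]` instruction format. A minimal sketch of wrapping a single-turn user message in that format, assuming no tools or system prompt (the helper name is illustrative, not part of any library):

```python
def format_mistral_prompt(user_message: str) -> str:
    # Mistral-family models expect user turns wrapped in [INST] ... [/INST];
    # the model's reply follows the closing tag, which is why both tags
    # appear as stop sequences in the params block above.
    return f"[INST] {user_message} [/INST]"

prompt = format_mistral_prompt("Summarize the Apache 2.0 license in one sentence.")
```

In practice Ollama applies the full Go template shown above automatically when you use the chat API, so manual formatting like this is only needed when driving the raw completion endpoint yourself.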