maryasov/mistral-nemo-cline:12b-instruct-2407-q2_K
392 Downloads · Updated 1 year ago
A state-of-the-art 12B model with a 128k context length, built by Mistral AI in collaboration with NVIDIA. Instructed to work with Cline (previously Claude Dev).
Capabilities: tools
0cb6523c3501 · 4.8GB
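As a rough illustration of how this tag can be used, here is a minimal Python sketch that sends a chat request to a local Ollama server. It assumes Ollama is running on the default port 11434 and that the model has already been pulled (for example with ollama pull maryasov/mistral-nemo-cline:12b-instruct-2407-q2_K); the prompt text and the requests-based approach are illustrative only.

# Minimal sketch: query this tag through a local Ollama server (assumed on the
# default port 11434) via the /api/chat endpoint.
import requests

MODEL = "maryasov/mistral-nemo-cline:12b-instruct-2407-q2_K"  # the tag shown on this page

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": MODEL,
        "messages": [
            {"role": "user", "content": "Summarize what a Modelfile is in one sentence."}
        ],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=600,
)
response.raise_for_status()
print(response.json()["message"]["content"])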
model
arch llama · parameters 12.2B · quantization Q2_K · 4.8GB
license · 11kB
Apache License, Version 2.0, January 2004 (http://www.apache.org/licenses/)
params · 144B
{ "num_ctx": 32768, "repeat_last_n": 64, "repeat_penalty": 1.1, "stop": [ "< (preview truncated)
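The params blob above records the defaults baked into this tag (a 32k context window, repeat_last_n 64, repeat_penalty 1.1, and a truncated list of stop sequences). As a sketch, assuming the same local Ollama server as in the example above, these defaults can be overridden per request through the options field of the generate API; the override values below are arbitrary examples, not recommendations.

# Minimal sketch: override the tag's default sampling parameters for one request
# using the /api/generate endpoint's "options" field.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "maryasov/mistral-nemo-cline:12b-instruct-2407-q2_K",
        "prompt": "Explain the repeat_penalty option in one sentence.",
        "stream": False,
        "options": {
            "num_ctx": 16384,       # use a smaller context window than the 32768 default
            "repeat_penalty": 1.2,  # penalize repetition slightly more than the default 1.1
        },
    },
    timeout=600,
)
response.raise_for_status()
print(response.json()["response"])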
template · 1.7kB
{{- /* Initial system message with core instructions */ -}} {{- if .Messages }} {{- if or .System .T (preview truncated)
Readme
No readme