artcx001/Ministral-8B-Instruct-2410:Q2_K
263 Downloads · Updated 11 months ago

Working Ministral models
393abc5870b9 · 3.2GB

model      arch llama · parameters 8.02B · quantization Q2_K · 3.2GB
template   [INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST] · 67B
license    Mistral AI Research License ("If You want to use a Mistral Model, a Derivative or an Output for any pu…") · 11kB
params     { "stop": [ "[INST]", "[/INST]" ] } · 30B
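The template layer above is a Go template: when a system message is present it is prepended to the user prompt inside a single [INST] … [/INST] block. A minimal Python sketch of that rendering (the render_prompt helper is illustrative only, not part of Ollama):

```python
def render_prompt(prompt: str, system: str | None = None) -> str:
    """Mirror the template layer:
    [INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]"""
    if system:
        return f"[INST] {system} {prompt} [/INST]"
    return f"[INST] {prompt} [/INST]"

# Prints: [INST] You are a concise assistant. What is Q2_K quantization? [/INST]
print(render_prompt("What is Q2_K quantization?", system="You are a concise assistant."))
```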
Readme
No readme
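Since the readme is empty, here is a minimal usage sketch, assuming a local Ollama server on its default port (11434) and that this tag has already been pulled. It queries the model through Ollama's /api/generate endpoint; the stop strings simply mirror the params layer already baked into the model.

```python
import json
import urllib.request

# Assumes `ollama pull artcx001/Ministral-8B-Instruct-2410:Q2_K` has already been run
# and the Ollama server is listening on localhost:11434.
payload = {
    "model": "artcx001/Ministral-8B-Instruct-2410:Q2_K",
    "prompt": "Summarize the Mistral AI Research License in one sentence.",
    "stream": False,
    # Redundant with the model's params layer, shown here for clarity.
    "options": {"stop": ["[INST]", "[/INST]"]},
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```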