LoTUs5494/mistral-small-3.1:Q4_K_L
1,115 Downloads · Updated 8 months ago
Mistral-Small-3.1-24B-Instruct-2503 is a quantized GGUF variant of Mistral Small 3.1 24B Instruct (2503), optimized to run smoothly on GPUs with as little as 16 GB of VRAM.
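A rough back-of-envelope calculation shows why a Q4-quantized 24B model fits on a 16 GB GPU. The ~4.5 bits-per-weight figure below is an approximation for Q4_K-family quants, not an exact value for this file, and KV cache and activations add overhead on top of the weights:

```python
# Back-of-envelope weight memory for a 24B-parameter model at ~4.5 bits/weight
# (approximate effective rate for Q4_K-family GGUF quants; an assumption here).
params = 24e9
bits_per_weight = 4.5
weight_bytes = params * bits_per_weight / 8
weight_gb = weight_bytes / 1e9
print(f"~{weight_gb:.1f} GB for weights")  # leaves headroom on a 16 GB GPU
```

The remaining ~2.5 GB must cover the KV cache and runtime overhead, so long contexts may still require offloading some layers to CPU.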
Tags: tools · 24b
mistral-small-3.1:Q4_K_L / params · 36c4a22018d6 · 55B
{
  "stop": [
    "<s>",
    "[INST]"
  ],
  "temperature": 0.15
}
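The same parameters could be declared in an Ollama Modelfile when building a customized variant of this model. This is a minimal sketch assuming Ollama's standard Modelfile syntax; the FROM target simply references the model name shown above:

```
FROM LoTUs5494/mistral-small-3.1:Q4_K_L
PARAMETER stop "<s>"
PARAMETER stop "[INST]"
PARAMETER temperature 0.15
```

The low temperature (0.15) biases the model toward deterministic output, and the stop strings halt generation at the sentence-start and instruction-open tokens of Mistral's chat template.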