
Built on Mistral Small 3.2 (2506) with added reasoning capabilities: SFT on Magistral Medium traces followed by RL on top. A small, efficient 24B-parameter reasoning model (quantized as UD-Q5_K_XL).

vision · tools · thinking · 24b


87012da3cbe3 · 18GB
llama · 23.6B · Q5_K_M
clip · 439M · F16
Template (truncated): {{- range $index, $_ := .Messages }} {{- $last := eq (len (slice $.Messages $index)) 1}} {{- if eq . …
System (truncated): You are Magistral Small. When you're not sure about some information or when the user's request requ…
Params (truncated): { "min_p": 0, "repeat_penalty": 1, "stop": [ "</s>" ], "temperature": 0.…

Readme

Feature    Value
vision     true (>= 0.11.11)
thinking   +/-?
tools      true
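
As a sketch of exercising the vision and thinking features through the ollama Python client (assumes Ollama >= 0.11.11 as noted above and a client release that supports the `think` parameter; the model tag and image path are placeholders):

```python
# Minimal sketch of using the vision and thinking features together through
# the ollama Python client. Assumes Ollama >= 0.11.11 (per the table above)
# and a client release that supports the `think` parameter; the model tag
# and image path are placeholders.
import ollama

MODEL = "<this model's tag>"  # hypothetical placeholder

response = ollama.chat(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": "What is shown in this image?",
        "images": ["example.jpg"],  # placeholder path to a local image
    }],
    think=True,  # ask for the reasoning trace separately from the answer
)

print("thinking:", response.message.thinking)
print("answer:", response.message.content)
```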
Device           Speed          Version
RTX 3090 24 GB   ~44 token/s    UD-Q5_K_XL, 0.11.11
M1 Max 32 GB     ~15 token/s    UD-Q5_K_XL, 0.11.11
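
For reference, throughput figures like these can be derived from the eval_count and eval_duration fields that Ollama returns with a response; a rough sketch (model tag is a placeholder):

```python
# Rough sketch of how token/s figures like those above can be computed from
# the eval_count (tokens generated) and eval_duration (nanoseconds) fields
# in an Ollama response. The model tag is a placeholder.
import ollama

MODEL = "<this model's tag>"  # hypothetical placeholder

resp = ollama.generate(model=MODEL, prompt="Explain quantization in one paragraph.")
tokens_per_s = resp.eval_count / (resp.eval_duration / 1e9)
print(f"~{tokens_per_s:.1f} token/s")
```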