
Mistral Small 3.2 (2506) adds state-of-the-art vision understanding and enhances long-context capabilities up to 128k tokens without compromising text performance. With 24 billion parameters, this model achieves top-tier performance.

Capabilities: vision, tools


ab03e5e9fa00 · 15GB
- llama · 23.6B · Q4_K_M
- clip · 439M · F16
system · You are Mistral Small 3.2, a Large Language Model (LLM) created by Mistral AI, a French startup head…
params · { "min_p": 0, "num_ctx": 40960, "repeat_penalty": 1, "stop": [ "</s>" ], …
template · {{- range $index, $_ := .Messages }} {{- if eq .Role "system" }}[SYSTEM_PROMPT]{{ .Content }}[/SYSTE…
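The parameter defaults shown above can be overridden by deriving a new model; a minimal Modelfile sketch, assuming the model is pulled under the tag `mistral-small3.2`:

```
FROM mistral-small3.2
PARAMETER num_ctx 40960
PARAMETER min_p 0
PARAMETER repeat_penalty 1
PARAMETER stop </s>
```

Building it with `ollama create my-small3.2 -f Modelfile` would produce a variant that inherits the weights and template while applying these sampling settings.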

Readme

Mistral-Small-3.2-24B-Instruct-2506

Mistral-Small-3.2-24B-Instruct-2506 is a minor update of Mistral-Small-3.1-24B-Instruct-2503.

Small-3.2 improves in the following categories:
- Instruction following: Small-3.2 is better at following precise instructions
- Repetition errors: Small-3.2 produces fewer infinite generations or repetitive answers
- Function calling: Small-3.2's function-calling template is more robust (see here and examples)

In all other categories Small-3.2 should match or slightly improve compared to Mistral-Small-3.1-24B-Instruct-2503.
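To exercise the more robust function-calling template, a request needs to declare tool schemas alongside the messages. A minimal sketch of payload construction only (no server call), assuming Ollama's `/api/chat` endpoint and its OpenAI-style tool schema; the `get_weather` tool is hypothetical:

```python
import json

def build_chat_request(prompt: str) -> dict:
    """Build a chat payload that offers the model one callable tool."""
    return {
        "model": "mistral-small3.2",
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "stream": False,
    }

req = build_chat_request("What's the weather in Paris?")
# The payload is plain JSON, so it can be inspected or POSTed directly.
print(json.dumps(req, indent=2))
```

If the model decides to call the tool, the response's message would carry a `tool_calls` entry rather than plain content, which the client then executes and feeds back as a `tool`-role message.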

Key Features