974 pulls · Updated 9 months ago

Mistral Small 3.1 24B, bartowski's Q6_K quantization, with tool support

tools


c1a3db3b81c6 · 19GB · llama · 23.6B · Q6_K
Template (truncated): {{- range $index, $_ := .Messages }} {{- if eq .Role "system" }}[SYSTEM_PROMPT]{{ .Content }}[/SYSTE
Params: { "num_ctx": 4096, "stop": [ "</s>", "[INST]", "[/INST]" ] }

Readme

The other Mistral Small 3.1 24B models on here didn't offer both Q6 quantization and tool support. This one runs fully in VRAM on a single RTX 3090 with nice results.
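Since tool support is the main reason for this build, here is a rough sketch of exercising it through Ollama's /api/chat endpoint. The model tag and the get_weather tool are placeholders for illustration, not part of this model card:

```python
# Sketch: exercise the model's tool support via Ollama's /api/chat endpoint.
# The model tag and the get_weather tool are placeholders.
import json
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "your-namespace/mistral-small-3.1-24b-q6:latest",  # placeholder tag
        "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
        "tools": tools,
        "stream": False,
    },
    timeout=300,
)
msg = resp.json()["message"]
# If the model chose to call a tool, the calls show up under message.tool_calls;
# otherwise message.content holds a normal text reply.
for call in msg.get("tool_calls", []):
    print(call["function"]["name"], json.dumps(call["function"]["arguments"]))
```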