novaforgeai/novaforge-mistral:7b-q4km

31 Downloads · Updated 13 hours ago
Quantized Mistral 7B Instruct models optimized for fast, CPU-only local inference with Ollama. Multiple variants balancing speed, quality, and memory efficiency.
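Since the variants are meant to be served through Ollama, a request to a locally running server can override any of the published defaults per call. The sketch below builds the JSON body for Ollama's `/api/generate` endpoint; the `build_generate_request` helper is hypothetical, and it assumes a local server at the default `localhost:11434`.

```python
import json

# Hypothetical helper: builds the JSON body for Ollama's /api/generate
# endpoint. The model tag is the one published on this page; any field
# in "options" overrides the variant's baked-in default at request time.
def build_generate_request(prompt, num_thread=8, num_ctx=512):
    return {
        "model": "novaforgeai/novaforge-mistral:7b-q4km",
        "prompt": prompt,
        "stream": False,
        "options": {
            "num_thread": num_thread,
            "num_ctx": num_ctx,
            "num_gpu": 0,          # CPU-only, matching the published params
            "temperature": 0.7,
        },
    }

body = build_generate_request("Why is the sky blue?")
print(json.dumps(body, indent=2))

# To actually send it (requires a running Ollama server):
#   curl http://localhost:11434/api/generate -d '<the JSON above>'
```

Options passed this way apply only to that request; the defaults shipped with the model tag remain unchanged.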
novaforge-mistral:7b-q4km

params · d806a5ef514f · 214B
{
"num_batch": 512,
"num_ctx": 512,
"num_gpu": 0,
"num_predict": 200,
"num_thread": 8,
"repeat_penalty": 1.1,
"stop": [
"<|im_start|>",
"<|im_end|>",
"</s>"
],
"temperature": 0.7,
"top_k": 40,
"top_p": 0.95
}
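The params blob above maps one-to-one onto `PARAMETER` directives in an Ollama Modelfile. A minimal sketch of how such a variant could be declared follows; the `FROM` source is an assumption, since the actual base GGUF file is not listed on this page.

```
# Hypothetical Modelfile reproducing the params shown above.
# The FROM line is an assumption -- the base GGUF is not published here.
FROM ./mistral-7b-instruct-q4_k_m.gguf

PARAMETER num_batch 512
PARAMETER num_ctx 512
PARAMETER num_gpu 0
PARAMETER num_predict 200
PARAMETER num_thread 8
PARAMETER repeat_penalty 1.1
PARAMETER temperature 0.7
PARAMETER top_k 40
PARAMETER top_p 0.95
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
PARAMETER stop "</s>"
```

Note that `num_gpu 0` forces all layers onto the CPU, and the small `num_ctx` of 512 keeps the KV cache footprint low, which fits the page's stated goal of memory-efficient CPU-only inference.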