novaforgeai/novaforge-mistral
31 Downloads • Updated 12 hours ago
Quantized Mistral 7B Instruct models optimized for fast, CPU-only local inference with Ollama. Multiple variants balancing speed, quality, and memory efficiency.
3 models
novaforge-mistral:7b-q2k • 92a16477a1f2 • 2.7GB • 32K context window • Text input • 12 hours ago
novaforge-mistral:7b-q3km • 3b537ab1f812 • 3.5GB • 32K context window • Text input • 12 hours ago
novaforge-mistral:7b-q4km • 930ceb14a6c5 • 4.4GB • 32K context window • Text input • 13 hours ago
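Once a variant has been pulled (for example, `ollama pull novaforge-mistral:7b-q4km`), it can be queried locally. Below is a minimal sketch using the `ollama` Python client, assuming the Ollama server is running with its default local settings; the prompt text and the choice of the q4km variant are illustrative, not part of this listing:

```python
# Minimal local-inference sketch using the ollama Python client.
# Assumes the Ollama server is running and the model has already been pulled:
#   ollama pull novaforge-mistral:7b-q4km
import ollama

response = ollama.chat(
    model="novaforge-mistral:7b-q4km",  # swap for :7b-q2k or :7b-q3km to use less memory
    messages=[{"role": "user", "content": "Explain what 4-bit quantization does to a 7B model."}],
)
print(response["message"]["content"])
```

The smaller q2k and q3km variants lower memory use and speed up CPU inference at some cost in output quality, which is the speed/quality/memory trade-off described above.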