---
base_model: nvidia/Mistral-NeMo-Minitron-8B-Base
inference: false
library_name: gguf
license: other
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
license_name: nvidia-open-model-license
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
---
# Mistral-NeMo-Minitron-8B-Base-IMat-GGUF
Llama.cpp imatrix quantization of nvidia/Mistral-NeMo-Minitron-8B-Base
- Original Model: nvidia/Mistral-NeMo-Minitron-8B-Base
- Original dtype: BF16 (bfloat16)
- Quantized by: llama.cpp b3613
- IMatrix dataset: here
## Files

### IMatrix
- Status: ✅ Available
- Link: here

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
|---|---|---|---|---|---|
| Mistral-NeMo-Minitron-8B-Base.Q8_0.gguf | Q8_0 | 8.95GB | ✅ Available | ⚪ Static | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q6_K.gguf | Q6_K | 6.91GB | ✅ Available | ⚪ Static | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q4_K.gguf | Q4_K | 5.15GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q3_K.gguf | Q3_K | 4.21GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q2_K.gguf | Q2_K | 3.33GB | ✅ Available | 🟢 IMatrix | 📦 No |
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
|---|---|---|---|---|---|
| Mistral-NeMo-Minitron-8B-Base.BF16.gguf | BF16 | 16.84GB | ✅ Available | ⚪ Static | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.FP16.gguf | F16 | 16.84GB | ✅ Available | ⚪ Static | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q8_0.gguf | Q8_0 | 8.95GB | ✅ Available | ⚪ Static | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q6_K.gguf | Q6_K | 6.91GB | ✅ Available | ⚪ Static | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q5_K.gguf | Q5_K | 6.00GB | ✅ Available | ⚪ Static | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q5_K_S.gguf | Q5_K_S | 5.86GB | ✅ Available | ⚪ Static | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q4_K.gguf | Q4_K | 5.15GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q4_K_S.gguf | Q4_K_S | 4.91GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ4_NL.gguf | IQ4_NL | 4.90GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ4_XS.gguf | IQ4_XS | 4.66GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q3_K.gguf | Q3_K | 4.21GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q3_K_L.gguf | Q3_K_L | 4.54GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q3_K_S.gguf | Q3_K_S | 3.83GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ3_M.gguf | IQ3_M | 3.98GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ3_S.gguf | IQ3_S | 3.86GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ3_XS.gguf | IQ3_XS | 3.68GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ3_XXS.gguf | IQ3_XXS | 3.43GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q2_K.gguf | Q2_K | 3.33GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.Q2_K_S.gguf | Q2_K_S | 3.13GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ2_M.gguf | IQ2_M | 3.10GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ2_S.gguf | IQ2_S | 2.90GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ2_XS.gguf | IQ2_XS | 2.73GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ2_XXS.gguf | IQ2_XXS | 2.51GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ1_M.gguf | IQ1_M | 2.27GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Mistral-NeMo-Minitron-8B-Base.IQ1_S.gguf | IQ1_S | 2.12GB | ✅ Available | 🟢 IMatrix | 📦 No |
## Downloading using huggingface-cli

If you do not have `huggingface-cli` installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Mistral-NeMo-Minitron-8B-Base-IMat-GGUF --include "Mistral-NeMo-Minitron-8B-Base.Q8_0.gguf" --local-dir ./
```
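The same `--include` pattern works for any filename in the tables above, e.g. one of the smaller imatrix quants:
```
huggingface-cli download legraphista/Mistral-NeMo-Minitron-8B-Base-IMat-GGUF --include "Mistral-NeMo-Minitron-8B-Base.IQ4_XS.gguf" --local-dir ./
```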
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Mistral-NeMo-Minitron-8B-Base-IMat-GGUF --include "Mistral-NeMo-Minitron-8B-Base.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
## Inference

```
llama.cpp/main -m Mistral-NeMo-Minitron-8B-Base.Q8_0.gguf --color -i -p "prompt here"
```
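Recent llama.cpp releases also ship a `llama-server` binary that serves a GGUF over HTTP. A minimal sketch, assuming your build includes it (the port, prompt, and token count are illustrative, not from this card):
```
# start llama.cpp's built-in HTTP server on port 8080
llama.cpp/llama-server -m Mistral-NeMo-Minitron-8B-Base.Q8_0.gguf --port 8080

# in another shell: request a completion from the native /completion endpoint
curl http://localhost:8080/completion -d '{"prompt": "prompt here", "n_predict": 128}'
```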
## FAQ

### Why is the IMatrix not applied everywhere?
According to this investigation, it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
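For context, an importance matrix is computed by running calibration text through the full-precision model, then supplied at quantization time so that more precision is kept where activations matter most. A sketch using llama.cpp's `llama-imatrix` and `llama-quantize` tools, where `calibration.txt` is a placeholder for the actual dataset linked above:
```
# measure activation importance over a calibration corpus
llama.cpp/llama-imatrix -m Mistral-NeMo-Minitron-8B-Base.BF16.gguf -f calibration.txt -o imatrix.dat

# quantize, letting the importance matrix guide precision allocation
llama.cpp/llama-quantize --imatrix imatrix.dat Mistral-NeMo-Minitron-8B-Base.BF16.gguf Mistral-NeMo-Minitron-8B-Base.Q4_K.gguf Q4_K
```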
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
   - To get `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases and download the latest release for your system
2. Locate your GGUF chunks folder (ex: `Mistral-NeMo-Minitron-8B-Base.Q8_0`)
3. Run `gguf-split --merge Mistral-NeMo-Minitron-8B-Base.Q8_0/Mistral-NeMo-Minitron-8B-Base.Q8_0-00001-of-XXXXX.gguf Mistral-NeMo-Minitron-8B-Base.Q8_0.gguf`
Make sure to point `gguf-split` to the first chunk of the split.

Got a suggestion? Ping me @legraphista!