SenseNova-SI-2B is a GGUF conversion of sensenova/sensenova-si for llama.cpp / Ollama.
Upstream: https://huggingface.co/sensenova/sensenova-si
The alias matches existing local artifacts; adjust it if needed.
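If the GGUF files are not already on disk, they can be fetched with `huggingface-cli`. A minimal sketch; the repository id `richardyoung/SenseNova-SI-2B-GGUF` is a hypothetical placeholder, so substitute whichever repo actually hosts the files listed in the table below:

```bash
# Hypothetical repo id -- replace with the repository that actually hosts these GGUF files.
huggingface-cli download richardyoung/SenseNova-SI-2B-GGUF \
  SenseNova-SI-2B-Q4_K_M.gguf --local-dir .
```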
Chat template: ChatML · Architecture: qwen2 · Parameters: 2B · Context length: 32768 · License: apache-2.0

| Tag | GGUF | Size | RAM (est.) | Notes |
|---|---|---|---|---|
| IQ4_XS | SenseNova-SI-2B-IQ4_XS.gguf | 0.96 GiB | 2 GiB | |
| Q4_K_M | SenseNova-SI-2B-Q4_K_M.gguf | 1.04 GiB | 3 GiB | Recommended |
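To run a quant directly with llama.cpp (without Ollama), a minimal sketch assuming the Q4_K_M file from the table above is in the current directory and a recent `llama-cli` build is on your PATH:

```bash
# One-shot prompt against the Q4_K_M quant; -n caps the number of generated tokens.
llama-cli -m SenseNova-SI-2B-Q4_K_M.gguf -p "Hello!" -n 128
```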
Run with Ollama:

```bash
ollama run richardyoung/sensenova-si-2b:q4_k_m "Hello!"
ollama run richardyoung/sensenova-si-2b:iq4_xs
ollama run richardyoung/sensenova-si-2b:q4_k_m
```

See the upstream repo for license/terms: https://huggingface.co/sensenova/sensenova-si
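To register a local GGUF under your own Ollama name instead of pulling the alias above, a minimal sketch (the name `sensenova-si-2b-local` is a placeholder):

```bash
# Point a Modelfile at the local GGUF, then create and run it with Ollama.
cat > Modelfile <<'EOF'
FROM ./SenseNova-SI-2B-Q4_K_M.gguf
EOF
ollama create sensenova-si-2b-local -f Modelfile
ollama run sensenova-si-2b-local "Hello!"
```

If the embedded chat template is not picked up, add a ChatML `TEMPLATE` stanza to the Modelfile.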
Converted from the upstream Hugging Face weights with llama.cpp's convert_hf_to_gguf.py and quantized with llama-quantize.
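For reference, a sketch of how such a conversion is typically produced with llama.cpp's tools; paths and intermediate file names are illustrative, not the exact commands used for these files:

```bash
# Convert the upstream HF checkpoint to an f16 GGUF, then quantize to the published types.
python convert_hf_to_gguf.py ./sensenova-si --outfile SenseNova-SI-2B-f16.gguf --outtype f16
llama-quantize SenseNova-SI-2B-f16.gguf SenseNova-SI-2B-Q4_K_M.gguf Q4_K_M
llama-quantize SenseNova-SI-2B-f16.gguf SenseNova-SI-2B-IQ4_XS.gguf IQ4_XS
```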