This is a GGUF version of BramVanroy/GEITje-7B-ultra, a powerful Dutch chatbot. It is ultimately a Mistral-based model, further pretrained on Dutch and subsequently treated with supervised finetuning and DPO alignment. For more information on the model, data, licensing, and usage, see the main model's README.
If you use GEITje 7B Ultra (SFT) or any of its derivatives or quantizations, please cite the following paper:
@misc{vanroy2024geitje7bultraconversational,
  title={GEITje 7B Ultra: A Conversational Model for Dutch},
  author={Bram Vanroy},
  year={2024},
  eprint={2412.04092},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2412.04092},
}
Available quantization types and expected performance differences compared to the base f16, higher perplexity = worse (from llama.cpp):
Q3_K_M : 3.07G, +0.2496 ppl @ LLaMA-v1-7B
Q4_K_M : 3.80G, +0.0532 ppl @ LLaMA-v1-7B
Q5_K_M : 4.45G, +0.0122 ppl @ LLaMA-v1-7B
Q6_K : 5.15G, +0.0008 ppl @ LLaMA-v1-7B
Q8_0 : 6.70G, +0.0004 ppl @ LLaMA-v1-7B
F16 : 13.00G @ 7B
Also available on ollama.
Quants were made with release b2777 of llama.cpp.
You can use this model in LM Studio, an easy-to-use interface to locally run optimized models. Simply search for BramVanroy/GEITje-7B-ultra-GGUF and download the available file.