6 models

Name                Size    Context window  Input  Updated
merlinite:latest    4.4GB   32K             Text   1 year ago
merlinite:Q2_K      2.7GB   32K             Text   1 year ago
merlinite:Q3_K_M    3.5GB   32K             Text   1 year ago
merlinite:Q4_K_M    4.4GB   32K             Text   1 year ago
merlinite:Q5_K_M    5.1GB   32K             Text   1 year ago
merlinite:Q8_0      7.7GB   32K             Text   1 year ago
Quantized versions of Merlinite-7B, a Mistral-7B derivative trained by IBM Research with the LAB (Large-scale Alignment for chatBots) methodology, using Mixtral-8x7B-Instruct as the teacher model.
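Any of the tags above can be pulled with the Ollama CLI (for example, `ollama pull merlinite:Q4_K_M`) and then queried through the local Ollama HTTP API. The Python sketch below sends one non-streaming prompt to the /api/generate endpoint; the chosen tag, the prompt text, and the default server address http://localhost:11434 are assumptions for illustration, not part of the listing above.

# Minimal sketch: query a locally pulled merlinite tag through Ollama's
# /api/generate endpoint. Assumes the Ollama server is running on the
# default port (11434) and that the tag has already been pulled, e.g.
#   ollama pull merlinite:Q4_K_M
import json
import urllib.request

MODEL = "merlinite:Q4_K_M"   # any tag from the table above
PROMPT = "Summarize the LAB training methodology in two sentences."  # placeholder prompt

payload = json.dumps({
    "model": MODEL,
    "prompt": PROMPT,
    "stream": False,          # return a single JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read().decode("utf-8"))

print(body["response"])       # the model's completion text

For a quick interactive check, `ollama run merlinite:Q4_K_M` from the command line does the same without any code. As a general rule, the lower-bit tags (Q2_K, Q3_K_M) trade some output quality for a smaller memory footprint, while Q5_K_M and Q8_0 stay closer to the full-precision model at a higher memory cost.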