
q4_K_M, q6_K, and q8_0 quantized versions of mlabonne/NeuralBeagle14-7B.

The model supports a context length of up to 8K tokens, but the included Modelfile is configured for a 4K context window.
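If you want the full 8K context, you can override the default with a custom Modelfile. A minimal sketch follows; the base tag `neuralbeagle14:q4_K_M` and the new model name are assumptions, so substitute the tag you actually pulled:

```
# Hypothetical Modelfile: extend the context window to the model's 8K maximum.
# "neuralbeagle14:q4_K_M" is an assumed tag -- replace with your local tag.
FROM neuralbeagle14:q4_K_M

# num_ctx sets the context window size (default here would otherwise be 4K).
PARAMETER num_ctx 8192
```

Build and run the variant with the standard Ollama CLI workflow, e.g. `ollama create neuralbeagle14-8k -f Modelfile` followed by `ollama run neuralbeagle14-8k`. Note that a larger context window increases memory usage.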