Quantized versions of DeepSeek Coder v1.5, and the Q8_0_L quantization of the v2 model from bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF and bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF.
7B · 209 Pulls · Updated 3 months ago
2757e7fdb043 · 4.0GB
model
arch llama · parameters 6.91B · quantization Q4_0 · 4.0GB
template
{{ .Prompt }}
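The `{{ .Prompt }}` template means the model receives the user's prompt verbatim, with no chat or instruction wrapper, which suits a base (non-instruct) completion model. As a sketch, a minimal Modelfile for importing one of the GGUF files yourself might look like the following (the local filename and parameter values are illustrative assumptions, not taken from this page):

```
# Hypothetical local GGUF file; substitute the file you downloaded
FROM ./DeepSeek-Coder-V2-Lite-Base-Q8_0_L.gguf

# Pass the prompt through unmodified, matching the template above
TEMPLATE """{{ .Prompt }}"""

# Example sampling setting for code completion (assumption, tune as needed)
PARAMETER temperature 0.2
```

It could then be built and run with `ollama create my-deepseek-coder -f Modelfile` followed by `ollama run my-deepseek-coder` (the model name here is a placeholder).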
Readme
No readme