Quantized versions of DeepSeek Coder v1.5, plus Q8_0_L quantizations of the v2 Lite models from bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF and bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF.
250 Pulls · Updated 4 months ago
b400f9810a46 · 4.2GB
model
arch llama · parameters 6.91B · quantization Q4_K_M · 4.2GB
template
{{ .Prompt }}
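The template above is a raw pass-through: the prompt is sent to the model unmodified, with no chat wrapping, which is the usual setup for a base (non-instruct) completion model. As a rough illustration, an equivalent Ollama Modelfile might look like the sketch below; the FROM path is an assumption (a locally downloaded GGUF from one of the bartowski repos), not the actual blob behind this tag.

# Hypothetical Modelfile sketch; the GGUF filename is assumed, not taken from this page.
FROM ./DeepSeek-Coder-V2-Lite-Base-Q8_0_L.gguf

# Raw completion template, matching the template shown above:
# the user prompt is passed through without any chat formatting.
TEMPLATE """{{ .Prompt }}"""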