Quantized versions of DeepSeek Coder v1.5, plus the Q8_0_L quantization of the v2 model, from bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF and bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF.
255 Pulls · Updated 6 months ago
bafae5db2753 · 17GB
model: architecture deepseek2 · parameters 15.7B · quantization Q8_0 · 17GB
template
{{ .Prompt }}
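
The `{{ .Prompt }}` template passes the user's input through verbatim, with no chat or instruction markup, which is the usual choice for a base (non-instruct) model. A minimal Modelfile using this template might look like the sketch below; the GGUF filename is a hypothetical local path, not one confirmed by this page:

```
# Hypothetical local path to the downloaded GGUF file
FROM ./DeepSeek-Coder-V2-Lite-Base-Q8_0.gguf

# Raw pass-through template: the prompt is sent to the model unmodified
TEMPLATE """{{ .Prompt }}"""
```

Such a file can be built into a local model with `ollama create <name> -f Modelfile` and then run with `ollama run <name>`.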