Quantized version of DeepSeek Coder v1.5, and Q8_0_L quantization of the v2 model from bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF and bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF
255 Pulls · Updated 6 months ago
d9457e7571b6 · 17GB
model
arch deepseek2 · parameters 15.7B · quantization Q8_0 · 17GB
template
{{ if .System }}{{ .System }}
{{ end }}{{ if .Prompt }}User: {{ .Prompt }}
{{ end }}Assistant:{{ .Response }}
138B
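
Below is a minimal, hypothetical Modelfile sketch showing how one of the Q8_0 GGUF files from the bartowski repositories could be imported into Ollama with the prompt template above; the local GGUF filename and the model tag used afterwards are assumptions, not names taken from this page.

# Hypothetical Modelfile; the GGUF path below is an assumed local filename
FROM ./DeepSeek-Coder-V2-Lite-Instruct-Q8_0.gguf

# Prompt template copied from the template section above
TEMPLATE """{{ if .System }}{{ .System }}
{{ end }}{{ if .Prompt }}User: {{ .Prompt }}
{{ end }}Assistant:{{ .Response }}"""

The model can then be built and run under an arbitrary local tag, for example: ollama create deepseek-coder-v2-lite-q8 -f Modelfile, followed by ollama run deepseek-coder-v2-lite-q8.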
Readme
No readme