82 Pulls Updated 3 months ago
1fd1940effe9 · 8.6GB
model: arch deepseek2 · parameters 15.7B · quantization IQ1_M · 8.6GB
template (137B):
{{ if .System }}System: {{ .System }}
{{ end }}{{ if .Prompt }}User: {{ .Prompt }}
{{ end }}Assist
Readme
Models: DeepSeek Coder V2 Lite Instruct, quantized as IQ4_XS (https://ollama.com/akuldatta/deepseek-coder-v2-lite) and Q5_K_S (https://ollama.com/akuldatta/deepseek-coder-v2-lite:q5ks).
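The tags listed above can be fetched and run with the standard `ollama` CLI (tag names taken from the URLs above; this assumes a local Ollama install):

```shell
# Pull the default (IQ4_XS) tag, then run it interactively.
ollama pull akuldatta/deepseek-coder-v2-lite
ollama run akuldatta/deepseek-coder-v2-lite

# Or use the Q5_K_S quant instead.
ollama run akuldatta/deepseek-coder-v2-lite:q5ks
```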