82 Pulls Updated 3 months ago
f1a7478de823 · 9.5GB
model
arch deepseek2 · parameters 15.7B · quantization Q4_K_S · 9.5GB
params
{"stop":["System:","User:","Assistant:","\u003c|begin_of_text|\u003e"]}
72B
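For context, a minimal sketch of passing these same stop strings to a locally running Ollama server through its /api/generate endpoint. The tag akuldatta/deepseek-coder-v2-lite is an assumption taken from the Readme links below; substitute whichever tag you actually pulled.

# Sketch: non-streaming generate request that reproduces the stop strings
# from the params above. Assumes Ollama is listening on the default port 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "akuldatta/deepseek-coder-v2-lite",  # assumed tag, see Readme
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,
        # These stop strings are already baked into the model's params;
        # passing them in "options" just makes the behaviour explicit.
        "options": {"stop": ["System:", "User:", "Assistant:", "<|begin_of_text|>"]},
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])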
template
{{ if .System }}System: {{ .System }}
{{ end }}{{ if .Prompt }}User: {{ .Prompt }}
{{ end }}Assist
122B
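The template above is a Go prompt template, and the listing cuts off after "Assist", so only the System:/User: lines are fully visible. As a rough sketch of how a system message and a user prompt are laid out by those visible lines (not the model's complete rendered prompt):

# Sketch mirroring only the visible part of the template; the truncated tail
# (presumably the assistant turn) is not reproduced here.
def render_visible_prompt(system: str | None, prompt: str | None) -> str:
    parts = []
    if system:   # {{ if .System }}System: {{ .System }}
        parts.append(f"System: {system}\n")
    if prompt:   # {{ if .Prompt }}User: {{ .Prompt }}
        parts.append(f"User: {prompt}\n")
    return "".join(parts)

print(render_visible_prompt("You are a concise coding assistant.",
                            "Explain what Q4_K_S quantization means."))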
Readme
Models: DeepSeek Coder V2 Instruct Lite, quantized as IQ4_XS (https://ollama.com/akuldatta/deepseek-coder-v2-lite) and Q5_K_S (https://ollama.com/akuldatta/deepseek-coder-v2-lite:q5ks).
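A short usage sketch for the quants linked above, again assuming a local Ollama server on the default port; note that current API docs name the pull field "model", while some older servers expected "name".

# Sketch: pull the Q5_K_S quant from the Readme link, then run a quick
# non-streaming generation against it.
import requests

BASE = "http://localhost:11434"
TAG = "akuldatta/deepseek-coder-v2-lite:q5ks"  # tag from the Readme link

# Pull the model (non-streaming); current docs use "model", older ones "name".
requests.post(f"{BASE}/api/pull", json={"model": TAG, "stream": False},
              timeout=3600).raise_for_status()

# Quick completion to confirm the model responds.
resp = requests.post(
    f"{BASE}/api/generate",
    json={"model": TAG, "prompt": "Say hello in one short sentence.", "stream": False},
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])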