
Quantized versions of DeepSeek Coder v1.5, plus Q8_0_L quantizations of the v2 Lite models from bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF and bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF.

Models

14 models

deepseek-coder:7b-base-v1.5-q4_0 · 4.0GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-base-v1.5-q4_k · 4.2GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-base-v1.5-q5_k · 4.9GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-base-v1.5-q6_k · 5.7GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-base-v1.5-q8_0 · 7.3GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-base-v1.5-f16 · 14GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-instruct-v1.5-q4_0 · 4.0GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-instruct-v1.5-q4_k · 4.2GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-instruct-v1.5-q5_k · 4.9GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-instruct-v1.5-q6_k · 5.7GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-instruct-v1.5-q8_0 · 7.3GB · 4K context window · Text · 1 year ago

deepseek-coder:7b-instruct-v1.5-f16 · 14GB · 4K context window · Text · 1 year ago

deepseek-coder:16b-base-v2-q8_0_l · 17GB · 4K context window · Text · 1 year ago

deepseek-coder:16b-instruct-v2-q8_0_l · 17GB · 4K context window · Text · 1 year ago
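Any of the tags above can be pulled and run directly with Ollama, or used as the base of a custom Modelfile. The sketch below assumes the tags are reachable under the name deepseek-coder in your Ollama registry; the parameter values are illustrative, not recommendations from this page.

```
# Modelfile sketch — build on one of the instruct quantizations listed above
FROM deepseek-coder:7b-instruct-v1.5-q4_k

# Illustrative sampling settings; num_ctx matches the 4K context window above
PARAMETER temperature 0.2
PARAMETER num_ctx 4096
```

Typical usage: `ollama create my-coder -f Modelfile`, then `ollama run my-coder`. To use a tag as-is without a Modelfile, `ollama run deepseek-coder:7b-instruct-v1.5-q4_k` pulls and starts it in one step.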
