huihui_ai/
deepseek-v3-pruned:411b-coder-0324


DeepSeek-V3-Pruned-Coder-411B is a pruned version of DeepSeek-V3, reduced from 256 experts to 160 experts. The pruned model is mainly intended for code generation.

411b · 757ec0f0d54c · 257GB · deepseek2 · 426B parameters · Q4_K_M · MIT License · Updated 5 months ago
{ "num_gpu": 1, "stop": [ "<|begin▁of▁sentence|>", "<|end▁of▁s
{{- if .System }}{{ .System }}{{ end }} {{- range $i, $_ := .Messages }} {{- $last := eq (len (slice

Readme

Parameter description

1. num_gpu
The model ships with num_gpu set to 1, which means only one layer is offloaded to the GPU by default; all remaining layers are loaded into CPU memory. You can increase num_gpu according to your GPU configuration.

/set parameter num_gpu 2
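
If you do not want to repeat this in every session, the parameter can also be baked into a derived model through a Modelfile. The sketch below is only an example: the model name deepseek-coder-gpu is a placeholder, and the num_gpu value should match how many layers actually fit in your GPU memory.

# Modelfile
FROM huihui_ai/deepseek-v3-pruned:411b-coder-0324
PARAMETER num_gpu 2

ollama create deepseek-coder-gpu -f Modelfile
ollama run deepseek-coder-gpu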

2. num_thread
num_thread sets the number of CPU threads used for inference. It is recommended to use about half the number of cores in your machine; otherwise the CPU will sit at 100%.

/set parameter num_thread 32
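
To pick a reasonable value, check the core count first and use roughly half of it; the value 32 above assumes a machine with 64 cores.

nproc                # Linux: prints the number of logical cores, e.g. 64
sysctl -n hw.ncpu    # macOS equivalent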

3. num_ctx
num_ctx in ollama sets the size of the context window, i.e. the maximum number of tokens the model can keep in context during inference.

/set parameter num_ctx 4096
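
The same options can also be passed per request through the ollama REST API instead of the interactive /set commands. The sketch below assumes the default local endpoint http://localhost:11434 and reuses the example values from above; the prompt is arbitrary.

curl http://localhost:11434/api/generate -d '{
  "model": "huihui_ai/deepseek-v3-pruned:411b-coder-0324",
  "prompt": "Write a Python function that reverses a linked list.",
  "stream": false,
  "options": {
    "num_gpu": 2,
    "num_thread": 32,
    "num_ctx": 4096
  }
}'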

References

huihui-ai/DeepSeek-V3-Pruned-Coder-411B

huihui-ai/DeepSeek-V3-0324-Pruned-Coder-411B

Donation

You can follow x.com/support_huihui to get the latest model information from huihui.ai.

Your donation helps us continue further development and improvement; even the price of a cup of coffee makes a difference.
  • bitcoin:
  bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge