A strong, economical, and efficient Mixture-of-Experts language model.

Note: this model requires Ollama 0.1.40 or later.

DeepSeek-V2 is a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference.

Note: this model is bilingual in English and Chinese.

The model comes in two sizes:

  • 16B Lite: ollama run deepseek-v2:16b
  • 236B: ollama run deepseek-v2:236b
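
The sketch below shows one way to call either tag programmatically through Ollama's local HTTP API. It is a minimal example, assuming Ollama is serving on its default port 11434 and that the chosen tag has already been pulled with one of the commands above.

```python
# Minimal sketch: send one prompt to a locally running deepseek-v2 model
# via Ollama's HTTP API (default endpoint http://localhost:11434).
import json
import urllib.request

payload = {
    "model": "deepseek-v2:16b",   # or "deepseek-v2:236b"
    "prompt": "Explain Mixture-of-Experts in one sentence.",
    "stream": False,              # request a single JSON response instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))

print(result["response"])
```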

References

GitHub: https://github.com/deepseek-ai/DeepSeek-V2