deepseek-v3 · Ollama Search
  • deepseek-v3.1

    DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking and non-thinking modes.

    tools thinking cloud 671b

    146.7K Pulls · 8 Tags · Updated 1 month ago

  • deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token (a routing sketch appears after this list).

    671b

    2.7M Pulls · 5 Tags · Updated 10 months ago

  • huihui_ai/deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    7,115 Pulls · 2 Tags · Updated 9 months ago

  • huihui_ai/deepseek-v3-abliterated

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    671b

    2,728 Pulls · 5 Tags · Updated 7 months ago

  • milkey/deepseek-v3-UD

    (Unsloth Dynamic Quants) A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    2,012 Pulls · 3 Tags · Updated 9 months ago

  • huihui_ai/deepseek-v3-pruned

    DeepSeek-V3-Pruned-Coder-411B is a pruned version of DeepSeek-V3, reduced from 256 experts to 160. The pruned model is mainly intended for code generation.

    411b

    1,261 Pulls · 5 Tags · Updated 7 months ago

  • 8b-wraith/deepseek-v3-0324

    DeepSeek-V3-0324 quants. Q2_K is the lowest-precision option here; quantization follows quantized = round((original - zero_point) / scale), sketched after this list.

    1,009 Pulls · 1 Tag · Updated 7 months ago

  • MFDoom/deepseek-v3-tool-calling

    tools 671b

    900 Pulls · 2 Tags · Updated 9 months ago

  • lwk/v3

    ollama run deepseek-v3 (a Python-client equivalent is sketched after this list)

    tools

    800 Pulls · 1 Tag · Updated 9 months ago

  • xiaowangge/deepseek-v3-qwen2.5

    This model was developed from the DistilQwen2.5-DS3-0324-Series.

    tools 32b

    789 Pulls · 7 Tags · Updated 6 months ago

  • sunny-g/deepseek-v3-0324

    Dynamic quants from Unsloth, merged.

    291 Pulls · 1 Tag · Updated 7 months ago

  • org/deepseek-v3-fast

    Single-file version (Dynamic Quants). A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    90 Pulls · 4 Tags · Updated 8 months ago

  • huihui_ai/deepseek-v3.1

    This is not the abliterated version. DeepSeek-V3.1 is a hybrid model that supports both thinking and non-thinking modes.

    tools thinking 671b

    75 Pulls · 3 Tags · Updated 2 months ago

  • pdevine/deepseek-v3.1

    cloud

    26 Pulls · 2 Tags · Updated 1 month ago

  • lucataco/deepseek-v3-64k

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    19 Pulls · 1 Tag · Updated 10 months ago

  • clore/deepseek-v3.1

    12 Pulls · 1 Tag · Updated 2 months ago

  • ukjin/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill

    This model is a distilled version of Qwen/Qwen3-30B-A3B-Instruct, designed to inherit the reasoning and behavioral characteristics of its much larger teacher model, deepseek-ai/DeepSeek-V3.1 (a distillation-loss sketch appears after this list).

    tools thinking 4b

    642 Pulls · 2 Tags · Updated 2 months ago

  • haghiri/DeepSeek-V3-0324

    Unsloth's dynamic quantization, merged.

    1,339 Pulls · 1 Tag · Updated 7 months ago

  • lsm03624/deepseek-r1

    DeepSeek-R1-0528 still uses the DeepSeek V3 Base model released in December 2024 as its foundation, but more compute was invested during post-training, markedly improving the model's depth of thought and reasoning ability. This 8B distilled version has off-the-charts coding ability!

    thinking

    797 Pulls · 1 Tag · Updated 5 months ago

  • chsword/DeepSeek-V3

    tools

    616 Pulls · 1 Tag · Updated 10 months ago
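
Several entries above repeat DeepSeek-V3's headline design point: a Mixture-of-Experts model in which only about 37B of the 671B total parameters are activated per token. A minimal sketch of how top-k expert routing achieves that, with toy dimensions and a plain softmax router; DeepSeek-V3's actual gating and expert layout are considerably more involved:

```python
# Toy top-k Mixture-of-Experts layer: each token runs through only top_k of
# num_experts expert MLPs, so per-token compute is a small fraction of the
# total parameter count. Sizes here are illustrative, not DeepSeek-V3's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                                    # x: (tokens, dim)
        scores = F.softmax(self.gate(x), dim=-1)             # (tokens, experts)
        weights, expert_idx = scores.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(-1, keepdim=True)    # renormalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = expert_idx[:, k] == e                 # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(10, 64)
print(TinyMoE()(x).shape)  # torch.Size([10, 64])
```

Scaled up, this is why the 671b entries can quote 37B activated parameters: total parameter count grows with the number of experts, while per-token compute grows only with top_k.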
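The 8b-wraith/deepseek-v3-0324 entry quotes its quantization rule directly: quantized = round((original - zero_point) / scale). A sketch under the assumption of simple per-tensor min/max affine quantization; the helper names are illustrative, and real GGUF types such as Q2_K apply block-wise variants of this idea rather than this exact code:

```python
# Affine quantization following the formula quoted in the listing:
#   quantized = round((original - zero_point) / scale)
import numpy as np

def quantize(original: np.ndarray, num_bits: int = 2):
    qmin, qmax = 0, 2**num_bits - 1
    zero_point = original.min()                       # float-domain offset
    scale = (original.max() - zero_point) / (qmax - qmin)
    q = np.round((original - zero_point) / scale)
    return np.clip(q, qmin, qmax).astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    return q.astype(np.float32) * scale + zero_point

w = np.random.randn(16).astype(np.float32)
q, s, z = quantize(w)                                 # 2-bit range, Q2_K-style precision
print(np.abs(w - dequantize(q, s, z)).max())          # reconstruction error is large at 2 bits
```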
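The lwk/v3 description is just the CLI invocation. Roughly equivalent usage through the official ollama Python client (pip install ollama); this assumes a local Ollama server is running and that the deepseek-v3 tag actually resolves on your machine, which for a 671B model requires very substantial hardware:

```python
# Python-client counterpart to `ollama run deepseek-v3` (assumes a local
# Ollama server and enough resources to load the model).
import ollama

response = ollama.chat(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "In one sentence, what is a Mixture-of-Experts model?"}],
)
print(response["message"]["content"])
```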
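The ukjin entry describes distillation: training a small student to inherit the behavior of a much larger teacher. A minimal sketch of the standard knowledge-distillation objective (soft teacher targets mixed with hard labels); the temperature and mixing weight are illustrative placeholders, not that model's actual training recipe:

```python
# Standard knowledge-distillation loss: KL divergence between
# temperature-softened teacher and student distributions, mixed with
# ordinary cross-entropy on the ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                     # rescale gradients for temperature
    hard = F.cross_entropy(student_logits, labels)  # ground-truth targets
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(4, 32000, requires_grad=True)  # (batch, vocab)
teacher_logits = torch.randn(4, 32000)
labels = torch.randint(0, 32000, (4,))
print(distillation_loss(student_logits, teacher_logits, labels))
```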
