Ollama
deepseek v3 · Ollama Search
Search for models on Ollama.
  • deepseek-v3.1

    DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode.

    tools thinking cloud 671b

    76.7K Pulls · 8 Tags · Updated yesterday

  • deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated per token.

    671b

    2.3M Pulls · 5 Tags · Updated 8 months ago

  • nezahatkorkmaz/deepseek-v3

    DeepSeek-V3 from Hugging Face: a powerful solution for handling complex requests and advanced coding tasks. Enhance your development workflow with state-of-the-art code assistance and intelligent problem-solving capabilities.

    tools

    19K Pulls · 1 Tag · Updated 9 months ago

  • huihui_ai/deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated per token.

    7,099 Pulls · 2 Tags · Updated 8 months ago

  • huihui_ai/deepseek-v3-abliterated

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated per token.

    671b

    2,080 Pulls · 5 Tags · Updated 5 months ago

  • milkey/deepseek-v3-UD

    (Unsloth Dynamic Quants) A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated per token.

    2,005 Pulls · 3 Tags · Updated 7 months ago

  • haghiri/DeepSeek-V3-0324

    Merged from Unsloth's Dynamic Quantization.

    1,322 Pulls · 1 Tag · Updated 6 months ago

  • huihui_ai/deepseek-v3-pruned

    DeepSeek-V3-Pruned-Coder-411B is a pruned version of DeepSeek-V3, reduced from 256 experts to 160. The pruned model is mainly intended for code generation.

    411b

    1,250 Pulls · 5 Tags · Updated 6 months ago

  • 8b-wraith/deepseek-v3-0324

    deepseek-v3-0324 quants; Q2_K is the lowest offered here. Quantization: quantized = round((original - zero_point) / scale)

    975 Pulls · 1 Tag · Updated 6 months ago

  • MFDoom/deepseek-v3-tool-calling

    tools 671b

    895 Pulls · 2 Tags · Updated 8 months ago

  • lwk/v3

    ollama run deepseek-v3

    tools

    792 Pulls · 1 Tag · Updated 7 months ago

  • lsm03624/deepseek-r1

    DeepSeek-R1-0528 still uses the DeepSeek V3 Base model released in December 2024 as its foundation, but invests more compute in post-training, significantly improving the model's depth of thinking and reasoning ability. This 8B distilled version has outstanding coding ability!

    thinking

    752 Pulls · 1 Tag · Updated 4 months ago

  • xiaowangge/deepseek-v3-qwen2.5

    This model has been developed based on DistilQwen2.5-DS3-0324-Series.

    tools 32b

    678 Pulls · 7 Tags · Updated 4 months ago

  • chsword/DeepSeek-V3

    tools

    598 Pulls · 1 Tag · Updated 8 months ago

  • lordoliver/DeepSeek-V3-0324

    DeepSeek-V3 from March 2025, merged from Unsloth's Hugging Face quants. 671B params; Q8_0 at 713 GB and Q4_K_M at 404 GB.

    671b

    586 Pulls · 4 Tags · Updated 6 months ago

  • sunny-g/deepseek-v3-0324

    Dynamic quants from Unsloth, merged.

    290 Pulls · 1 Tag · Updated 6 months ago

  • mo7art/DeepSeek-V3-0324

    Latest DeepSeek-V3 model, Q4 quant.

    224 Pulls · 1 Tag · Updated 5 months ago

  • ukjin/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill

    This model is a distilled version of Qwen/Qwen3-30B-A3B-Instruct designed to inherit the reasoning and behavioral characteristics of its much larger teacher model, deepseek-ai/DeepSeek-V3.1.

    tools thinking 4b

    215 Pulls · 2 Tags · Updated 2 weeks ago

  • org/deepseek-v3-fast

    Single-file version (Dynamic Quants) of a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated per token.

    85 Pulls · 4 Tags · Updated 7 months ago

  • Hanversion/MAGA-T1-Tieba-1.5B-Distill

    This is a distilled model trained on a recent Tieba dataset, using about 8k examples and chains of thought from DeepSeek-V3.

    47 Pulls · 1 Tag · Updated 6 months ago
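One listing above (8b-wraith/deepseek-v3-0324) cites the affine quantization formula quantized = round((original - zero_point) / scale). A minimal sketch of that mapping and its approximate inverse, assuming an int8 target range; the function names and example values here are illustrative, not taken from any of these models:

```python
def quantize(original, scale, zero_point, qmin=-128, qmax=127):
    # Affine quantization as in the listing:
    # quantized = round((original - zero_point) / scale),
    # clamped to the integer range of the target type (int8 here).
    q = round((original - zero_point) / scale)
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    # Approximate inverse: original ≈ q * scale + zero_point.
    # Information lost to rounding and clamping is not recoverable,
    # which is why lower-bit quants (e.g. Q2_K) trade quality for size.
    return q * scale + zero_point

q = quantize(3.0, scale=0.25, zero_point=1.0)   # → 8
x = dequantize(q, scale=0.25, zero_point=1.0)   # → 3.0
```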

© 2025 Ollama