deepseek · Ollama Search
Search results for models matching "deepseek" on Ollama.
  • deepseek-r1

    DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models such as OpenAI o3 and Gemini 2.5 Pro (see the usage sketch after this list).

    tools thinking 1.5b 7b 8b 14b 32b 70b 671b

    61.9M Pulls · 35 Tags · Updated 2 months ago

  • deepseek-v3.1

    DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode (see the thinking-mode sketch after this list).

    tools thinking 671b

    45.8K Pulls · 4 Tags · Updated 2 weeks ago

  • deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    671b

    2.3M Pulls · 5 Tags · Updated 8 months ago

  • deepseek-coder

    DeepSeek Coder is a capable coding model trained on two trillion code and natural language tokens.

    1.3b 6.7b 33b

    1.2M Pulls · 102 Tags · Updated 1 year ago

  • deepseek-coder-v2

    An open-source Mixture-of-Experts code language model that achieves performance comparable to GPT-4 Turbo in code-specific tasks.

    16b 236b

    1.1M Pulls · 64 Tags · Updated 1 year ago

  • deepseek-llm

    An advanced language model crafted with 2 trillion bilingual tokens.

    7b 67b

    202.8K Pulls · 64 Tags · Updated 1 year ago

  • deepseek-v2

    A strong, economical, and efficient Mixture-of-Experts language model.

    16b 236b

    185.8K Pulls · 34 Tags · Updated 1 year ago

  • deepseek-v2.5

    An upgraded version of DeepSeek-V2 that integrates the general and coding abilities of both DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.

    236b

    66.2K Pulls · 7 Tags · Updated 1 year ago

  • openthinker

    A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.

    7b 32b

    597K Pulls · 15 Tags · Updated 5 months ago

  • deepscaler

    A fine-tuned version of DeepSeek-R1-Distill-Qwen-1.5B that surpasses the performance of OpenAI's o1-preview on popular math evaluations with just 1.5B parameters.

    1.5b

    327.6K Pulls · 5 Tags · Updated 7 months ago

  • r1-1776

    A version of the DeepSeek-R1 model post-trained by Perplexity to provide unbiased, accurate, and factual information.

    70b 671b

    106.6K Pulls · 9 Tags · Updated 6 months ago

  • deepseek-140B/DeepSeekAI140B

    5,606 Pulls · 1 Tag · Updated 7 months ago

  • erwan2/DeepSeek-Janus-Pro-7B

    5.4M Pulls · 1 Tag · Updated 7 months ago

  • huihui_ai/deepseek-r1-abliterated

    DeepSeek's first-generation reasoning models, with performance comparable to OpenAI o1.

    thinking 1.5b 7b 8b 14b 32b 70b

    564.1K Pulls · 55 Tags · Updated 3 months ago

  • ishumilin/deepseek-r1-coder-tools

    A modified model that adds support for autonomous coding agents such as Cline.

    tools 1.5b 7b 8b 14b 32b 70b

    554.6K Pulls · 6 Tags · Updated 6 months ago

  • secfa/DeepSeek-R1-UD-IQ1_S

    Unsloth's DeepSeek-R1, merged and uploaded here. This is the full 671B model. MoE bits: 1.58-bit; type: UD-IQ1_S; disk size: 131GB; accuracy: fair. Details: MoE layers are all 1.56-bit, with down_proj in the MoE a mixture of 2.06/1.56-bit.

    170.9K Pulls · 2 Tags · Updated 7 months ago

  • SIGJNF/deepseek-r1-671b-1.58bit

    Unsloth's DeepSeek-R1 at 1.58-bit, merged and uploaded here. This is the full 671B model, dynamically quantized to 1.58 bits.

    101K Pulls · 1 Tag · Updated 7 months ago

  • Huzderu/deepseek-r1-671b-2.51bit

    Merged GGUF of Unsloth's DeepSeek-R1 671B 2.51-bit dynamic quant.

    60.4K Pulls · 1 Tag · Updated 7 months ago

  • Huzderu/deepseek-r1-671b-1.73bit

    Merged GGUF of Unsloth's DeepSeek-R1 671B 1.73-bit dynamic quant.

    26.7K Pulls · 1 Tag · Updated 7 months ago

  • thirdeyeai/DeepSeek-R1-Distill-Qwen-7B-uncensored

    25.4K Pulls · 4 Tags · Updated 4 months ago
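
The size tags shown above (1.5b, 7b, 8b, and so on) select a specific variant when pulling or running a model, for example deepseek-r1:8b. As a minimal sketch, assuming an Ollama server is running locally on its default port 11434 and the Python requests package is installed, the snippet below pulls a tagged DeepSeek-R1 variant and sends it one chat message through Ollama's REST API; double-check the request fields against the API docs for your Ollama version.

    # Minimal sketch: pull a tagged DeepSeek model and chat with it via the
    # Ollama REST API. Assumes an Ollama server on localhost:11434.
    import requests

    OLLAMA = "http://localhost:11434"
    MODEL = "deepseek-r1:8b"  # any size tag from the listing, e.g. "deepseek-r1:70b"

    # Download the model if it is not already present. Recent Ollama API docs
    # use the "model" field for /api/pull (older versions used "name").
    requests.post(f"{OLLAMA}/api/pull",
                  json={"model": MODEL, "stream": False}).raise_for_status()

    # One non-streaming chat turn; the reply arrives as a single JSON object
    # whose "message" field holds the assistant's answer.
    resp = requests.post(f"{OLLAMA}/api/chat",
                         json={"model": MODEL,
                               "messages": [{"role": "user",
                                             "content": "Summarize what a Mixture-of-Experts model is."}],
                               "stream": False})
    resp.raise_for_status()
    print(resp.json()["message"]["content"])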
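
Models tagged "thinking" above (deepseek-r1, deepseek-v3.1) can separate their reasoning from the final answer. The sketch below assumes the "think" request field and the "thinking" response field exposed by recent Ollama versions; treat both field names as assumptions to verify against your server's API documentation.

    # Hedged sketch: toggling thinking mode on a "thinking"-tagged model.
    # The "think" request field and "thinking" response field are assumed
    # from Ollama's thinking-mode support and may differ by version.
    import requests

    OLLAMA = "http://localhost:11434"

    def ask(model: str, prompt: str, think: bool) -> dict:
        """Single non-streaming chat turn; `think` toggles thinking mode (assumed)."""
        resp = requests.post(f"{OLLAMA}/api/chat",
                             json={"model": model,
                                   "messages": [{"role": "user", "content": prompt}],
                                   "think": think,   # assumed field name
                                   "stream": False})
        resp.raise_for_status()
        return resp.json()["message"]

    msg = ask("deepseek-r1:8b", "Is 9.11 greater than 9.9?", think=True)
    print("thinking:", msg.get("thinking", "<not returned>"))  # reasoning trace, if provided
    print("answer:  ", msg["content"])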
