deepseek · Ollama Search
Search for models on Ollama.
  • deepseek-r1

    DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models such as OpenAI o3 and Gemini 2.5 Pro.

    tools thinking 1.5b 7b 8b 14b 32b 70b 671b

    74.2M Pulls · 35 Tags · Updated 5 months ago

  • deepseek-v3.1

    DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode.

    tools thinking cloud 671b

    200.4K Pulls · 8 Tags · Updated 2 months ago

  • deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token.

    671b

    2.9M Pulls · 5 Tags · Updated 11 months ago

  • deepseek-coder

    DeepSeek Coder is a capable coding model trained on two trillion code and natural language tokens.

    1.3b 6.7b 33b

    2.2M Pulls · 102 Tags · Updated 1 year ago

  • deepseek-coder-v2

    An open-source Mixture-of-Experts code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks.

    16b 236b

    1.3M Pulls · 64 Tags · Updated 1 year ago

  • deepseek-llm

    An advanced language model crafted with 2 trillion bilingual tokens.

    7b 67b

    234.7K Pulls · 64 Tags · Updated 2 years ago

  • deepseek-v2

    A strong, economical, and efficient Mixture-of-Experts language model.

    16b 236b

    224.1K Pulls · 34 Tags · Updated 1 year ago

  • deepseek-v2.5

    An upgraded version of DeepSeek-V2 that integrates the general and coding abilities of both DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.

    236b

    89.5K Pulls · 7 Tags · Updated 1 year ago

  • deepseek-ocr

    DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.

    vision 3b

    58.9K Pulls · 3 Tags · Updated 3 weeks ago

  • deepseek-v3.2

    DeepSeek-V3.2 is a model that harmonizes high computational efficiency with strong reasoning and agent performance.

    cloud

    3,169 Pulls · 1 Tag · Updated 6 days ago

  • deepscaler

    A fine-tuned version of DeepSeek-R1-Distill-Qwen-1.5B that surpasses the performance of OpenAI’s o1-preview on popular math evaluations with just 1.5B parameters.

    1.5b

    843.7K Pulls · 5 Tags · Updated 10 months ago

  • openthinker

    A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.

    7b 32b

    628K Pulls · 15 Tags · Updated 8 months ago

  • r1-1776

    A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.

    70b 671b

    153K Pulls · 9 Tags · Updated 9 months ago

  • deepseek-140B/DeepSeekAI140B

    5,659 Pulls · 1 Tag · Updated 10 months ago

  • erwan2/DeepSeek-Janus-Pro-7B

    5.4M Pulls · 1 Tag · Updated 10 months ago

  • huihui_ai/deepseek-r1-abliterated

    DeepSeek's first-generation reasoning models with performance comparable to OpenAI o1.

    thinking 1.5b 7b 8b 14b 32b 70b

    607K Pulls · 55 Tags · Updated 6 months ago

  • ishumilin/deepseek-r1-coder-tools

    A modified model that adds support for autonomous coding agents such as Cline.

    tools 1.5b 7b 8b 14b 32b 70b

    556K Pulls · 6 Tags · Updated 9 months ago

  • secfa/DeepSeek-R1-UD-IQ1_S

    Unsloth's DeepSeek-R1, merged and uploaded here. This is the full 671b model. MoE bits: 1.58-bit; type: UD-IQ1_S; disk size: 131GB; accuracy: fair. Details: MoE layers all at 1.56-bit; down_proj in the MoE mixture at 2.06/1.56-bit.

    170.9K Pulls · 2 Tags · Updated 10 months ago

  • hengwen/DeepSeek-R1-Distill-Qwen-32B

    DeepSeek-R1-Distill models are fine-tuned from open-source base models using samples generated by DeepSeek-R1. The configs and tokenizers are slightly changed; please use the provided settings to run these models.

    119.8K Pulls · 2 Tags · Updated 10 months ago

  • SIGJNF/deepseek-r1-671b-1.58bit

    Unsloth's DeepSeek-R1 at 1.58-bit, merged and uploaded here. This is the full 671b model, dynamically quantized to 1.58 bits.

    101.4K Pulls · 1 Tag · Updated 10 months ago
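The size tags above (1.5b, 7b, 671b, and so on) denote approximate parameter counts. A minimal sketch, assuming the simple `b`/`m` suffix convention used in these tags, that converts a tag to a raw parameter count:

```python
def tag_to_params(tag: str) -> int:
    """Convert a size tag like '7b' or '1.5b' to an approximate parameter count."""
    suffixes = {"b": 1_000_000_000, "m": 1_000_000}
    unit = tag[-1].lower()
    if unit not in suffixes:
        raise ValueError(f"unrecognized size tag: {tag!r}")
    return int(float(tag[:-1]) * suffixes[unit])

# Examples drawn from tags in the listing above:
print(tag_to_params("1.5b"))  # 1500000000
print(tag_to_params("671b"))  # 671000000000
```

Note the tags are nominal labels, not exact counts: deepseek-v3, for example, has 671B total parameters but only 37B activated per token.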

© 2025 Ollama