deepseek1 · Ollama
  • deepseek-v3.1

    DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode.

    tools thinking cloud 671b

    580.7K Pulls · 8 Tags · Updated 6 months ago

  • deepseek-r1

    DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models such as OpenAI's o3 and Gemini 2.5 Pro.

    tools thinking 1.5b 7b 8b 14b 32b 70b 671b

    82.9M Pulls · 35 Tags · Updated 9 months ago

  • deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    671b

    3.8M Pulls · 5 Tags · Updated 1 year ago

  • deepscaler

    A fine-tuned version of DeepSeek-R1-Distill-Qwen-1.5B that surpasses the performance of OpenAI's o1-preview on popular math evaluations with just 1.5B parameters.

    1.5b

    1.2M Pulls · 5 Tags · Updated 1 year ago

  • openthinker

    A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.

    7b 32b

    1.1M Pulls · 15 Tags · Updated 1 year ago

  • r1-1776

    A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.

    70b 671b

    391.9K Pulls · 9 Tags · Updated 1 year ago

  • deepseek-140B/DeepSeekAI140B

    5,740 Pulls · 1 Tag · Updated 1 year ago

  • llm-lm/deepseek1

    3 Pulls · 1 Tag · Updated 12 months ago

  • weixuan/pre_train_deepseek1.5B

    A DeepSeek pretrained 1.5B model.

    25 Pulls · 1 Tag · Updated 1 year ago

  • DanyaVoredom/DanyAI-deepseek-coder-1.3b-base

    30 Pulls · 1 Tag · Updated 3 weeks ago

  • DanyaVoredom/DanyAI-deepseek-coder-1.3b-instruct

    17 Pulls · 1 Tag · Updated 3 weeks ago

  • DedeProGames/smallcoder

    SmallCoder is a compact reasoning-focused coding model, fine-tuned from DeepSeek-R1 1.5B using a code dataset that includes step-by-step reasoning.

    1.5b

    170 Pulls · 1 Tag · Updated 2 months ago

  • sparksammy/deepseek-14b-unsloth

    thinking

    98 Pulls · 3 Tags · Updated 2 months ago

  • lsm03624/deepseek-r1

    DeepSeek-R1-0528 still uses the DeepSeek V3 Base model released in December 2024 as its foundation, but invests more compute in post-training, significantly improving the model's depth of thought and reasoning ability. This 8B distilled version also has outstanding coding ability!

    thinking

    1,008 Pulls · 1 Tag · Updated 10 months ago

  • mikepfunk28/deepseekq3_agent

    16K context window, meaning you need less RAM to run this model. The full context window is loaded in deepseekq3_coder. The RAM needed for the context is allocated when the model is loaded.

    tools thinking

    506 Pulls · 1 Tag · Updated 8 months ago

  • DedeProGames/smallmath

    SmallMath is a compact reasoning-focused math model, fine-tuned from DeepSeek-R1 1.5B using a math dataset that includes step-by-step reasoning.

    13 Pulls · 1 Tag · Updated 1 month ago

  • kongxiangyiren/Neko-Chat

    A lightweight Chinese chat model fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B, with a built-in catgirl speech style and affectionate tone.

    334 Pulls · 1 Tag · Updated 6 months ago

  • zhihu/zhi-create-dsr1-14b

    Zhi-Create-DSR1-14B is a fine-tuned model based on DeepSeek-R1-Distill-Qwen-14B, specifically optimized for enhanced creative writing capabilities. Several benchmark evaluations indicate the model's improved creative writing performance.

    284 Pulls · 2 Tags · Updated 10 months ago

  • alsdjfalsdjfs/DeepSeek-R1-0528-IQ1_S

    (168GB) unsloth/DeepSeek-R1-0528-GGUF:IQ1_S

    thinking 671b

    246 Pulls · 3 Tags · Updated 10 months ago

  • chng/deepseek-R1-1.5B

    tools

    122 Pulls · 1 Tag · Updated 9 months ago
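Several entries above carry a thinking tag, and deepseek-v3.1 can switch between thinking and non-thinking modes per request. Ollama's /api/chat endpoint exposes this through a "think" field in the request body. The sketch below only builds that request body (the prompt is illustrative) and does not contact a running Ollama server:

```python
import json

def chat_payload(model: str, prompt: str, think: bool) -> str:
    """Build a JSON body for Ollama's /api/chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # "think" toggles the hybrid model between thinking and
        # non-thinking mode for this request.
        "think": think,
        "stream": False,
    }
    return json.dumps(payload)

body = chat_payload("deepseek-v3.1", "Why is the sky blue?", think=True)
print(body)
```

To actually send it, POST the body to `http://localhost:11434/api/chat` on a machine running Ollama.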
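The size labels shown on entries like deepseek-r1 (1.5b through 671b) are Ollama tags: a model reference has the form name:tag, so `ollama pull deepseek-r1:7b` fetches the 7B variant and the tag defaults to `latest` when omitted. A minimal sketch of that naming convention:

```python
def model_ref(name: str, tag: str = "latest") -> str:
    """Build an Ollama model reference like 'deepseek-r1:7b'.

    When no tag is given, Ollama resolves the reference to ':latest'.
    """
    return f"{name}:{tag}"

# Each size tag listed on the deepseek-r1 entry is a pullable variant:
for size in ["1.5b", "7b", "8b", "14b", "32b", "70b", "671b"]:
    print(model_ref("deepseek-r1", size))
```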
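The RAM saving claimed for the 16K-context deepseekq3_agent entry comes from the KV cache, whose size grows linearly with the context length allocated at load time. A back-of-the-envelope estimate for standard attention, using hypothetical layer and head dimensions rather than that model's actual architecture:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: int = 2) -> int:
    """Estimate KV-cache size for standard multi-head attention.

    K and V each store ctx_len * head_dim values per KV head per layer;
    bytes_per_elem=2 assumes fp16/bf16 cache entries.
    """
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical dimensions, for illustration only:
full = kv_cache_bytes(32, 8, 128, 128_000)
short = kv_cache_bytes(32, 8, 128, 16_000)
print(f"{full / 2**30:.1f} GiB at 128K context vs "
      f"{short / 2**30:.1f} GiB at 16K context")
```

Because the estimate is linear in context length, trimming a 128K window to 16K cuts the cache to one eighth, which is the kind of saving the entry describes.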

© 2026 Ollama