octo · Ollama Search
Search results for "octo" on Ollama.
  • deepseek-ocr

    DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.

    vision 3b

    60.3K Pulls · 3 Tags · Updated 3 weeks ago

  • tinyllama

    The TinyLlama project is an open endeavor to train a compact 1.1B Llama model on 3 trillion tokens.

    1.1b

    3.2M Pulls · 36 Tags · Updated 1 year ago

  • llama3-groq-tool-use

    A series of models from Groq that represent a significant advancement in open-source AI capabilities for tool use/function calling (see the usage sketch after the results).

    tools 8b 70b

    123.6K Pulls · 33 Tags · Updated 1 year ago

  • olmo2

    OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.

    7b 13b

    3.4M Pulls · 9 Tags · Updated 11 months ago

  • ih0dl/octocore.multimodal.v0.1

    vision

    1,597 Pulls · 1 Tag · Updated 1 year ago

  • neonshark/octopus-v2-q4_k_m

    Pulled from https://huggingface.co/second-state/Octopus-v2-GGUF/blob/main/Octopus-v2-Q4_K_M.gguf

    158 Pulls · 1 Tag · Updated 1 year ago

  • mapler/octopus-v2-f16

    50 Pulls · 1 Tag · Updated 1 year ago

  • ih0dl/octopus

    A coding assistant with a 32k context window.

    27 Pulls · 4 Tags · Updated 1 year ago

  • deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    671b

    2.9M Pulls · 5 Tags · Updated 11 months ago

  • mistral-openorca

    Mistral OpenOrca is a 7 billion parameter model, fine-tuned on top of the Mistral 7B model using the OpenOrca dataset.

    7b

    209.6K Pulls · 17 Tags · Updated 2 years ago

  • org/qwen2.5-1m

    The long-context version of Qwen2.5, supporting 1M-token context lengths.

    7b 14b

    1,325 Pulls · 2 Tags · Updated 9 months ago

  • Omoeba/qwen3-coder-128k

    Tool support and a 128k context length by default.

    tools 30b

    1,178 Pulls · 1 Tag · Updated 2 months ago

  • okamototk/deepcoder

    DeepCoder with tool-calling (MCP) support.

    tools 14b

    212 Pulls · 1 Tag · Updated 7 months ago

  • oybekdevuz/command-r

    [A fixed version of Command R that actually supports tool calls.] Command R is a large language model optimized for conversational interaction and long-context tasks.

    tools

    118 Pulls · 1 Tag · Updated 8 months ago

  • oybekdevuz/command-a

    [A fixed version of Command A that actually supports tool calls.] A 111-billion-parameter model optimized for demanding enterprises that require fast, secure, and high-quality AI.

    tools

    34 Pulls · 1 Tag · Updated 8 months ago

  • openllm/erosumika

    https://huggingface.co/localfultonextractor/Erosumika-7B-GGUF

    1,125 Pulls · 4 Tags · Updated 1 year ago

  • nuibang/Cline_FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview

    A fused model adapted for Cline / Roo Code tool use in VS Code; a hybrid of DeepSeek-R1 and Qwen2.5-Coder, from FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview.

    tools

    4,301 Pulls · 2 Tags · Updated 10 months ago

  • scb10x/typhoon-ocr-7b

    Typhoon-OCR: a document-parsing model built for Thai and English.

    vision

    2,205 Pulls · 1 Tag · Updated 6 months ago

  • zac/phi4-tools

    Phi-4 is a 14B-parameter, state-of-the-art open model from Microsoft. A clone of `phi4` with a tool-calling template.

    tools

    1,479 Pulls · 1 Tag · Updated 10 months ago

  • scb10x/typhoon-ocr-3b

    Typhoon-OCR: a document-parsing model built for Thai and English.

    vision

    1,473 Pulls · 1 Tag · Updated 5 months ago
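
Several of the results above are tagged `tools`, meaning they can be driven through Ollama's tool-calling (function-calling) chat interface. The sketch below is a minimal, illustrative example rather than an official recipe: it calls `llama3-groq-tool-use` from the list via Ollama's documented `/api/chat` endpoint on the default local port, assumes the model has already been pulled (for example with `ollama pull llama3-groq-tool-use`), and uses a made-up `get_current_weather` tool definition.

```python
# Minimal sketch (not from the listing itself): calling a `tools`-tagged model
# through Ollama's REST API. Assumes a local Ollama server on the default port
# and that the model has been pulled, e.g. `ollama pull llama3-groq-tool-use`.
import json

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama chat endpoint
MODEL = "llama3-groq-tool-use"                  # any `tools`-tagged result above

# Illustrative tool definition; the name and schema are invented for this example.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "What is the weather in Bangkok?"}],
    "tools": tools,
    "stream": False,  # ask for a single JSON response instead of a stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
message = resp.json()["message"]

# A tool-capable model may answer with `tool_calls` instead of plain text.
for call in message.get("tool_calls", []):
    fn = call["function"]
    print("tool requested:", fn["name"], json.dumps(fn["arguments"]))

print("assistant:", message.get("content", ""))
```

Vision-tagged results such as `deepseek-ocr` or `scb10x/typhoon-ocr-7b` use the same endpoint; instead of tool definitions, a chat message carries base64-encoded images in its `images` field.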

© 2025 Ollama Inc.