Ollama
octo · Ollama
Search for models on Ollama.
  • deepseek-ocr

    DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.

    vision 3b

    176.3K Pulls · 3 Tags · Updated 2 months ago

  • tinyllama

    The TinyLlama project is an open endeavor to train a compact 1.1B Llama model on 3 trillion tokens.

    1.1b

    3.6M Pulls · 36 Tags · Updated 2 years ago

  • llama3-groq-tool-use

    A series of models from Groq that represent a significant advancement in open-source AI capabilities for tool use/function calling.

    tools 8b 70b

    352.5K Pulls · 33 Tags · Updated 1 year ago

  • olmo2

    OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.

    7b 13b

    3.5M Pulls · 9 Tags · Updated 1 year ago

  • ih0dl/octocore.multimodal.v0.1

    vision

    1,597 Pulls · 1 Tag · Updated 1 year ago

  • neonshark/octopus-v2-q4_k_m

    Pulled from https://huggingface.co/second-state/Octopus-v2-GGUF/blob/main/Octopus-v2-Q4_K_M.gguf

    161 Pulls · 1 Tag · Updated 1 year ago

  • rjmalagon/octoMed-jsl

    vision

    1 Pull · 1 Tag · Updated 1 month ago

  • mapler/octopus-v2-f16

    50 Pulls · 1 Tag · Updated 1 year ago

  • ih0dl/octopus

    A coding assistant with a 32k context window.

    27 Pulls · 4 Tags · Updated 1 year ago

  • glm-ocr

    GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture.

    vision tools

    33.8K Pulls · 3 Tags · Updated 1 week ago

  • Omoeba/qwen3-coder-128k

    Tool support and a 128K context length by default.

    tools 30b

    1,230 Pulls · 1 Tag · Updated 4 months ago

  • org/qwen2.5-1m

    The long-context version of Qwen2.5, supporting 1M-token context lengths.

    7b 14b

    1,925 Pulls · 2 Tags · Updated 12 months ago

  • deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    671b

    3.5M Pulls · 5 Tags · Updated 1 year ago

  • okamototk/deepcoder

    DeepCoder with tool-calling (MCP) support.

    tools 14b

    239 Pulls · 1 Tag · Updated 9 months ago

  • oybekdevuz/command-r

    [A fixed version of Command R that actually supports tool calls.] Command R is a large language model optimized for conversational interaction and long-context tasks.

    tools

    123 Pulls · 1 Tag · Updated 11 months ago

  • mistral-openorca

    Mistral OpenOrca is a 7-billion-parameter model, fine-tuned on top of Mistral 7B using the OpenOrca dataset.

    7b

    338.9K Pulls · 17 Tags · Updated 2 years ago

  • oybekdevuz/command-a

    [A fixed version of Command A that actually supports tool calls.] A 111-billion-parameter model optimized for demanding enterprises that require fast, secure, and high-quality AI.

    tools

    35 Pulls · 1 Tag · Updated 11 months ago

  • openllm/erosumika

    https://huggingface.co/localfultonextractor/Erosumika-7B-GGUF

    1,165 Pulls · 4 Tags · Updated 1 year ago

  • voytas26/openclaw-qwen3vl-8b-opt

    An optimized 8B model (qwen3-vl:8b) for OpenClaw agents: precise JSON tool calls, <thinking> reasoning, temperature 0.5, 16K context. Runs smoothly on laptops with 8 GB of VRAM, with minimal hallucinations.

    vision tools thinking

    1,548 Pulls · 1 Tag · Updated 2 weeks ago

  • scb10x/typhoon-ocr1.5-3b

    Typhoon-OCR 1.5 is a document-parsing model built for Thai and English.

    vision

    1,174 Pulls · 1 Tag · Updated 3 months ago

© 2026 Ollama