olla · Ollama Search
Search for models on Ollama. Each result below can be pulled and run locally; a minimal usage sketch follows the list.
  • olmo-3

    Olmo is a series of Open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.

    7b 32b

5,832 Pulls · 15 Tags · Updated 3 days ago

  • orca2

Orca 2 is built by Microsoft Research and is a fine-tuned version of Meta's Llama 2 models. The model is designed to excel particularly at reasoning.

    7b 13b

102.5K Pulls · 33 Tags · Updated 2 years ago

  • olmo-3.1

    Olmo is a series of Open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.

    tools 32b

5,186 Pulls · 10 Tags · Updated 3 days ago

  • opencoder

OpenCoder is an open and reproducible code LLM family that includes 1.5B and 8B models and supports chat in both English and Chinese.

    1.5b 8b

272K Pulls · 9 Tags · Updated 1 year ago

  • tinyllama

    The TinyLlama project is an open endeavor to train a compact 1.1B Llama model on 3 trillion tokens.

    1.1b

3.2M Pulls · 36 Tags · Updated 1 year ago

  • llama3.3

A new state-of-the-art 70B model. Llama 3.3 70B offers performance similar to that of the Llama 3.1 405B model.

    tools 70b

2.8M Pulls · 14 Tags · Updated 1 year ago

  • meditron

An open-source large language model adapted from Llama 2 to the medical domain.

    7b 70b

118K Pulls · 22 Tags · Updated 2 years ago

  • firefunction-v2

    An open weights function calling model based on Llama 3, competitive with GPT-4o function calling capabilities.

    tools 70b

59.8K Pulls · 17 Tags · Updated 1 year ago

  • command-r7b-arabic

    A new state-of-the-art version of the lightweight Command R7B model that excels in advanced Arabic language capabilities for enterprises in the Middle East and Northern Africa.

    tools 7b

42.4K Pulls · 5 Tags · Updated 9 months ago

  • mistral-nemo

    A state-of-the-art 12B model with 128k context length, built by Mistral AI in collaboration with NVIDIA.

    tools 12b

3.1M Pulls · 17 Tags · Updated 4 months ago

  • ollam/unichat-llama3-chinese-8b

    https://github.com/UnicomAI/Unichat-llama3-Chinese

7,860 Pulls · 2 Tags · Updated 1 year ago

  • ollama_2/test

    ZENCIA

    tools

332 Pulls · 1 Tag · Updated 8 months ago

  • ollamaced/gemma3_27b_pml_multiPDF_q4k_m

This is an 8-bit quantized version of gemma3:27b fine-tuned with YagCed/Aveva_PML_test on HF. If you're from Aveva and want this model removed from public view, please let me know.

79 Pulls · 1 Tag · Updated 7 months ago

  • ollamaced/gemma3_1b_spiders

A very small test: gemma3:1b fine-tuned on a dataset obsessed with spiders. As a result, this model puts spiders in all its answers. It is useless: just a pet project to learn how to generate a dataset and fine-tune a small model.

51 Pulls · 1 Tag · Updated 7 months ago

  • ollamay/w3e

    embedding

50 Pulls · 1 Tag · Updated 1 year ago

  • ollamaced/gemma3_27b_PML_test01

Testing fine-tuning of gemma3:27b with a single PDF about PML.

22 Pulls · 1 Tag · Updated 7 months ago

  • ollamaced/gemma3_1b_scale_relativity

A very small test: gemma3:1b fine-tuned on a dataset created with gemma3:27b from an article by Laurent Nottale on Scale Relativity. It is pretty much useless; I did it only to learn how to generate a dataset and fine-tune a small model.

17 Pulls · 1 Tag · Updated 7 months ago

  • ollamaced/holo1_7B_f16

    tools

12 Pulls · 1 Tag · Updated 6 months ago

  • ollama-publish-lover/e-d-2.4b-q4km

9 Pulls · 1 Tag · Updated 9 months ago

  • ollama_2/mom

    tools

8 Pulls · 1 Tag · Updated 8 months ago
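
Any of the models listed above can be pulled and run locally once Ollama is installed. Below is a minimal sketch, assuming the official ollama Python client, a local Ollama server on its default port (localhost:11434), and that the olmo-3:7b tag shown in the listing resolves on your install:

    # Pull one of the listed models and ask it a question through the local
    # Ollama server. The model tag is taken from the search results above.
    import ollama

    # Download the model if it is not already present locally.
    ollama.pull("olmo-3:7b")

    # Send a single chat turn and print the assistant's reply.
    # (Recent client versions also allow attribute access: response.message.content)
    response = ollama.chat(
        model="olmo-3:7b",
        messages=[{"role": "user", "content": "In one sentence, what is OLMo?"}],
    )
    print(response["message"]["content"])

Models tagged "tools" in the results (for example firefunction-v2 or mistral-nemo) advertise function calling; in the Python client that capability is exposed through the optional tools argument to chat().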

© 2025 Ollama Inc.