bert · Ollama Search
Search for models on Ollama.
  • bespoke-minicheck

    A state-of-the-art fact-checking model developed by Bespoke Labs.

    7b

    67.7K Pulls · 17 Tags · Updated 1 year ago

  • dolphin-mixtral

    Uncensored 8x7b and 8x22b fine-tuned models based on the Mixtral mixture-of-experts models that excel at coding tasks. Created by Eric Hartford.

    8x7b 8x22b

    775.1K Pulls · 70 Tags · Updated 12 months ago

  • phi3.5

    A lightweight AI model with 3.8 billion parameters whose performance surpasses similarly sized and larger models.

    3.8b

    382.7K Pulls · 17 Tags · Updated 1 year ago

  • granite3.1-moe

    The IBM Granite 1B and 3B models are long-context mixture-of-experts (MoE) models designed for low-latency use.

    tools 1b 3b

    1.6M Pulls · 33 Tags · Updated 11 months ago

  • granite3-moe

    The IBM Granite 1B and 3B models are the first mixture-of-experts (MoE) models in the Granite family, designed for low-latency use.

    tools 1b 3b

    114.2K Pulls · 33 Tags · Updated 1 year ago

  • V4lentin1879/jina-bert-code-f16

    embedding (see the embedding API sketch after this list)

    59 Pulls · 1 Tag · Updated 8 months ago

  • nezahatkorkmaz/turkish-bert-embedding

    embedding

    8 Pulls · 1 Tag · Updated 2 months ago

  • phi4-mini

    Phi-4-mini brings significant enhancements in multilingual support, reasoning, and mathematics, and adds support for function calling (see the tool-calling sketch after this list).

    tools 3.8b

    650.5K Pulls · 5 Tags · Updated 9 months ago

  • mistral-openorca

    Mistral OpenOrca is a 7 billion parameter model, fine-tuned on top of the Mistral 7B model using the OpenOrca dataset.

    7b

    211.4K Pulls · 17 Tags · Updated 2 years ago

  • tinydolphin

    An experimental 1.1B parameter model trained on the new Dolphin 2.8 dataset by Eric Hartford and based on TinyLlama.

    1.1b

    194.6K Pulls · 18 Tags · Updated 1 year ago

  • orca2

    Orca 2 is built by Microsoft Research and is a fine-tuned version of Meta's Llama 2 models, designed to excel particularly in reasoning.

    7b 13b

    102.5K Pulls · 33 Tags · Updated 2 years ago

  • falcon2

    Falcon2 is an 11B-parameter causal decoder-only model built by TII and trained on over 5T tokens.

    11b

    73.4K Pulls · 17 Tags · Updated 1 year ago

  • kimi-k2-thinking

    Kimi K2 Thinking, Moonshot AI's best open-source thinking model.

    cloud

    14.3K Pulls · 1 Tag · Updated 1 month ago

  • bettercalljason/krishnamurti-mistral

    A fine-tuned Mistral-7B model that engages in dialogue in J. Krishnamurti's distinctive teaching style, responding to philosophical and psychological inquiries with his direct, uncompromising approach to understanding human consciousness, truth, and freedom.

    33 Pulls · 1 Tag · Updated 10 months ago

  • benmxrt/bolt

    tools

    15 Pulls · 1 Tag · Updated 11 months ago

  • blazewild/trek-nepal

    Trek Nepal is a specialized assistant fine-tuned on top of llama3.2-3b-instruct, focused exclusively on providing expert-level support for trekking in Nepal. It has been trained with ~3.8k high-quality examples derived from guidebooks, PDFs, and web-scrap

    tools 3b

    5 Pulls · 1 Tag · Updated 5 months ago

  • bharathreddyjanumpally/cloud-policy-as-code-assistant

    The Cloud Policy-as-Code Assistant converts compliance rules into multi-cloud OPA/Rego policies (AWS/GCP/Azure) targeting Terraform plan JSON, including unit tests, an exceptions template, and rationale, emitted as strict JSON output.

    tools

    1 Tag · Updated 2 hours ago

  • phi4

    Phi-4 is a 14B parameter, state-of-the-art open model from Microsoft.

    14b

    6.6M Pulls · 5 Tags · Updated 11 months ago

  • dolphin3

    Dolphin 3.0 Llama 3.1 8B 🐬 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model for coding, math, agentic workflows, function calling, and general use cases.

    8b

    3.5M Pulls · 5 Tags · Updated 11 months ago

  • deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    671b

    3M Pulls · 5 Tags · Updated 11 months ago
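
Two of the results above, V4lentin1879/jina-bert-code-f16 and nezahatkorkmaz/turkish-bert-embedding, are tagged as embedding models rather than chat models. The sketch below shows one way such a model could be queried once pulled, using Ollama's documented /api/embeddings endpoint from Python. It assumes a local Ollama server on the default port 11434 and uses the turkish-bert model name from the listing purely as an example; verify the endpoint details against the current Ollama API docs.

```python
# Minimal sketch: requesting an embedding from a locally running Ollama server.
# Assumptions: the server listens on the default port 11434 and the model has
# already been pulled, e.g. `ollama pull nezahatkorkmaz/turkish-bert-embedding`.
import requests

OLLAMA_URL = "http://localhost:11434"


def embed(text: str, model: str = "nezahatkorkmaz/turkish-bert-embedding") -> list[float]:
    """Return the embedding vector for `text` via Ollama's /api/embeddings endpoint."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": model, "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]


if __name__ == "__main__":
    vector = embed("Merhaba dünya")
    # The vector length depends on the underlying BERT variant.
    print(len(vector), vector[:5])
```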
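
Several results above carry the tools tag (for example phi4-mini and granite3.1-moe), meaning they support function calling through Ollama's /api/chat endpoint. The sketch below is a minimal, non-authoritative example of such a request; the get_current_weather tool is hypothetical, and the same local-server assumptions as the previous sketch apply.

```python
# Minimal sketch: a function-calling request against phi4-mini, one of the
# "tools"-tagged models above. Assumptions: a local Ollama server on the
# default port and the model already pulled (`ollama pull phi4-mini`).
# The get_current_weather tool is a hypothetical example, not part of Ollama.
import requests

payload = {
    "model": "phi4-mini",
    "stream": False,
    "messages": [{"role": "user", "content": "What is the weather in Kathmandu?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",  # hypothetical tool name
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
resp.raise_for_status()
message = resp.json()["message"]

# A tool-capable model may answer with `tool_calls` instead of plain text;
# each call names the requested function and carries its arguments.
for call in message.get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
print(message.get("content", ""))
```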
