Ollama
m3e · Ollama
Search for models on Ollama.
  • ministral-3

    The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware.

    vision tools cloud 3b 8b 14b

    1.1M  Pulls 16  Tags Updated  4 months ago

  • mistral-large-3

    A general-purpose multimodal mixture-of-experts model for production-grade tasks and enterprise workloads.

    vision tools cloud

    47.6K  Pulls 1  Tag Updated  5 months ago

  • mistral-small3.1

    Building upon Mistral Small 3, Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance.

    vision tools 24b

    736.9K  Pulls 5  Tags Updated  1 year ago

  • mistral-medium-3.5

    Mistral Medium 3.5 is Mistral AI's first flagship model to merge instruction following, reasoning, and coding into a single set of 128B weights.

    vision tools thinking 128b

    11.8K  Pulls 5  Tags Updated  17 hours ago

  • medgemma

    MedGemma is a collection of Gemma 3 variants that are trained for performance on medical text and image comprehension.

    vision 4b 27b

    21.2K  Pulls 9  Tags Updated  2 weeks ago

  • glm-4.7-flash

    As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.

    tools thinking

    1.2M  Pulls 4  Tags Updated  3 months ago

  • mistral-small3.2

    An update to Mistral Small that improves function calling and instruction following and reduces repetition errors.

    vision tools 24b

    1.9M  Pulls 5  Tags Updated  10 months ago

  • granite3.1-moe

    The Granite 1B and 3B models are long-context mixture-of-experts (MoE) models from IBM designed for low-latency usage.

    tools 1b 3b

    2.9M  Pulls 33  Tags Updated  1 year ago

  • mistral-small

    Mistral Small 3 sets a new benchmark in the "small" large language model category, below 70B parameters.

    tools 22b 24b

    3M  Pulls 21  Tags Updated  1 year ago

  • orca-mini

    A general-purpose model ranging from 3 billion parameters to 70 billion, suitable for entry-level hardware.

    3b 7b 13b 70b

    2.9M  Pulls 119  Tags Updated  2 years ago

  • granite3-moe

    The Granite 1B and 3B models are the first mixture-of-experts (MoE) Granite models from IBM, designed for low-latency usage.

    tools 1b 3b

    892.3K  Pulls 33  Tags Updated  1 year ago

  • llama3-chatqa

    A model from NVIDIA based on Llama 3 that excels at conversational question answering (QA) and retrieval-augmented generation (RAG).

    8b 70b

    951.6K  Pulls 35  Tags Updated  1 year ago

  • granite-embedding

    The IBM Granite Embedding 30M and 278M models are text-only dense biencoder embedding models; the 30M model is English-only, while the 278M model serves multilingual use cases.

    embedding 30m 278m

    323K  Pulls 6  Tags Updated  1 year ago

  • lyyyt/m3e-forensic-finetuned

    embedding

    38  Pulls 2  Tags Updated  2 months ago

  • milkey/m3e

    Moka-AI Massive Mixed Embedding

    embedding

    7,246  Pulls 7  Tags Updated  2 years ago

  • twwch/m3e-base

    embedding

    1,536  Pulls 1  Tag Updated  1 year ago

  • yxl/m3e

    Embedding

    embedding

    1,207  Pulls 3  Tags Updated  2 years ago

  • turingdance/m3e-base

    embedding

    183  Pulls 1  Tag Updated  1 year ago

  • davisgao/m3e

    embedding

    91  Pulls 1  Tag Updated  2 years ago

  • zailiang/m3e

    embedding

    49  Pulls 1  Tag Updated  1 year ago
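
The community uploads above are m3e (Moka Massive Mixed Embedding) models; once pulled, Ollama serves them through its embeddings API, and the returned vectors are typically compared with cosine similarity. A minimal sketch, assuming a local Ollama server on the default port 11434 and the `/api/embed` endpoint; the vectors at the bottom are toy stand-ins so the similarity math runs without a server:

```python
import json
import math
import urllib.request


def embed(model: str, text: str, host: str = "http://localhost:11434"):
    """Request an embedding from a running Ollama server (assumption:
    /api/embed accepts {"model", "input"} and returns {"embeddings": [[...]]})."""
    payload = json.dumps({"model": model, "input": text}).encode()
    req = urllib.request.Request(
        f"{host}/api/embed",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embeddings"][0]


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Toy vectors standing in for real embeddings (no server needed):
v1, v2 = [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]
print(round(cosine(v1, v2), 3))  # 0.5
```

With a server running, `embed("milkey/m3e", "some text")` would return a real vector to feed into `cosine`.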

© 2026 Ollama