The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware.
1.1M Pulls 16 Tags Updated 4 months ago
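Since the entries in this listing are served from a local, Ollama-style registry, a minimal sketch of querying an edge-sized model such as the Ministral entry above might look like the following. The endpoint and the "ministral" tag are assumptions; substitute whatever tag your registry actually lists.

```python
# Minimal sketch: one non-streaming completion request against an
# Ollama-style REST API running locally. "ministral" is an assumed tag.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "ministral",          # assumed tag for a Ministral 3 build
        "prompt": "Summarize: edge deployment trades capacity for latency.",
        "stream": False,               # return one JSON object, not a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```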
A general-purpose multimodal mixture-of-experts model for production-grade tasks and enterprise workloads.
47.6K Pulls 1 Tag Updated 5 months ago
Building upon Mistral Small 3, Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance.
736.9K Pulls 5 Tags Updated 1 year ago
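For a vision-capable model like the Mistral Small 3.1 entry above, image input is typically passed as a base64 payload alongside the chat message. This is a hedged sketch against an Ollama-style /api/chat endpoint; the "mistral-small3.1" tag and the image path are assumptions for illustration.

```python
# Hedged sketch: sending an image to a vision-capable chat model.
import base64
import requests

with open("diagram.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "mistral-small3.1",   # assumed tag
        "messages": [{
            "role": "user",
            "content": "Describe what this diagram shows.",
            "images": [image_b64],     # base64-encoded image payload
        }],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```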
Mistral Medium 3.5 is Mistral AI's first flagship model to merge instruction following, reasoning, and coding into a single set of 128B weights.
11.8K Pulls 5 Tags Updated 17 hours ago
MedGemma is a collection of Gemma 3 variants that are trained for performance on medical text and image comprehension.
21.2K Pulls 9 Tags Updated 2 weeks ago
As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.
1.2M Pulls 4 Tags Updated 3 months ago
An update to Mistral Small that improves function calling and instruction following, and reduces repetition errors.
1.9M Pulls 5 Tags Updated 10 months ago
The IBM Granite 1B and 3B models are long-context mixture of experts (MoE) Granite models from IBM designed for low latency usage.
2.9M Pulls 33 Tags Updated 1 year ago
Mistral Small 3 sets a new benchmark in the “small” Large Language Models category below 70B.
3M Pulls 21 Tags Updated 1 year ago
A general-purpose model ranging from 3 billion to 70 billion parameters, suitable for entry-level hardware.
2.9M Pulls 119 Tags Updated 2 years ago
The IBM Granite 1B and 3B models are the first mixture of experts (MoE) Granite models from IBM designed for low latency usage.
892.3K Pulls 33 Tags Updated 1 year ago
A model from NVIDIA based on Llama 3 that excels at conversational question answering (QA) and retrieval-augmented generation (RAG).
951.6K Pulls 35 Tags Updated 1 year ago
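The retrieval-augmented generation pattern the ChatQA entry above describes boils down to: retrieve a relevant passage, then answer from it. Below is a minimal sketch with a toy word-overlap retriever; "llama3-chatqa" is an assumed tag, and a real system would retrieve with a vector index rather than word overlap.

```python
# Hedged RAG sketch: pick the passage sharing the most words with the
# question, then ask the chat model to answer from that context.
import requests

passages = [
    "Granite 3 MoE models are designed for low-latency inference.",
    "Mistral Small 3.1 extends the context window to 128k tokens.",
]
question = "How long a context does Mistral Small 3.1 support?"

def overlap(p: str) -> int:
    # Toy retriever: count words the passage shares with the question.
    return len(set(p.lower().split()) & set(question.lower().split()))

context = max(passages, key=overlap)

resp = requests.post("http://localhost:11434/api/chat", json={
    "model": "llama3-chatqa",          # assumed tag
    "messages": [{"role": "user",
                  "content": f"Context: {context}\n\nQuestion: {question}"}],
    "stream": False,
})
resp.raise_for_status()
print(resp.json()["message"]["content"])
```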
The IBM Granite Embedding 30M and 278M models are text-only dense biencoder embedding models, with 30M available in English only and 278M serving multilingual use cases.
323K Pulls 6 Tags Updated 1 year ago
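A dense bi-encoder like the Granite Embedding entry above encodes each text independently into a vector, so semantic similarity reduces to a cosine between vectors. A minimal sketch follows; the "granite-embedding" tag and the Ollama-style /api/embed endpoint are assumptions.

```python
# Minimal sketch: embed two sentences independently with a bi-encoder
# and compare them with cosine similarity.
import math
import requests

def embed(text: str) -> list[float]:
    r = requests.post("http://localhost:11434/api/embed",
                      json={"model": "granite-embedding",  # assumed tag
                            "input": text})
    r.raise_for_status()
    return r.json()["embeddings"][0]

a = embed("The cat sat on the mat.")
b = embed("A cat is resting on a rug.")

cos = sum(x * y for x, y in zip(a, b)) / (
    math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
print(f"cosine similarity: {cos:.3f}")
```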
Moka-AI Massive Mixed Embedding
7,246 Pulls 7 Tags Updated 2 years ago
Embedding
1,207 Pulls 3 Tags Updated 2 years ago
embedding
91 Pulls 1 Tag Updated 2 years ago