The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware.
Building upon Mistral Small 3, Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance.
A general-purpose multimodal mixture-of-experts model for production-grade tasks and enterprise workloads.
Mistral Small 3 sets a new benchmark in the “small” Large Language Models category below 70B.
The IBM Granite 1B and 3B models are long-context mixture-of-experts (MoE) models designed for low-latency usage.
A general-purpose model ranging from 3 billion parameters to 70 billion, suitable for entry-level hardware.
An update to Mistral Small that improves function calling and instruction following, and reduces repetition errors.
A model from NVIDIA based on Llama 3 that excels at conversational question answering (QA) and retrieval-augmented generation (RAG).
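Retrieval-augmented generation follows a simple loop: score candidate documents against the query, keep the top matches, and prepend them to the prompt that goes to the chat model. A minimal sketch of the retrieval-and-prompt-assembly step, using toy word-overlap scoring in place of a real retriever (all function and variable names here are illustrative, not part of any model's API):

```python
def score(query: str, doc: str) -> int:
    # Toy relevance score: count query words that also appear in the document.
    q = set(query.lower().split())
    return len(q & set(doc.lower().split()))

def build_rag_prompt(query: str, docs: list[str], top_k: int = 2) -> str:
    # Rank documents by the toy score and keep the best top_k.
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    # The assembled prompt is what would be sent to the chat model.
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Llama 3 is a family of open models from Meta.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
    "Bananas are rich in potassium.",
]
prompt = build_rag_prompt("What grounds retrieval-augmented generation answers?", docs)
```

In a real deployment the toy scorer would be replaced by a dense retriever, and the assembled prompt sent to the QA model.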
The IBM Granite Embedding 30M and 278M models are text-only dense biencoder embedding models, with 30M available in English only and 278M serving multilingual use cases.
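Dense biencoder embedding models encode the query and each document independently into vectors; relevance then reduces to a cheap vector comparison such as cosine similarity. A small self-contained sketch, with hand-made 3-d vectors standing in for real model output (a real embedding model would produce much higher-dimensional vectors from text):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cos(theta) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; keys and values are illustrative only.
query_vec = [1.0, 0.0, 1.0]
doc_vecs = {
    "doc_a": [1.0, 0.1, 0.9],  # points in nearly the same direction as the query
    "doc_b": [0.0, 1.0, 0.0],  # nearly orthogonal to the query
}
# Retrieval = pick the document whose vector is closest to the query's.
best = max(doc_vecs, key=lambda k: cosine_similarity(query_vec, doc_vecs[k]))
```

Because documents are embedded once and only the query is encoded at search time, biencoders trade a little accuracy for large-scale, low-latency retrieval.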
The IBM Granite 1B and 3B models are the first mixture-of-experts (MoE) models in the Granite family, designed for low-latency usage.
Moka-AI Massive Mixed Embedding
BGE-M3 is a model from BAAI distinguished by its versatility: it supports multiple retrieval functions, many languages, and input granularities from short sentences to long documents.
Built upon the powerful LLaMa-3 architecture and fine-tuned on an extensive dataset of health information, this model leverages its vast medical knowledge to offer clear, comprehensive answers.
Finance-Llama-8B is a fine-tuned Llama 3.1 8B model trained on 500k examples for tasks like QA, reasoning, sentiment, and NER. It supports multi-turn dialogue and is ideal for financial assistants.