A 24B model that excels at using tools to explore codebases, edit multiple files, and power software engineering agents.
30.2K Pulls 6 Tags Updated yesterday
The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware.
93.5K Pulls 16 Tags Updated 2 days ago
DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.
58.8K Pulls 3 Tags Updated 3 weeks ago
The most powerful vision-language model in the Qwen model family to date.
739.6K Pulls 59 Tags Updated 1 month ago
An update to Mistral Small that improves function calling and instruction following and reduces repetition errors.
881K Pulls 5 Tags Updated 5 months ago
Qwen's flagship vision-language model, a significant leap from the previous Qwen2-VL.
1.1M Pulls 17 Tags Updated 6 months ago
Building upon Mistral Small 3, Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance.
437.2K Pulls 5 Tags Updated 8 months ago
Meta's latest collection of multimodal models.
887.1K Pulls 11 Tags Updated 6 months ago
The most capable model currently available that runs on a single GPU.
28.2M Pulls 29 Tags Updated 1 week ago
A compact and efficient vision-language model, specifically designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more.
548.1K Pulls 5 Tags Updated 9 months ago
Llama 3.2 Vision is a collection of instruction-tuned image reasoning generative models in 11B and 90B sizes.
3.3M Pulls 9 Tags Updated 6 months ago
A series of multimodal LLMs (MLLMs) designed for vision-language understanding.
4.1M Pulls 17 Tags Updated 1 year ago
A new small LLaVA model fine-tuned from Phi 3 Mini.
152.5K Pulls 4 Tags Updated 1 year ago
A LLaVA model fine-tuned from Llama 3 Instruct with better scores in several benchmarks.
2.1M Pulls 4 Tags Updated 1 year ago
moondream2 is a small vision-language model designed to run efficiently on edge devices.
450.1K Pulls 18 Tags Updated 1 year ago
BakLLaVA is a multimodal model consisting of the Mistral 7B base model augmented with the LLaVA architecture.
170.7K Pulls 17 Tags Updated 2 years ago
🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.
11.9M Pulls 98 Tags Updated 1 year ago
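Each of the vision-language models listed above can be served locally and queried over Ollama's HTTP API, with images attached to a chat message as base64 strings. The sketch below only builds the request body for the documented /api/chat schema and does not contact a server; build_vision_chat_payload is a hypothetical helper, and the placeholder bytes stand in for a real image file.

```python
import base64
import json

def build_vision_chat_payload(model, prompt, image_bytes):
    """Build the JSON body for a single-turn vision chat request.

    Hypothetical helper: the field names ("model", "messages", "images",
    "stream") follow Ollama's /api/chat request schema, where images are
    sent as base64-encoded strings alongside the text content.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                "images": [base64.b64encode(image_bytes).decode("ascii")],
            }
        ],
        "stream": False,  # ask for a single JSON response, not a stream
    }

# Placeholder bytes stand in for the contents of a real image file.
payload = build_vision_chat_payload("llava", "Describe this image.", b"\x89PNG placeholder")
print(json.dumps(payload, indent=2))
```

In practice the placeholder bytes would be replaced by the contents of an actual image file, and the payload would be POSTed to a running Ollama instance (by default at http://localhost:11434/api/chat) with any of the vision models above, such as llava or moondream.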