The most powerful vision-language model in the Qwen model family to date.
796.9K Pulls 59 Tags Updated 1 month ago
🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.
11.9M Pulls 98 Tags Updated 1 year ago
A series of multimodal LLMs (MLLMs) designed for vision-language understanding.
4.1M Pulls 17 Tags Updated 1 year ago
Llama 3.2 Vision is a collection of instruction-tuned image reasoning generative models in 11B and 90B sizes.
3.3M Pulls 9 Tags Updated 7 months ago
Qwen's flagship vision-language model, and a significant leap from the previous Qwen2-VL.
1.1M Pulls 17 Tags Updated 7 months ago
A compact and efficient vision-language model, specifically designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more.
560.9K Pulls 5 Tags Updated 9 months ago
moondream2 is a small vision language model designed to run efficiently on edge devices.
454.7K Pulls 18 Tags Updated 1 year ago
Building upon Mistral Small 3, Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance.
446.2K Pulls 5 Tags Updated 8 months ago
DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.
65.3K Pulls 3 Tags Updated 4 weeks ago
A family of open-source models trained on a wide variety of data, surpassing ChatGPT on various benchmarks. Updated to version 3.5-0106.
231.6K Pulls 50 Tags Updated 1 year ago
A highly specialized vision model with more than 2B parameters.
146.6K Pulls 1 Tag Updated 1 week ago
An abliterated variant of Llama 3.2 11B Vision Instruct, from huihui-ai/Llama-3.2-11B-Vision-Instruct-abliterated.
71K Pulls 2 Tags Updated 10 months ago
The most powerful vision-language model in the Qwen3 model family to date.
22.8K Pulls 54 Tags Updated 1 month ago
A GPT-4o-level MLLM for vision, speech, and multimodal live streaming on your phone.
19.6K Pulls 13 Tags Updated 6 months ago
PaliGemma is a versatile and lightweight vision-language model based on open components such as the SigLIP vision model and the Gemma language model.
5,585 Pulls 1 Tag Updated 1 year ago
A lightweight vision model.
5,574 Pulls 1 Tag Updated 1 year ago
Vision encoder for Janus Pro 7B. This model is under testing.
4,635 Pulls 1 Tag Updated 10 months ago
GPH Vision LLM: Transforming Industries through Intelligent Solutions
3,787 Pulls 1 Tag Updated 1 year ago