Llama 3.2 Vision is a collection of instruction-tuned image reasoning generative models in 11B and 90B sizes.
3.7M Pulls 9 Tags Updated 8 months ago
Kimi K2.5 is an open-source, natively multimodal agentic model that integrates vision and language understanding with advanced agentic capabilities, supporting both instant and thinking modes as well as conversational and agentic paradigms.
36.1K Pulls 1 Tag Updated 1 week ago
The most powerful vision-language model in the Qwen model family to date.
1.4M Pulls 59 Tags Updated 3 months ago
DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.
153K Pulls 3 Tags Updated 2 months ago
The flagship vision-language model of the Qwen family, and a significant leap over the previous Qwen2-VL.
1.2M Pulls 17 Tags Updated 8 months ago
A compact and efficient vision-language model, specifically designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more.
714.6K Pulls 5 Tags Updated 11 months ago
Building upon Mistral Small 3, Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance.
574K Pulls 5 Tags Updated 10 months ago
🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.
12.7M Pulls 98 Tags Updated 2 years ago
A series of multimodal LLMs (MLLMs) designed for vision-language understanding.
4.5M Pulls 17 Tags Updated 1 year ago
moondream2 is a small vision language model designed to run efficiently on edge devices.
593.6K Pulls 18 Tags Updated 1 year ago
A model fine-tuned on the Linux Command Library (https://linuxcommandlibrary.com/basic/oneliners).
327 Pulls 1 Tag Updated 1 year ago
DeepSeek-V3.2 is a model that harmonizes high computational efficiency with strong reasoning and agent performance.
26K Pulls 1 Tag Updated 1 month ago
ViperCoder is an advanced developer-focused AI built on a modern code model and optimized for real-world software engineering.
22 Pulls 1 Tag Updated yesterday
Reflection Llama-3.1 70B is (currently) the world's top open-source LLM, trained with a new technique called Reflection-Tuning that teaches an LLM to detect mistakes in its reasoning and correct course.
336 Pulls 1 Tag Updated 1 year ago
A Gemini 2.5 Flash-level MLLM for vision, speech, and full-duplex multimodal live streaming on your phone.
596 Pulls 12 Tags Updated 2 days ago
A highly specialized vision model with more than 2B parameters.
146.9K Pulls 1 Tag Updated 2 months ago
A high-quality vision instruct model.
89 Pulls 1 Tag Updated 1 week ago