Building upon Mistral Small 3, Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance.
36.9K Pulls 5 Tags Updated 11 days ago
🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.
5M Pulls 98 Tags Updated 14 months ago
Llama 3.2 Vision is a collection of instruction-tuned image reasoning generative models in 11B and 90B sizes.
1.8M Pulls 9 Tags Updated 5 months ago
A series of multimodal LLMs (MLLMs) designed for vision-language understanding.
1.1M Pulls 17 Tags Updated 5 months ago
moondream2 is a small vision language model designed to run efficiently on edge devices.
178.9K Pulls 18 Tags Updated 11 months ago
A compact and efficient vision-language model, specifically designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more.
43.1K Pulls 5 Tags Updated 7 weeks ago
A family of open-source models trained on a wide variety of data, surpassing ChatGPT on various benchmarks. Updated to version 3.5-0106.
152.4K Pulls 50 Tags Updated 15 months ago
From huihui-ai/Llama-3.2-11B-Vision-Instruct-abliterated
63.5K Pulls 2 Tags Updated 2 months ago
17.1K Pulls 6 Tags Updated 5 months ago
A lightweight vision model
5,022 Pulls 1 Tag Updated 11 months ago
PaliGemma is a versatile and lightweight vision-language model based on open components such as the SigLIP vision model and the Gemma language model.
4,684 Pulls 1 Tag Updated 7 months ago
Vision encoder for Janus Pro 7B. This model is under testing.
3,639 Pulls 1 Tag Updated 2 months ago
GPH Vision LLM: Transforming Industries through Intelligent Solutions
3,355 Pulls 1 Tag Updated 11 months ago
Lightweight and fast vision model, does a decent job describing photos.
2,377 Pulls 2 Tags Updated 12 months ago
Lightweight and fast vision model, does a decent job captioning photos.
2,215 Pulls 1 Tag Updated 13 months ago
A GPT-4o-level MLLM for vision, speech, and multimodal live streaming on your phone. MiniCPM-o is the latest series of end-side multimodal LLMs (MLLMs), upgraded from MiniCPM-V. From OpenBMB/MiniCPM-o-2_6-gguf.
2,209 Pulls 1 Tag Updated 3 months ago
Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance.
1,004 Pulls 1 Tag Updated 4 weeks ago
llava-NousResearch_Nous-Hermes-2-Vision-GGUF_Q4_0 with function calling
996 Pulls 1 Tag Updated 10 months ago
Qwen2.5VL-7B-Instruct-Q5_K_M is a 7-billion-parameter vision-language model from Alibaba Cloud, designed to process both text and visual inputs, and quantized with Q5_K_M for efficient deployment in Ollama.
881 Pulls 1 Tag Updated 3 weeks ago
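Several entries above advertise quantized builds such as Q5_K_M or Q4_0. As a rough sketch of why quantization matters for local deployment (the bits-per-weight figures used here are common approximations, not official llama.cpp numbers), the weight-storage footprint of a 7B model can be estimated like this:

```python
# Rough memory-footprint estimate for quantized model weights.
# Assumed bits-per-weight (approximations, not official figures):
#   FP16 = 16.0, Q5_K_M ~= 5.69, Q4_0 ~= 4.55

def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in decimal gigabytes (GB)."""
    return n_params * bits_per_weight / 8 / 1e9

fp16 = model_size_gb(7e9, 16.0)   # ~14.0 GB
q5km = model_size_gb(7e9, 5.69)   # ~5.0 GB

print(f"FP16:   {fp16:.1f} GB")
print(f"Q5_K_M: {q5km:.1f} GB")
```

This ignores activation memory and the KV cache, so the actual RAM needed at runtime is higher, but it shows why a ~5 GB Q5_K_M build fits on consumer hardware where a 14 GB FP16 build would not.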