🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.
2.7M Pulls · 98 Tags · Updated 11 months ago
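The pulls/tags/updated format above suggests these entries come from a local model library such as Ollama's. As a minimal sketch of how a multimodal entry like this might be queried, assuming the `ollama` Python package is installed, a local server is running, the entry's tag is `llava`, and `./example.jpg` is a placeholder image path:

```python
# Minimal sketch: sending an image plus a prompt to a multimodal model
# through the ollama Python client.
# Assumptions: the "ollama" package is installed, a local server is running,
# "llava" is the model tag for the entry above, and "./example.jpg" is a
# placeholder path to an image on disk.
import ollama

response = ollama.chat(
    model="llava",                      # assumed model tag
    messages=[{
        "role": "user",
        "content": "Describe this image in one sentence.",
        "images": ["./example.jpg"],    # placeholder image path
    }],
)

print(response["message"]["content"])   # the model's text reply
```

The same pattern would apply to the other entries below; only the model tag changes.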
Llama 3.2 Vision is a collection of instruction-tuned image reasoning generative models in 11B and 90B sizes.
929.9K Pulls · 9 Tags · Updated 2 months ago
A LLaVA model fine-tuned from Llama 3 Instruct that achieves better scores on several benchmarks.
249.9K Pulls · 4 Tags · Updated 8 months ago
BakLLaVA is a multimodal model consisting of the Mistral 7B base model augmented with the LLaVA architecture.
104K Pulls · 17 Tags · Updated 13 months ago
moondream2 is a small vision-language model designed to run efficiently on edge devices.
96.3K Pulls · 18 Tags · Updated 8 months ago
A series of multimodal LLMs (MLLMs) designed for vision-language understanding.
78.4K Pulls · 17 Tags · Updated 2 months ago
A new small LLaVA model fine-tuned from Phi 3 Mini.
69.1K Pulls · 4 Tags · Updated 8 months ago