🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.

7B 13B 34B

234.4K Pulls · Updated 3 months ago

98 Tags

7215dae26124 · 33B
{ "stop": [ "USER:", "ASSISTANT:" ] }
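These stop sequences match the Vicuna-style `USER:`/`ASSISTANT:` turn markers, so generation halts before the model starts writing the next user turn. A minimal sketch of how the same parameters could be set in a custom Ollama Modelfile (the model name `my-llava` is a placeholder):

```
# Hypothetical Modelfile building on the llava base model
FROM llava

# Stop generation at the Vicuna-style turn markers
PARAMETER stop "USER:"
PARAMETER stop "ASSISTANT:"
```

Such a Modelfile would be built and run with `ollama create my-llava -f Modelfile` followed by `ollama run my-llava`.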