🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.

Vision · 7B · 13B · 34B



ed11eda7790d · 30B
{ "stop": [ "[INST]", "[/INST]" ] }