Kimi K2.5 is an open-source, native multimodal agentic model that seamlessly integrates vision and language understanding with advanced agentic capabilities, instant and thinking modes, as well as conversational and agentic paradigms.
280.7K Pulls 1 Tag Updated 3 months ago
DeepSeek-OCR is a vision-language model that can perform token-efficient OCR.
442.9K Pulls 3 Tags Updated 5 months ago
The most powerful vision-language model in the Qwen model family to date.
3.7M Pulls 59 Tags Updated 6 months ago
🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.
14M Pulls 98 Tags Updated 2 years ago
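Vision models like LLaVA are typically queried through Ollama's local chat API, which accepts base64-encoded images alongside the prompt. A minimal sketch of the request payload (the model name and image bytes here are placeholders, not taken from the listing):

```python
import base64
import json


def build_vision_request(model, prompt, image_bytes):
    """Build the JSON body Ollama's /api/chat endpoint expects for an image prompt."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": prompt,
            # Images are passed as base64-encoded strings in the message.
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
        "stream": False,
    }


# Placeholder bytes stand in for a real image file read from disk.
payload = build_vision_request("llava", "Describe this image.", b"\x89PNG...")
print(json.dumps(payload)[:60])
```

POSTing this payload to `http://localhost:11434/api/chat` (with the model pulled) returns the model's description of the image.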
Llama 3.2 Vision is a collection of instruction-tuned image reasoning generative models in 11B and 90B sizes.
4.5M Pulls 9 Tags Updated 11 months ago
A series of multimodal LLMs (MLLMs) designed for vision-language understanding.
5.2M Pulls 17 Tags Updated 1 year ago
Qwen's flagship vision-language model, a significant leap from the previous Qwen2-VL.
1.9M Pulls 17 Tags Updated 11 months ago
A compact and efficient vision-language model, specifically designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more.
909K Pulls 5 Tags Updated 1 year ago
Building upon Mistral Small 3, Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance.
739.8K Pulls 5 Tags Updated 1 year ago
moondream2 is a small vision language model designed to run efficiently on edge devices.
1.2M Pulls 18 Tags Updated 2 years ago
MedGemma is a collection of Gemma 3 variants that are trained for performance on medical text and image comprehension.
26.9K Pulls 9 Tags Updated 3 weeks ago
A family of open-source models trained on a wide variety of data, surpassing ChatGPT on various benchmarks. Updated to version 3.5-0106.
1.1M Pulls 50 Tags Updated 2 years ago
Fully uncensored local AI for coding, automation, vision tasks, and direct final answers, built to reduce unnecessary thinking output and deliver complete responses.
276 Pulls 1 Tag Updated 6 days ago
Qwen3.6-35B-A3B uncensored by HauhauCS. 0/465 refusals. Patched to have vision support; fully functional, 100% of what the original authors intended, just without the refusals. These are meant to be the best lossless uncensored models out there.
18.9K Pulls 5 Tags Updated 3 weeks ago
llmfan46/gemma-4-26B-A4B-it-uncensored-heretic - quantized to q4_K_M from HF with vision capability retained
2,085 Pulls 1 Tag Updated 3 weeks ago
German vision OCR based on Qwen3.5. Compact, local, open source. Turns a German invoice/letter/form image into strictly validated JSON. 100% JSON validity and 0% hallucination on 200+ real (anonymized) German invoices.
591 Pulls 1 Tag Updated 3 weeks ago
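Claims like "strictly validated JSON" usually mean the model's reply is parsed and checked against a fixed set of fields before use. A minimal sketch of such a validation step (the field names `invoice_number`, `date`, and `total` are illustrative assumptions, not documented by the model):

```python
import json


def parse_invoice_reply(reply_text):
    """Validate that a model reply is parseable JSON containing required fields."""
    data = json.loads(reply_text)  # raises ValueError on malformed JSON
    required = {"invoice_number", "date", "total"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data


# Example of a well-formed reply a model might return for these fields:
sample = '{"invoice_number": "RE-2024-001", "date": "2024-03-01", "total": "119.00"}'
print(parse_invoice_reply(sample)["total"])
```

Rejecting replies that fail this check (and re-prompting) is one straightforward way to keep downstream consumers of the extracted JSON safe from malformed output.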
Fine-tuned Qwen3.5-9B with distilled reasoning and full vision support. 883 tensors (427 text + 441 vision + 15 MTP) — vision tower preserved byte-for-byte from base via llama-export-lora merge.
397 Pulls 1 Tag Updated 4 weeks ago
German vision OCR. Engineered and optimized. Local. Open source. From German
128 Pulls 1 Tag Updated 2 weeks ago
A highly specialized vision model with more than 2B parameters.
147.1K Pulls 1 Tag Updated 5 months ago