Mistral Medium 3.5 is Mistral AI's first flagship model to merge instruction-following, reasoning, and coding in a single set of 128B weights.
8,772 Pulls 5 Tags Updated 5 days ago
NVIDIA Nemotron 3 Nano Omni is a multimodal large language model that unifies video, audio, image, and text understanding to support enterprise-grade Q&A, summarization, transcription, and document intelligence workflows.
353.1K Pulls 4 Tags Updated 6 days ago
DeepSeek-V4-Flash is a preview of the DeepSeek-V4 series, a Mixture-of-Experts model with 284B total parameters and 13B activated, built for efficient reasoning across a 1M-token context window.
41.8K Pulls 1 Tag Updated 1 week ago
DeepSeek-V4-Pro is a frontier Mixture-of-Experts model with a 1M-token context window and three reasoning modes.
32.5K Pulls 1 Tag Updated 1 week ago
Laguna XS.2 is a 33B-total-parameter Mixture-of-Experts model with 3B activated parameters per token, designed for agentic coding and long-horizon work on a local machine.
5,253 Pulls 7 Tags Updated 6 days ago
Kimi K2.6 is an open-source, native multimodal agentic model that advances practical capabilities in long-horizon coding, coding-driven design, proactive autonomous execution, and swarm-based task orchestration.
79.8K Pulls 1 Tag Updated 1 week ago
Qwen3.6 delivers substantial upgrades in agentic coding and thinking preservation over previous Qwen models.
795K Pulls 22 Tags Updated 1 week ago
MedGemma 1.5 4B is an updated version of the MedGemma 4B model.
11.5K Pulls 5 Tags Updated 2 weeks ago
MedGemma is a collection of Gemma 3 variants that are trained for performance on medical text and image comprehension.
19.8K Pulls 9 Tags Updated 2 weeks ago
GLM-5.1 is Z.ai's next-generation flagship model for agentic engineering, with significantly stronger coding capabilities than its predecessor. It achieves state-of-the-art performance on SWE-Bench Pro and outperforms GLM-5 by a wide margin.
129.4K Pulls 1 Tag Updated 3 weeks ago
An open 30B MoE model from NVIDIA with 3B activated parameters that delivers strong reasoning and agentic capabilities.
109.8K Pulls 3 Tags Updated 1 month ago
MiniMax's M2-series model for coding, agentic workflows, and professional productivity.
107.3K Pulls 1 Tag Updated 1 month ago
Gemma 4 models are designed to deliver frontier-level performance at each size. They are well-suited for reasoning, agentic workflows, coding, and multimodal understanding.
6.7M Pulls 29 Tags Updated 2 weeks ago
LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.
1.1M Pulls 6 Tags Updated 2 months ago
NVIDIA Nemotron 3 Super is a 120B open MoE model activating just 12B parameters to deliver maximum compute efficiency and accuracy for complex multi-agent applications.
271K Pulls 7 Tags Updated 1 month ago
Qwen 3.5 is a family of open-source multimodal models that delivers exceptional utility and performance.
8.1M Pulls 58 Tags Updated 1 month ago
A strong reasoning and agentic model from Z.ai with 744B total parameters (40B active), built for complex systems engineering and long-horizon tasks.
200.5K Pulls 1 Tag Updated 2 months ago
MiniMax-M2.5 is a state-of-the-art large language model designed for real-world productivity and coding tasks.
171.3K Pulls 1 Tag Updated 2 months ago
Qwen3-Coder-Next is a coding-focused language model from Alibaba's Qwen team, optimized for agentic coding workflows and local development.
1.1M Pulls 4 Tags Updated 2 months ago