LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.
936.5K Pulls 6 Tags Updated 2 weeks ago
Qwen 3.5 is a family of open-source multimodal models that delivers exceptional utility and performance.
1.3M Pulls 30 Tags Updated 1 week ago
Qwen3-Coder-Next is a coding-focused language model from Alibaba's Qwen team, optimized for agentic coding workflows and local development.
758.9K Pulls 4 Tags Updated 1 month ago
LFM2.5 is a new family of hybrid models designed for on-device deployment.
955.9K Pulls 5 Tags Updated 1 month ago
The most powerful vision-language model in the Qwen model family to date.
2M Pulls 59 Tags Updated 4 months ago
As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.
419K Pulls 4 Tags Updated 1 month ago
The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware.
573.2K Pulls 16 Tags Updated 2 months ago
24B model that excels at using tools to explore codebases, edit multiple files, and power software engineering agents.
496.5K Pulls 6 Tags Updated 2 months ago
Granite 4 models feature improved instruction following (IF) and tool-calling capabilities, making them more effective in enterprise applications.
790.4K Pulls 17 Tags Updated 4 months ago
Rnj-1 is a family of 8B parameter open-weight, dense models trained from scratch by Essential AI, optimized for code and STEM with capabilities on par with SOTA open-weight models.
351.2K Pulls 6 Tags Updated 2 months ago
The first installment in the Qwen3-Next series, offering strong parameter efficiency and inference speed.
369.2K Pulls 10 Tags Updated 3 months ago
Nemotron 3 Nano: a new standard for efficient, open, and intelligent agentic models.
205.1K Pulls 6 Tags Updated 2 months ago
GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture.
74.6K Pulls 3 Tags Updated 1 month ago
Olmo is a series of open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.
120.1K Pulls 10 Tags Updated 2 months ago
123B model that excels at using tools to explore codebases, edit multiple files, and power software engineering agents.
107K Pulls 6 Tags Updated 2 months ago
FunctionGemma is a specialized version of Google's Gemma 3 270M model fine-tuned explicitly for function calling.
84.4K Pulls 4 Tags Updated 2 months ago
NVIDIA Nemotron 3 Super is a 120B open MoE model activating just 12B parameters to deliver maximum compute efficiency and accuracy for complex multi-agent applications.
126 Pulls 7 Tags Updated an hour ago
gpt-oss-safeguard-20b and gpt-oss-safeguard-120b are safety reasoning models built upon gpt-oss.
84.2K Pulls 3 Tags Updated 4 months ago
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
23.4M Pulls 58 Tags Updated 5 months ago
OpenAI’s open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
7.6M Pulls 5 Tags Updated 5 months ago