LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.
972K Pulls 6 Tags Updated 3 weeks ago
NVIDIA Nemotron 3 Super is a 120B open MoE model activating just 12B parameters to deliver maximum compute efficiency and accuracy for complex multi-agent applications.
67.7K Pulls 7 Tags Updated 1 week ago
Qwen 3.5 is a family of open-source multimodal models that delivers exceptional utility and performance.
2.5M Pulls 30 Tags Updated 2 weeks ago
Qwen3-Coder-Next is a coding-focused language model from Alibaba's Qwen team, optimized for agentic coding workflows and local development.
878.4K Pulls 4 Tags Updated 1 month ago
LFM2.5 is a new family of hybrid models designed for on-device deployment.
997.6K Pulls 5 Tags Updated 1 month ago
As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.
740.6K Pulls 4 Tags Updated 1 month ago
The most capable vision-language model in the Qwen model family to date.
2.3M Pulls 59 Tags Updated 4 months ago
A 24B model that excels at using tools to explore codebases, edit multiple files, and power software engineering agents.
666.2K Pulls 6 Tags Updated 3 months ago
The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware.
673.8K Pulls 16 Tags Updated 3 months ago
Granite 4 models feature improved instruction-following (IF) and tool-calling capabilities, making them more effective in enterprise applications.
887.3K Pulls 17 Tags Updated 4 months ago
The first installment in the Qwen3-Next series, offering strong parameter efficiency and inference speed.
411.3K Pulls 10 Tags Updated 3 months ago
Rnj-1 is a family of 8B parameter open-weight, dense models trained from scratch by Essential AI, optimized for code and STEM with capabilities on par with SOTA open-weight models.
376.9K Pulls 6 Tags Updated 3 months ago
GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture.
141.9K Pulls 3 Tags Updated 1 month ago
Nemotron-3-Nano sets a new standard for efficient, open, and intelligent agentic models, now updated with a 4B-parameter variant.
251.1K Pulls 9 Tags Updated 3 days ago
Olmo is a series of open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.
158.3K Pulls 10 Tags Updated 3 months ago
A 123B model that excels at using tools to explore codebases, edit multiple files, and power software engineering agents.
132.1K Pulls 6 Tags Updated 3 months ago
FunctionGemma is a specialized version of Google's Gemma 3 270M model fine-tuned explicitly for function calling.
101.6K Pulls 4 Tags Updated 3 months ago
gpt-oss-safeguard-20b and gpt-oss-safeguard-120b are safety reasoning models built upon gpt-oss.
97.7K Pulls 3 Tags Updated 4 months ago
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
24.5M Pulls 58 Tags Updated 5 months ago
OpenAI’s open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
7.9M Pulls 5 Tags Updated 5 months ago