Phi-4-reasoning and Phi-4-reasoning-plus are 14-billion-parameter open-weight reasoning models that rival much larger models on complex reasoning tasks.
1.4M Pulls 9 Tags Updated 11 months ago
Phi-4-mini-reasoning is a lightweight open model that balances efficiency with advanced reasoning ability.
228.5K Pulls 5 Tags Updated 11 months ago
Phi-3 is a family of lightweight 3B (Mini) and 14B (Medium) state-of-the-art open models by Microsoft.
17M Pulls 72 Tags Updated 1 year ago
Phi-4 is a 14B parameter, state-of-the-art open model from Microsoft.
7.4M Pulls 5 Tags Updated 1 year ago
Phi-4-mini brings significant enhancements in multilingual support, reasoning, and mathematics, and adds support for function calling.
1M Pulls 5 Tags Updated 1 year ago
Phi-2: a 2.7B language model by Microsoft Research that demonstrates outstanding reasoning and language understanding capabilities.
1.3M Pulls 18 Tags Updated 2 years ago
A lightweight AI model with 3.8 billion parameters, with performance that surpasses similarly sized and larger models.
800.8K Pulls 17 Tags Updated 1 year ago
Code generation model based on Code Llama.
739.5K Pulls 49 Tags Updated 2 years ago
2.7B uncensored Dolphin model by Eric Hartford, based on the Phi language model by Microsoft Research.
1.4M Pulls 15 Tags Updated 2 years ago
A companion assistant trained in philosophy, psychology, and personal relationships. Based on Mistral.
758K Pulls 49 Tags Updated 2 years ago
A 3.8B model fine-tuned on a private high-quality synthetic dataset for information extraction, based on Phi-3.
416.1K Pulls 17 Tags Updated 1 year ago
A new small LLaVA model fine-tuned from Phi 3 Mini.
252.8K Pulls 4 Tags Updated 1 year ago
🪐 A family of small models with 135M, 360M, and 1.7B parameters, trained on a new high-quality dataset.
1.5M Pulls 94 Tags Updated 1 year ago
A 111-billion-parameter model optimized for demanding enterprises that require fast, secure, and high-quality AI.
189.9K Pulls 5 Tags Updated 1 year ago
A top-performing mixture of experts model, fine-tuned with high-quality data.
420K Pulls 18 Tags Updated 2 years ago
4 Pulls 1 Tag Updated 1 month ago
A fine-tuned version of Gemma 3 1B that translates sentences between English and Morse code.
33 Pulls 1 Tag Updated 11 months ago
A model for improving German.
257 Pulls 1 Tag Updated 1 year ago
A fine-tuned Llama 3 model trained on a general philosophy encyclopedia, designed to support enriching philosophical dialogue; further fine-tuned on Reddit conversations for a natural conversational feel.
231 Pulls 1 Tag Updated 1 year ago
Our shockrates base model, now pretrained on a philosophy encyclopedia before being trained on Plato's work.
116 Pulls 1 Tag Updated 1 year ago