Phi-3 is a family of lightweight 3B (Mini) and 14B (Medium) state-of-the-art open models by Microsoft.
15.2M Pulls 72 Tags Updated 1 year ago
Phi-4 is a 14B parameter, state-of-the-art open model from Microsoft.
6.6M Pulls 5 Tags Updated 11 months ago
Phi-4-reasoning and Phi-4-reasoning-plus are 14-billion-parameter open-weight reasoning models that rival much larger models on complex reasoning tasks.
862.6K Pulls 9 Tags Updated 7 months ago
Phi-2: a 2.7B language model by Microsoft Research that demonstrates outstanding reasoning and language understanding capabilities.
804.7K Pulls 18 Tags Updated 1 year ago
Phi-4-mini brings significant enhancements in multilingual support, reasoning, and mathematics, and adds support for function calling.
649.6K Pulls 5 Tags Updated 9 months ago
A lightweight AI model with 3.8 billion parameters that outperforms similarly sized and larger models.
382.5K Pulls 17 Tags Updated 1 year ago
Code generation model based on Code Llama.
124.9K Pulls 49 Tags Updated 1 year ago
Phi-4-mini-reasoning is a lightweight open model that balances efficiency with advanced reasoning ability.
98.7K Pulls 5 Tags Updated 7 months ago
2.7B uncensored Dolphin model by Eric Hartford, based on the Phi language model by Microsoft Research.
852.3K Pulls 15 Tags Updated 1 year ago
A new small LLaVA model fine-tuned from Phi 3 Mini.
155.3K Pulls 4 Tags Updated 1 year ago
A companion assistant trained in philosophy, psychology, and personal relationships. Based on Mistral.
138.4K Pulls 49 Tags Updated 2 years ago
A 3.8B model fine-tuned on a private high-quality synthetic dataset for information extraction, based on Phi-3.
67.3K Pulls 17 Tags Updated 1 year ago
🪐 A family of small models with 135M, 360M, and 1.7B parameters, trained on a new high-quality dataset.
573.8K Pulls 94 Tags Updated 1 year ago
A 111-billion-parameter model optimized for demanding enterprises that require fast, secure, and high-quality AI.
83.5K Pulls 5 Tags Updated 9 months ago
A top-performing mixture of experts model, fine-tuned with high-quality data.
57.9K Pulls 18 Tags Updated 1 year ago
A model for improving German.
247 Pulls 1 Tag Updated 1 year ago
A fine-tuned Llama 3 model trained on a general philosophy encyclopedia, designed to support enriching philosophical dialogue; further fine-tuned on Reddit conversations for a more natural conversational feel.
190 Pulls 1 Tag Updated 1 year ago
Our shockrates base model, now pretrained on a philosophy encyclopedia before being trained on Plato's works.
112 Pulls 1 Tag Updated 1 year ago
55 Pulls 1 Tag Updated 1 year ago
A fine-tuned version of Gemma 3 1B that translates sentences between English and Morse code.
26 Pulls 1 Tag Updated 7 months ago