Phi-3 is a family of lightweight 3B (Mini) and 14B (Medium) state-of-the-art open models by Microsoft.
10.2M Pulls 72 Tags Updated 1 year ago
Phi-4 is a 14B parameter, state-of-the-art open model from Microsoft.
4.8M Pulls 5 Tags Updated 8 months ago
Phi-2: a 2.7B language model by Microsoft Research that demonstrates outstanding reasoning and language understanding capabilities.
711.3K Pulls 18 Tags Updated 1 year ago
Phi-4-reasoning and Phi-4-reasoning-plus are 14-billion-parameter open-weight reasoning models that rival much larger models on complex reasoning tasks.
474.4K Pulls 9 Tags Updated 4 months ago
Phi-4-mini brings significant enhancements in multilingual support, reasoning, and mathematics, and adds support for function calling.
393.6K Pulls 5 Tags Updated 6 months ago
A lightweight AI model with 3.8 billion parameters whose performance surpasses that of similarly sized and larger models.
327.7K Pulls 17 Tags Updated 1 year ago
Code generation model based on Code Llama.
99.4K Pulls 49 Tags Updated 1 year ago
Phi-4-mini-reasoning is a lightweight open model that balances efficiency with advanced reasoning ability.
58.1K Pulls 5 Tags Updated 4 months ago
2.7B uncensored Dolphin model by Eric Hartford, based on the Phi language model by Microsoft Research.
381.2K Pulls 15 Tags Updated 1 year ago
A new small LLaVA model fine-tuned from Phi 3 Mini.
123.8K Pulls 4 Tags Updated 1 year ago
A companion assistant trained in philosophy, psychology, and personal relationships. Based on Mistral.
108K Pulls 49 Tags Updated 1 year ago
A 3.8B model fine-tuned on a private high-quality synthetic dataset for information extraction, based on Phi-3.
41.8K Pulls 17 Tags Updated 1 year ago
🪐 A family of small models with 135M, 360M, and 1.7B parameters, trained on a new high-quality dataset.
449.8K Pulls 94 Tags Updated 1 year ago
A 111-billion-parameter model optimized for demanding enterprises that require fast, secure, and high-quality AI.
56.1K Pulls 5 Tags Updated 6 months ago
A top-performing mixture of experts model, fine-tuned with high-quality data.
35.6K Pulls 18 Tags Updated 1 year ago
A model for improving German.
220 Pulls 1 Tag Updated 1 year ago
A fine-tuned Llama 3 model trained on a general philosophy encyclopedia, designed for enriching philosophical dialogue; further fine-tuned on Reddit conversations to give the dialogue a natural conversational feel.
161 Pulls 1 Tag Updated 1 year ago
Our shockrates base model, now pretrained on a philosophy encyclopedia before being trained on Plato's works.
107 Pulls 1 Tag Updated 1 year ago
A fine-tuned version of Gemma 3 1B that translates sentences between English and Morse code.
15 Pulls 1 Tag Updated 5 months ago