The "Home" model is a fine-tuning of the StableLM-Zephyr-3B model. It achieves a score of 97.11% for JSON function calling accuracy.
307K Pulls 7 Tags Updated 1 year ago
The "Home" model is a fine-tuning of the Phi-2 model from Microsoft. The model can control devices in the user's smart home as well as perform basic question answering.
5,107 Pulls 6 Tags Updated 1 year ago
300 Pulls 3 Tags Updated 1 year ago
This is not the ablation version. Kimi-K2-Instruct is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters.
9,325 Pulls 4 Tags Updated 4 months ago
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
7,125 Pulls 2 Tags Updated 10 months ago
2,823 Pulls 5 Tags Updated 8 months ago
from acon96 / stablehome-multilingual-experimental
95 Pulls 1 Tag Updated 1 year ago
meta-llama/Llama-3.1-8B fine-tuned on the soniawmeyer/travel-conversations-finetuning dataset.
3 Pulls 1 Tag Updated 9 months ago
Continued fine-tuning of https://huggingface.co/meta-llama/Llama-3.2-3B on a highly curated 1.5B-token Malaysian instruction dataset.
182 Pulls 13 Tags Updated 1 year ago
https://huggingface.co/meta-llama/Llama-Guard-3-8B
38 Pulls 1 Tag Updated 1 year ago
Deductive Reasoning Qwen 32B is a reinforcement fine-tune of Qwen 2.5 32B Instruct for solving challenging deduction problems.
99 Pulls 7 Tags Updated 9 months ago