The "Home" model is a fine tuning of the StableLM-Zephyr-3B model. It achieves a score of 97.11% score for JSON function calling accuracy.
148.9K Pulls 7 Tags Updated 1 year ago
The "Home" model is a fine tuning of the Phi-2 model from Microsoft. The model is able to control devices in the user's smart home as well as perform basic question and answering.
4,214 Pulls 6 Tags Updated 1 year ago
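Both "Home" fine-tunes are meant to be driven programmatically: they respond with a JSON function call that a smart-home integration can execute. Below is a minimal sketch of querying such a model through Ollama's local REST API; the model tag `home-3b` is a hypothetical placeholder, so substitute whatever tag you actually pulled.

```python
import requests

# Ollama's local REST endpoint (default port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

# Hypothetical tag; replace with the tag you pulled via `ollama pull <tag>`.
MODEL = "home-3b"

prompt = "Turn off the kitchen lights."

resp = requests.post(
    OLLAMA_URL,
    json={"model": MODEL, "prompt": prompt, "stream": False},
    timeout=120,
)
resp.raise_for_status()

# The Home models are trained to emit a Home Assistant-style service call,
# e.g. {"service": "light.turn_off", "target_device": "light.kitchen"}.
print(resp.json()["response"])
```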
282 Pulls 3 Tags Updated 1 year ago
This is not the abliterated version. Kimi-K2-Instruct is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters.
1,602 Pulls 4 Tags Updated 1 month ago
from acon96/stablehome-multilingual-experimental
91 Pulls 1 Tag Updated 1 year ago
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token (see the sketch after this entry).
7,092 Pulls 2 Tags Updated 7 months ago
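For context on the parameter figures above: a sparse MoE model routes each token through only a few experts, so per-token compute scales with the activated count rather than the total. A quick back-of-the-envelope check, using only the two numbers quoted in this listing:

```python
# Back-of-the-envelope MoE accounting using the two figures quoted above.
total_params = 671e9   # all experts plus shared weights
active_params = 37e9   # weights actually used for any single token

# Each token touches only a small slice of the full network.
print(f"Active fraction per token: {active_params / total_params:.1%}")  # ~5.5%
```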
1,921 Pulls 5 Tags Updated 5 months ago
meta-llama/Llama-3.1-8B fine-tuned on the soniawmeyer/travel-conversations-finetuning dataset.
3 Pulls 1 Tag Updated 5 months ago
Continued fine-tuning of https://huggingface.co/meta-llama/Llama-3.2-3B on a highly curated 1.5B-token Malaysian instruction dataset.
141 Pulls 13 Tags Updated 9 months ago
https://huggingface.co/meta-llama/Llama-Guard-3-8B
28 Pulls 1 Tag Updated 1 year ago
Deductive Reasoning Qwen 32B is a reinforcement fine-tune of Qwen 2.5 32B Instruct, trained to solve challenging deduction problems.
96 Pulls 7 Tags Updated 5 months ago