The "Home" model is a fine tuning of the StableLM-Zephyr-3B model. It achieves a score of 97.11% score for JSON function calling accuracy.
Home 3B
AI Model Specially trained to control Home Assistant devices.
This new version is based on StableLM-Zephyr-3B and achieves 97.11% accuracy for JSON function calling on the test dataset it was trained on.
This version supports blind, light, garage_door, media_player, fan, lock, climate, switch, vacuum, todo, input_select and timer entity types.
And now with multi-language support: English, German, Spanish, and French.
It needs the Llama Conversation Integration to work.
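For reference, a "function call" from this model is a short JSON object naming the Home Assistant service to invoke and the target device. The exact schema below is an illustration of that style, not something stated on this page:

```json
{"service": "light.turn_on", "target_device": "light.kitchen"}
```

The integration is responsible for parsing this block out of the model's chat reply and executing the corresponding service call.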
Model
The fine-tuning dataset is a combination of the Cleaned Stanford Alpaca Dataset and a custom synthetic dataset designed to teach the model function calling based on the device information in the context.
The model can be found on HuggingFace:
3B v3 (Based on StableLM-Zephyr-3B): https://huggingface.co/acon96/Home-3B-v3-GGUF
The model is quantized using Llama.cpp to enable running it in the very low-resource environments common to Home Assistant installations, such as Raspberry Pis.
The model can be used as an “instruct” type model using the Zephyr prompt format. The system prompt is used to provide information about the state of the Home Assistant installation including available devices and callable services.
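As a concrete sketch, the quantized GGUF can be driven directly with the `llama-cpp-python` bindings. The special tokens follow the Zephyr chat format; the file name and the device/service listing in the system prompt are illustrative assumptions, so check the model card for the exact template your download expects:

```python
from llama_cpp import Llama

# Illustrative file name; use the GGUF you downloaded from HuggingFace.
llm = Llama(model_path="Home-3B-v3.q4_k_m.gguf", n_ctx=2048)

# Zephyr-style chat template. The device/service listing below is a guess
# at the shape of the system prompt, not the exact training format.
prompt = (
    "<|system|>\n"
    "You are a helpful AI Assistant that controls the devices in a house.\n"
    "Services: light.turn_on(), light.turn_off(), lock.lock(), lock.unlock()\n"
    "Devices:\n"
    "light.kitchen 'Kitchen Light' = off\n"
    "lock.front_door 'Front Door' = locked<|endoftext|>\n"
    "<|user|>\n"
    "Turn on the kitchen light<|endoftext|>\n"
    "<|assistant|>\n"
)

output = llm(prompt, max_tokens=128, stop=["<|endoftext|>"])
print(output["choices"][0]["text"])  # natural-language reply + JSON service call
```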
Due to the mix of data used during fine tuning, the 3B model is also capable of basic instruct and QA tasks.
Synthetic Dataset
The synthetic dataset is aimed at covering basic day-to-day operations in Home Assistant, such as turning devices on and off.
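To make that concrete, a single synthetic example pairs a device-state snapshot with a user request and the expected service call. The sample below is hypothetical, written to match the description above rather than copied from the dataset:

```python
# Hypothetical synthetic training example (not an actual dataset record).
example = {
    "context": "Devices:\nfan.bedroom 'Bedroom Fan' = off",  # device info in the prompt
    "request": "It's getting warm in the bedroom",           # user turn
    "response": 'Turning on the bedroom fan for you.\n'
                '{"service": "fan.turn_on", "target_device": "fan.bedroom"}',
}
```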
Home Assistant Component
In order to integrate with Home Assistant, we provide a custom_component that exposes the locally running LLM as a "conversation agent". The agent can be interacted with through a chat interface, and it can integrate with Speech-to-Text and Text-to-Speech add-ons to enable interacting with the model by speaking.
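Once configured, the agent can also be exercised over Home Assistant's standard REST API instead of the chat UI. This is a minimal sketch assuming a long-lived access token and a placeholder agent id; the `/api/conversation/process` endpoint is part of Home Assistant itself, not this component:

```python
import requests

HA_URL = "http://homeassistant.local:8123"  # adjust to your installation
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # Profile -> Long-lived access tokens

resp = requests.post(
    f"{HA_URL}/api/conversation/process",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "text": "turn off the kitchen light",
        "language": "en",
        # agent_id is optional; omit it to use your default assistant.
        # "agent_id": "conversation.llama_conversation",  # placeholder id
    },
)
print(resp.json())  # the agent's text response payload
```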
Installing with HACS
Add the repository to HACS as a custom repository and install the component from its download page.
Installing Manually
1. Ensure you have the Samba, SSH, FTP, or another add-on installed that gives you access to the `config` folder.
2. If there is not already a `custom_components` folder, create one now.
3. Copy the `custom_components/llama_conversation` folder from this repo to `config/custom_components/llama_conversation` on your Home Assistant machine (a minimal copy sketch follows this list).
4. Restart Home Assistant using the “Developer Tools” tab -> Services -> Run `homeassistant.restart`.
5. The “LLaMA Conversation” integration should show up in the “Devices” section now.
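For the copy step above, something like the following works if you have cloned the component's repository locally and your `config` folder is reachable as a local path (both paths here are assumptions to adapt):

```python
import shutil

# Assumed paths: a local clone of the component repo and a mounted HA config dir.
SRC = "home-llm/custom_components/llama_conversation"
DST = "/config/custom_components/llama_conversation"

# dirs_exist_ok lets you re-copy over an existing install (Python 3.8+).
shutil.copytree(SRC, DST, dirs_exist_ok=True)
```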
Configuring the component as a Conversation Agent
NOTE: ANY DEVICES THAT YOU SELECT TO BE EXPOSED TO THE MODEL WILL BE ADDED AS CONTEXT AND POTENTIALLY HAVE THEIR STATE CHANGED BY THE MODEL. ONLY EXPOSE DEVICES THAT YOU ARE OK WITH THE MODEL MODIFYING THE STATE OF, EVEN IF IT IS NOT WHAT YOU REQUESTED. THE MODEL MAY OCCASIONALLY HALLUCINATE AND ISSUE COMMANDS TO THE WRONG DEVICE! USE AT YOUR OWN RISK.
In order to utilize the conversation agent in Home Assistant:
1. Navigate to “Settings” -> “Voice Assistants”.
2. Select “+ Add Assistant”.
3. Name the assistant whatever you want.
4. Select the “Conversation Agent” that we created previously.
5. If using STT or TTS, configure these now.
6. Return to the “Overview” dashboard and select the chat icon in the top left.
7. From here you can submit queries to the AI agent.
In order for any entities to be available to the agent, you must “expose” them first:
1. Navigate to “Settings” -> “Voice Assistants” -> “Expose” tab.
2. Select “+ Expose Entities” in the bottom right.
3. Check any entities you would like to be exposed to the conversation agent.