The "Home" model is a fine-tune of the StableLM-Zephyr-3B model. It achieves 97.11% accuracy on JSON function calling.



Home 3B

An AI model specially trained to control Home Assistant devices.

This new version is based on StableLM-Zephyr-3B. It achieves 97.11% accuracy for JSON function calling on the test split of the dataset it was trained on.

This version supports blind, light, garage_door, media_player, fan, lock, climate, switch, vacuum, todo, input_select and timer entity types.
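As a rough illustration of what JSON function calling looks like in practice, the snippet below parses a service call out of a model completion. The exact output schema (the `service`/`target_device` keys and the ```` ```homeassistant ```` fence) is an illustrative assumption here, not a documented contract of this model.

```python
import json

# Hypothetical completion from the model; schema is assumed for illustration.
completion = (
    "Turning on the kitchen light for you.\n"
    "```homeassistant\n"
    '{"service": "light.turn_on", "target_device": "light.kitchen"}\n'
    "```"
)

def extract_calls(text: str) -> list:
    """Pull JSON service calls out of a ```homeassistant code block."""
    calls = []
    in_block = False
    for line in text.splitlines():
        if line.strip().startswith("```"):
            # Enter the block on the opening fence, leave on the closing one.
            in_block = line.strip() == "```homeassistant"
            continue
        if in_block and line.strip():
            calls.append(json.loads(line))
    return calls

calls = extract_calls(completion)
```

The integration performs this parsing for you; the sketch only shows the shape of the data.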

This version also adds multi-language support: English, German, Spanish, and French.

It requires the Llama Conversation integration to work.

Model

The fine tuning dataset is a combination of the Cleaned Stanford Alpaca Dataset as well as a custom synthetic dataset designed to teach the model function calling based on the device information in the context.

The model can be found on HuggingFace:
3B v3 (Based on StableLM-Zephyr-3B): https://huggingface.co/acon96/Home-3B-v3-GGUF

The model is quantized using Llama.cpp to enable running it in the low-resource environments common to Home Assistant installations, such as Raspberry Pis.

The model can be used as an “instruct” type model using the Zephyr prompt format. The system prompt is used to provide information about the state of the Home Assistant installation including available devices and callable services.
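A minimal sketch of assembling a Zephyr-style prompt is shown below. The exact end-of-turn token and the system-prompt wording are assumptions for illustration; the Llama Conversation integration builds the real system prompt from your exposed entities.

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a Zephyr-format chat prompt.
    Token choice (<|endoftext|>) is an assumption; check the model's
    tokenizer config for the actual end-of-turn token."""
    return (
        f"<|system|>\n{system}<|endoftext|>\n"
        f"<|user|>\n{user}<|endoftext|>\n"
        "<|assistant|>\n"
    )

# Made-up device context, mimicking what the integration would inject.
system = (
    "You control a Home Assistant instance.\n"
    "Devices:\n"
    "light.kitchen 'Kitchen Light' = off\n"
    "Services: light.turn_on, light.turn_off"
)
prompt = build_prompt(system, "turn on the kitchen light")
```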

Due to the mix of data used during fine tuning, the 3B model is also capable of basic instruct and QA tasks.

Synthetic Dataset

The synthetic dataset is aimed at covering basic day-to-day operations in Home Assistant, such as turning devices on and off.
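A single synthetic training example might look roughly like the following. The field names and wording are illustrative assumptions, not the dataset's actual schema.

```python
# One illustrative sample pairing device context, a user request, and
# the expected answer plus JSON service call. Schema is assumed here.
sample = {
    "system": "Devices:\nfan.bedroom 'Bedroom Fan' = off",
    "user": "it's getting hot in the bedroom, can you turn the fan on?",
    "assistant": (
        "Turning on the bedroom fan.\n"
        '{"service": "fan.turn_on", "target_device": "fan.bedroom"}'
    ),
}
```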

Home Assistant Component

In order to integrate with Home Assistant, we provide a custom_component that exposes the locally running LLM as a “conversation agent” that can be interacted with through a chat interface, and that integrates with Speech-to-Text and Text-to-Speech add-ons so you can interact with the model by speaking.

Installing with HACS

You can use the badge on the project page to add the repository to HACS and open the download page; it opens your Home Assistant instance to the repository inside the Home Assistant Community Store.

Installing Manually

  1. Ensure you have Samba, SSH, FTP, or another add-on installed that gives you access to the config folder.
  2. If there is not already a custom_components folder, create one now.
  3. Copy the custom_components/llama_conversation folder from this repo to config/custom_components/llama_conversation on your Home Assistant machine.
  4. Restart Home Assistant using the “Developer Tools” tab -> Services -> Run homeassistant.restart
  5. The “LLaMA Conversation” integration should show up in the “Devices” section now.
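If you have direct filesystem access (e.g. over SSH), steps 2 and 3 can be scripted. This is only a sketch with placeholder paths; adjust them for your machine.

```python
import shutil
from pathlib import Path

def install_component(repo_root: Path, config_dir: Path) -> Path:
    """Copy custom_components/llama_conversation from a checkout of the
    repository into Home Assistant's config folder."""
    src = repo_root / "custom_components" / "llama_conversation"
    dst = config_dir / "custom_components" / "llama_conversation"
    dst.parent.mkdir(parents=True, exist_ok=True)  # step 2: ensure folder exists
    shutil.copytree(src, dst, dirs_exist_ok=True)  # step 3: copy the component
    return dst
```

You still need to restart Home Assistant (step 4) for the integration to be discovered.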

Configuring the component as a Conversation Agent

NOTE: ANY DEVICES THAT YOU SELECT TO BE EXPOSED TO THE MODEL WILL BE ADDED AS CONTEXT AND POTENTIALLY HAVE THEIR STATE CHANGED BY THE MODEL. ONLY EXPOSE DEVICES THAT YOU ARE OK WITH THE MODEL MODIFYING THE STATE OF, EVEN IF IT IS NOT WHAT YOU REQUESTED. THE MODEL MAY OCCASIONALLY HALLUCINATE AND ISSUE COMMANDS TO THE WRONG DEVICE! USE AT YOUR OWN RISK.

In order to utilize the conversation agent in Home Assistant:

  1. Navigate to “Settings” -> “Voice Assistants”

  2. Select “+ Add Assistant”

  3. Name the assistant whatever you want.

  4. Select the “Conversation Agent” that we created previously.

  5. If using STT or TTS, configure these now.

  6. Return to the “Overview” dashboard and select the chat icon in the top left.

From here you can submit queries to the AI agent.
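Beyond the chat UI, you can also submit queries programmatically through Home Assistant's conversation REST API. A sketch follows; the host, token, and `agent_id` values are placeholders you must fill in for your own instance.

```python
import json
import urllib.request

def conversation_request(base_url: str, token: str, text: str,
                         agent_id: str = "") -> urllib.request.Request:
    """Build a POST to Home Assistant's /api/conversation/process endpoint.
    base_url, token, and agent_id are placeholders for your instance."""
    payload = {"text": text, "language": "en"}
    if agent_id:
        payload["agent_id"] = agent_id
    return urllib.request.Request(
        f"{base_url}/api/conversation/process",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = conversation_request("http://homeassistant.local:8123",
                           "LONG_LIVED_TOKEN", "turn off the kitchen light")
# urllib.request.urlopen(req) would send it; omitted so the sketch
# runs without a live Home Assistant instance.
```

A long-lived access token can be created from your Home Assistant user profile page.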

In order for any entities to be available to the agent, you must “expose” them first.

  1. Navigate to “Settings” -> “Voice Assistants” -> “Expose” Tab

  2. Select “+ Expose Entities” in the bottom right

  3. Check any entities you would like to be exposed to the conversation agent.