This is an experimental 4x8B Llama 3 MoE.

Llama-3-Magenta-Instruct-4x8B-MoE

You should also check out the updated Llama-3-Peach-Instruct-4x8B-MoE!

This is an experimental MoE created from meta-llama/Meta-Llama-3-8B-Instruct, nvidia/Llama3-ChatQA-1.5-8B, Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R, and Muhammad2003/Llama3-8B-OpenHermes-DPO using Mergekit.

Mergekit YAML file:

base_model: Meta-Llama-3-8B-Instruct
experts:
  - source_model: Meta-Llama-3-8B-Instruct
    positive_prompts:
    - "explain"
    - "chat"
    - "assistant"
    - "think"
    - "roleplay"
    - "versatile"
    - "helpful"
    - "factual"
    - "integrated"
    - "adaptive"
    - "comprehensive"
    - "balanced"
    negative_prompts:
    - "specialized"
    - "narrow"
    - "focused"
    - "limited"
    - "specific"
  - source_model: ChatQA-1.5-8B
    positive_prompts:
    - "python"
    - "math"
    - "solve"
    - "code"
    - "programming"
    negative_prompts:
    - "sorry"
    - "cannot"
    - "factual"
    - "concise"
    - "straightforward"
    - "objective"
    - "dry"
  - source_model: SFR-Iterative-DPO-LLaMA-3-8B-R
    positive_prompts:
    - "chat"
    - "assistant"
    - "AI"
    - "instructive"
    - "clear"
    - "directive"
    - "helpful"
    - "informative"
  - source_model: Llama3-8B-OpenHermes-DPO
    positive_prompts:
    - "analytical"
    - "accurate"
    - "logical"
    - "knowledgeable"
    - "precise"
    - "calculate"
    - "compute"
    - "solve"
    - "work"
    - "python"
    - "code"
    - "javascript"
    - "programming"
    - "algorithm"
    - "tell me"
    - "assistant"
    negative_prompts:
    - "creative"
    - "abstract"
    - "imaginative"
    - "artistic"
    - "emotional"
    - "mistake"
    - "inaccurate"
gate_mode: hidden
dtype: float16
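
For reference, gate_mode: hidden has mergekit-moe initialize each expert's router gates from hidden-state representations of the positive and negative prompts above, so those word lists steer which kinds of tokens get routed to which expert, while dtype: float16 sets the precision of the merged weights. Assuming a recent mergekit install, saving the config as config.yaml and running mergekit-moe config.yaml ./Llama-3-Magenta-Instruct-4x8B-MoE (the output path is just an example) should reproduce the merge.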

The Mergekit YAML file draws some inspiration from LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2.
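
If you want to try the merged model locally, below is a minimal sketch for loading it with transformers. The local path ./Llama-3-Magenta-Instruct-4x8B-MoE is an assumed merge output directory and the prompt is just an example; neither is specified by this README.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed local output directory from the mergekit-moe run above.
model_path = "./Llama-3-Magenta-Instruct-4x8B-MoE"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches the merge config's dtype: float16
    device_map="auto",          # requires the accelerate package
)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Explain what a mixture-of-experts model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

Note that device_map="auto" spreads the roughly 25B merged parameters across whatever GPUs (and CPU memory) are available, which is usually necessary for a 4x8B model in float16.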