This is an experimental 4x8B Llama 3 MoE
61 Pulls Updated 6 months ago
ef317cd6a172 · 15GB

model: arch llama · parameters 24.9B · quantization Q4_K_M · 15GB
params (110B): {"stop":["num_keep 24","<|start_header_id|>","<|end_header_id|>","<|eot_id|
template (254B): {{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .P
Llama-3-Magenta-Instruct-4x8B-MoE
You should also check out the updated Llama-3-Peach-Instruct-4x8B-MoE!
This is an experimental MoE created from meta-llama/Meta-Llama-3-8B-Instruct, nvidia/Llama3-ChatQA-1.5-8B, Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R, and Muhammad2003/Llama3-8B-OpenHermes-DPO using Mergekit.
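The 24.9B figure in the model card is consistent with four Llama 3 8B experts sharing attention and embedding weights while only the MLPs are replicated. A quick sanity check, using the published Llama 3 8B dimensions (hidden 4096, FFN 14336, 32 layers, vocab 128256, GQA with 1024-dim k/v) and ignoring the negligible norm parameters:

```python
# Approximate parameter counts for dense Llama 3 8B vs. a 4x8B MoE merge.
H, FFN, LAYERS, VOCAB, KV = 4096, 14336, 32, 128256, 1024  # Llama 3 8B dims
N_EXPERTS = 4

embed = 2 * VOCAB * H              # input embeddings + untied lm_head
attn = 2 * H * H + 2 * H * KV      # q/o projections + k/v (grouped-query attn)
mlp = 3 * H * FFN                  # gate, up, down projections
router = H * N_EXPERTS             # per-layer gating network added by the merge

dense_8b = embed + LAYERS * (attn + mlp)
moe_4x8b = embed + LAYERS * (attn + N_EXPERTS * mlp + router)

print(f"{dense_8b / 1e9:.1f}B")    # ~8.0B
print(f"{moe_4x8b / 1e9:.1f}B")    # ~24.9B
```

The MoE only roughly triples the parameter count of a single 8B model, rather than quadrupling it, because everything outside the feed-forward blocks is shared.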
Mergekit YAML file:
base_model: Meta-Llama-3-8B-Instruct
experts:
  - source_model: Meta-Llama-3-8B-Instruct
    positive_prompts:
      - "explain"
      - "chat"
      - "assistant"
      - "think"
      - "roleplay"
      - "versatile"
      - "helpful"
      - "factual"
      - "integrated"
      - "adaptive"
      - "comprehensive"
      - "balanced"
    negative_prompts:
      - "specialized"
      - "narrow"
      - "focused"
      - "limited"
      - "specific"
  - source_model: ChatQA-1.5-8B
    positive_prompts:
      - "python"
      - "math"
      - "solve"
      - "code"
      - "programming"
    negative_prompts:
      - "sorry"
      - "cannot"
      - "factual"
      - "concise"
      - "straightforward"
      - "objective"
      - "dry"
  - source_model: SFR-Iterative-DPO-LLaMA-3-8B-R
    positive_prompts:
      - "chat"
      - "assistant"
      - "AI"
      - "instructive"
      - "clear"
      - "directive"
      - "helpful"
      - "informative"
  - source_model: Llama3-8B-OpenHermes-DPO
    positive_prompts:
      - "analytical"
      - "accurate"
      - "logical"
      - "knowledgeable"
      - "precise"
      - "calculate"
      - "compute"
      - "solve"
      - "work"
      - "python"
      - "code"
      - "javascript"
      - "programming"
      - "algorithm"
      - "tell me"
      - "assistant"
    negative_prompts:
      - "creative"
      - "abstract"
      - "imaginative"
      - "artistic"
      - "emotional"
      - "mistake"
      - "inaccurate"
gate_mode: hidden
dtype: float16
Some inspiration for the Mergekit YAML file came from LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2.
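With gate_mode: hidden, Mergekit derives each expert's router weights from hidden-state representations of the positive/negative prompts above; the merged model then routes tokens Mixtral-style at inference, sending each token through its top-k experts and mixing their outputs by softmax-normalized gate scores. A minimal NumPy sketch of that per-token routing step, with illustrative (not extracted) weights:

```python
import numpy as np

def route_token(hidden, w_gate, top_k=2):
    """Pick the top_k experts for one token and softmax-normalize their scores."""
    logits = hidden @ w_gate                     # one gate score per expert
    top = np.argsort(logits)[-top_k:][::-1]      # indices of the best experts
    scores = np.exp(logits[top] - logits[top].max())
    return top, scores / scores.sum()            # expert ids, mixing weights

# Hypothetical example: one 4096-dim token hidden state, 4 experts as in this merge.
rng = np.random.default_rng(0)
h = rng.standard_normal(4096)
w = rng.standard_normal((4096, 4))
experts, weights = route_token(h, w)
print(experts, weights)   # two expert ids; weights sum to 1, largest first
```

Only the selected experts' MLPs run for that token, which is why the 24.9B-parameter model has roughly the per-token compute of a much smaller dense one.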