This is an experimental 4x8B Llama 3 MoE
106 Pulls · Updated 7 months ago
e17aa8db9e34 · 15GB

model
arch llama · parameters 24.9B · quantization Q4_K_M · 15GB

params (110B)
{
  "num_keep": 24,
  "stop": [
    "<|start_header_id|>",
    "<|end_header_id|>",
    …
  ]
}

template (254B)
{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .P…
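
The default request parameters and prompt template shown above can also be exercised directly through the Ollama API. Below is a minimal sketch using the ollama Python package; the local tag name is illustrative, and the options simply mirror the defaults above:

# Sketch only: the tag name and prompt are illustrative, not taken from this page.
import ollama

response = ollama.chat(
    model="llama-3-peach-instruct-4x8b-moe",  # hypothetical local tag for this model
    messages=[{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}],
    options={
        "num_keep": 24,                                        # default from the params block above
        "stop": ["<|start_header_id|>", "<|end_header_id|>"],  # Llama 3 header tokens, as above
    },
)
print(response["message"]["content"])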
Readme
Llama-3-Peach-Instruct-4x8B-MoE
This is an experimental MoE created with Mergekit from the following models (a download sketch follows the list):
- meta-llama/Meta-Llama-3-8B-Instruct
- Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R
- NousResearch/Hermes-2-Theta-Llama-3-8B
- rombodawg/Llama-3-8B-Instruct-Coder
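
All four source models are Hugging Face repositories (the Meta-Llama-3 base is gated behind its license). As a hedged sketch, they could be fetched with huggingface_hub before merging; caching behaviour and paths are the library defaults:

# Sketch: downloads the four source checkpoints listed above into the local HF cache.
from huggingface_hub import snapshot_download

repos = [
    "meta-llama/Meta-Llama-3-8B-Instruct",       # gated; an access token is assumed
    "Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R",
    "NousResearch/Hermes-2-Theta-Llama-3-8B",
    "rombodawg/Llama-3-8B-Instruct-Coder",
]
for repo in repos:
    snapshot_download(repo_id=repo)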
Evaluation:
Q4_K_M:
- GSM8K (5-shot): 0.6983 ± 0.0126
- GSM8K (8-shot, CoT): 0.674 ± 0.0129
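
The GSM8K figures above are reported for the Q4_K_M quant. Below is a hedged sketch of a comparable 5-shot run with EleutherAI's lm-evaluation-harness (v0.4 Python API assumed); the checkpoint path is illustrative, and scoring the quantized GGUF directly would need a llama.cpp-backed model type rather than hf:

# Sketch only: 5-shot GSM8K on the merged (unquantized) checkpoint.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",                             # transformers backend
    model_args="pretrained=./Llama-3-Peach-Instruct-4x8B-MoE,dtype=float16",  # illustrative local path
    tasks=["gsm8k"],                        # the CoT variant is covered by the gsm8k_cot task
    num_fewshot=5,
)
print(results["results"]["gsm8k"])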
Mergekit yaml file (an invocation sketch follows the config):
base_model: Meta-Llama-3-8B-Instruct
experts:
  - source_model: Meta-Llama-3-8B-Instruct
    positive_prompts:
      - "explain"
      - "chat"
      - "assistant"
      - "think"
      - "roleplay"
      - "versatile"
      - "helpful"
      - "factual"
      - "integrated"
      - "adaptive"
      - "comprehensive"
      - "balanced"
    negative_prompts:
      - "specialized"
      - "narrow"
      - "focused"
      - "limited"
      - "specific"
  - source_model: Llama-3-8B-Instruct-Coder
    positive_prompts:
      - "python"
      - "math"
      - "solve"
      - "code"
      - "programming"
      - "javascript"
      - "algorithm"
      - "factual"
    negative_prompts:
      - "sorry"
      - "cannot"
      - "concise"
      - "imaginative"
      - "creative"
  - source_model: SFR-Iterative-DPO-LLaMA-3-8B-R
    positive_prompts:
      - "AI"
      - "instructive"
      - "chat"
      - "assistant"
      - "clear"
      - "directive"
      - "helpful"
      - "informative"
  - source_model: Hermes-2-Theta-Llama-3-8B
    positive_prompts:
      - "chat"
      - "assistant"
      - "analytical"
      - "accurate"
      - "code"
      - "logical"
      - "knowledgeable"
      - "precise"
      - "calculate"
      - "compute"
      - "solve"
      - "work"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
      - "tell me"
      - "assistant"
      - "factual"
    negative_prompts:
      - "abstract"
      - "artistic"
      - "emotional"
      - "mistake"
      - "inaccurate"
gate_mode: hidden
dtype: float16
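
Assuming the YAML above is saved as config.yaml and mergekit (with its MoE support) is installed, the merge could be produced roughly as in the sketch below; the output directory name is illustrative:

# Sketch: invokes the mergekit-moe CLI on the config above.
import subprocess

subprocess.run(
    ["mergekit-moe", "config.yaml", "./Llama-3-Peach-Instruct-4x8B-MoE"],
    check=True,
)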
Some inspiration for the Mergekit yaml file came from LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2.