115b5d9c8596 · 4.7GB · llama architecture · 8.03B parameters · IQ4_NL quantization

Parameters: num_keep 24; stop tokens `<|start_header_id|>`, `<|end_header_id|>`

License: META LLAMA 3 COMMUNITY LICENSE AGREEMENT (Meta Llama 3 Version Release Date: April 18, 2024)
Llama-3-Smaug-8B
Quantizations with i-matrix groups_merged.txt; safetensors converted to fp32.
Built with Meta Llama 3
This model was built by applying the Smaug recipe for improving performance on real-world multi-turn conversations to meta-llama/Meta-Llama-3-8B-Instruct.
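As an Instruct finetune, the model expects the standard Llama 3 chat format (the `<|start_header_id|>` / `<|end_header_id|>` / `<|eot_id|>` special tokens). A minimal sketch of prompt assembly, with the exact whitespace assumed from the standard Llama 3 template:

```python
# Sketch of Llama 3 chat prompt assembly. Token names are the standard
# Llama 3 special tokens; the whitespace layout is an assumption based on
# the usual Llama 3 Instruct template, not taken verbatim from this card.

def format_llama3_prompt(messages):
    """messages: list of {"role": "system"|"user"|"assistant", "content": str}."""
    out = ["<|begin_of_text|>"]
    for m in messages:
        out.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open the assistant header so the model generates the next turn.
    out.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(out)

prompt = format_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

Generation is stopped when the model emits one of the header tokens listed in the stop parameters.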
Model Description
- Developed by: Abacus.AI
- License: https://llama.meta.com/llama3/license/
- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct
Evaluation
MT-Bench
```
########## First turn ##########
                                 score
model                    turn
Llama-3-Smaug-8B         1     8.77500
Meta-Llama-3-8B-Instruct 1     8.31250

########## Second turn ##########
                                 score
model                    turn
Meta-Llama-3-8B-Instruct 2      7.8875
Llama-3-Smaug-8B         2      7.8875

########## Average ##########
                              score
model
Llama-3-Smaug-8B           8.331250
Meta-Llama-3-8B-Instruct   8.10
```
| Model | First turn | Second turn | Average |
|---|---|---|---|
| Llama-3-Smaug-8B | 8.78 | 7.89 | 8.33 |
| Llama-3-8B-Instruct | 8.31 | 7.89 | 8.10 |
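The MT-Bench average is simply the mean of the first- and second-turn scores; a quick check of the reported numbers:

```python
# Per-turn MT-Bench scores from the evaluation output above.
scores = {
    "Llama-3-Smaug-8B":         (8.77500, 7.8875),
    "Meta-Llama-3-8B-Instruct": (8.31250, 7.8875),
}

# Average the two turns for each model.
for model, (turn1, turn2) in scores.items():
    print(f"{model}: {(turn1 + turn2) / 2:.2f}")
# Llama-3-Smaug-8B: 8.33
# Meta-Llama-3-8B-Instruct: 8.10
```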
This version of Smaug uses new techniques and new data compared to Smaug-72B, and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.