Outperforms Llama-3.1-8B-Instruct and Hermes-3-Llama-3.1-8B
472 Pulls Updated 2 months ago
dffb40aeff41 · 4.9GB
Source: https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF
Authors: Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha
We present Llama-3.1-Storm-8B, a model that significantly outperforms Meta AI's Llama-3.1-8B-Instruct and Hermes-3-Llama-3.1-8B across diverse benchmarks, as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
- Self-Curation: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. Our curation criteria focused on educational value and difficulty level, using the same small language model (SLM) for annotation instead of larger models (e.g., 70B, 405B).
- Targeted Fine-Tuning: We performed Spectrum-based targeted fine-tuning of the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR) and freezing the remaining modules. In our work, 50% of the layers were frozen.
- Model Merging: We merged our fine-tuned model with the Llama-Spark model using the SLERP method. This merging method produces a blended model whose characteristics are smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents.

Llama-3.1-Storm-8B improves on Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
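The self-curation step can be sketched as a simple filter over model-annotated scores. This is a minimal illustration, not the authors' pipeline: the field names `edu_value` and `difficulty` and the thresholds are hypothetical, standing in for whatever annotation schema the SLM produced.

```python
def self_curate(examples, min_edu=3, min_difficulty=2):
    """Keep only examples whose model-annotated educational value and
    difficulty clear the given thresholds (field names hypothetical)."""
    return [
        ex for ex in examples
        if ex["edu_value"] >= min_edu and ex["difficulty"] >= min_difficulty
    ]

pool = [
    {"text": "trivial echo task",   "edu_value": 1, "difficulty": 1},
    {"text": "multi-step proof",    "edu_value": 4, "difficulty": 3},
    {"text": "nuanced QA example",  "edu_value": 3, "difficulty": 2},
]
curated = self_curate(pool)  # keeps the two higher-quality examples
```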
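The module-selection logic behind Spectrum-style targeted fine-tuning can be sketched as follows. In the actual Spectrum method, each module's SNR is derived from its weight matrix's singular-value spectrum; here the SNR values are assumed to be precomputed, and the function only shows the ranking-and-freezing step with a 50% trainable fraction as in the text.

```python
def select_trainable_modules(module_snr, train_fraction=0.5):
    """Rank modules by SNR and keep the top fraction trainable.

    module_snr: dict mapping module name -> precomputed signal-to-noise
    ratio. Returns (trainable, frozen) sets of module names.
    """
    ranked = sorted(module_snr, key=module_snr.get, reverse=True)
    n_train = max(1, int(len(ranked) * train_fraction))
    return set(ranked[:n_train]), set(ranked[n_train:])

# Hypothetical SNR values for four attention projections in one layer:
snr = {"q_proj": 3.0, "k_proj": 1.0, "v_proj": 2.0, "o_proj": 0.5}
trainable, frozen = select_trainable_modules(snr)
# trainable -> {"q_proj", "v_proj"}; frozen -> {"k_proj", "o_proj"}
```

In a real training loop, the frozen set would then have `requires_grad` disabled on its parameters, which is what yields the speedup.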
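The SLERP merge in the final step interpolates along the arc between two weight vectors rather than along the straight line, which preserves the norm of the blend better than plain averaging. A minimal sketch over flat Python lists (real merges operate tensor-by-tensor over both checkpoints; the interpolation factor `t` here is illustrative):

```python
import math

def slerp(a, b, t=0.5, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors."""
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    dot = sum(x * y for x, y in zip(a, b)) / max(norm_a * norm_b, eps)
    dot = max(-1.0, min(1.0, dot))
    omega = math.acos(dot)          # angle between the two directions
    if omega < eps:                  # nearly parallel: plain linear blend
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    so = math.sin(omega)
    f_a = math.sin((1 - t) * omega) / so
    f_b = math.sin(t * omega) / so
    return [f_a * x + f_b * y for x, y in zip(a, b)]

# Midpoint of two orthogonal unit vectors stays on the unit sphere:
merged = slerp([1.0, 0.0], [0.0, 1.0], t=0.5)
# merged -> [0.7071..., 0.7071...], i.e. still unit-norm
```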
🏆 Introducing Llama-3.1-Storm-8B
Llama-3.1-Storm-8B builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.
As shown in the left subplot of the above figure, Llama-3.1-Storm-8B improves on Meta-Llama-3.1-8B-Instruct across various benchmarks: instruction following (IFEval), knowledge-driven QA (GPQA, MMLU-Pro), reasoning (ARC-C, MuSR, BBH), reduced hallucinations (TruthfulQA), and function calling (BFCL). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.
We also benchmarked our model against the recently published Hermes-3-Llama-3.1-8B, which is built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks; Hermes-3-Llama-3.1-8B surpasses Llama-3.1-Storm-8B on the MuSR benchmark, and the two models perform comparably on BBH.
Llama-3.1-Storm-8B Model Strengths
Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore Llama-3.1-Storm-8B and look forward to seeing how it will be utilized in various projects and applications.
| Model Strength | Relevant Benchmarks |
|---|---|
| 🎯 Improved Instruction Following | IFEval Strict (+3.93%) |
| 🌐 Enhanced Knowledge-Driven Question Answering | GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%) |
| 🧠 Better Reasoning | ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%) |
| 🤖 Superior Agentic Capabilities | BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%) |
| 🚫 Reduced Hallucinations | TruthfulQA (+9%) |
Note: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.