Updated 10 months ago
eb93f82dfbbf · 6.4GB
LLAMUsic is a fine-tuned version of the Llama 3.2 3B instruction-tuned generative model (text in/text out).
Model Developers: Marco Onorato, Riccardo Preite, Niccolò Monaco
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported.
Llama 3.2 Model Family: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Dec 20, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: MIT License. Please use this model responsibly.
Feedback: You can contact info.llamusic@gmail.com
Intended Use Cases: Llama 3.2 is intended for personal and research use in multiple languages. Instruction-tuned, text-only models are intended for assistant-like chat and for agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks, and quantized models can be adapted for a variety of on-device use cases with limited compute resources.
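As a minimal sketch of assistant-style local usage via Ollama (the model tag `llamusic` below is an assumption for illustration; it is not confirmed by this card):

```shell
# Pull the model weights from the registry (hypothetical tag).
ollama pull llamusic

# Ask a one-off assistant-style question; omit the quoted prompt
# to start an interactive chat session instead.
ollama run llamusic "Suggest a chord progression for a melancholic folk song."
```

Quantized builds of the same model follow the same workflow and are the usual choice on machines with limited memory.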
Out of Scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws); use in any other way prohibited by the Acceptable Use Policy and the Llama 3.2 Community License; use in languages beyond those explicitly listed as supported in this model card.