Source: https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat
Qwen1.5-MoE-A2.7B-Chat
Introduction
Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data.
For more details, please refer to our blog post and GitHub repo.
Model Details
Qwen1.5-MoE employs the Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, Qwen1.5-MoE-A2.7B is upcycled from Qwen-1.8B. It has 14.3B parameters in total and 2.7B activated parameters at runtime. While it achieves performance comparable to Qwen1.5-7B, it requires only 25% of the training resources. We also observed that its inference speed is 1.74 times that of Qwen1.5-7B.
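The chat model can be run with the Hugging Face transformers library. Below is a minimal usage sketch, not taken from the original card: it assumes a transformers release recent enough to include the qwen2_moe architecture (4.40.0 or later), loads the model, reports its total parameter count, and generates a reply to a single chat message.

```python
# Minimal usage sketch (illustrative, not from the original card).
# Assumes transformers >= 4.40 (includes the qwen2_moe architecture)
# and accelerate installed for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B-Chat"

# Load the chat model and tokenizer.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Total parameter count; only ~2.7B of these are activated per token.
total_params = sum(p.numel() for p in model.parameters())
print(f"total parameters: {total_params / 1e9:.1f}B")

# Build a chat prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Give me a short introduction to Mixture of Experts models."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(reply)
```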
Training details
We pretrained the models on a large amount of data and post-trained them with both supervised finetuning and direct preference optimization.