Qwen2 MoE 57B

Qwen2-57B-A14B-Instruct

Introduction

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 57B-A14B Mixture-of-Experts Qwen2 model.
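
As a quick way to try the instruction-tuned checkpoint, it can be loaded with Hugging Face transformers. The following is a minimal sketch, assuming a transformers release recent enough to include Qwen2-MoE support; the prompt and generation settings are illustrative, not prescribed by the Qwen team.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-57B-A14B-Instruct"

# device_map="auto" shards the 57B MoE checkpoint across available GPUs;
# torch_dtype="auto" keeps the dtype stored in the checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
# Render the conversation with Qwen2's chat template before tokenizing.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(response)
```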

Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 generally surpasses most open-source models and is competitive with proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, and more.

Qwen2-57B-A14B-Instruct supports a context length of up to 65,536 tokens, enabling the processing of extensive inputs. Please refer to the long-text deployment section of the original model card for detailed instructions on how to deploy Qwen2 for handling long texts.
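
To actually serve the full 65,536-token window, an inference engine with efficient long-context support such as vLLM is typically used. The snippet below is a sketch only, assuming a vLLM build that supports Qwen2-MoE; max_model_len and tensor_parallel_size are illustrative values to be tuned for your hardware.

```python
from vllm import LLM, SamplingParams

# Allocate KV cache for the full 65,536-token context window.
# tensor_parallel_size=4 is an illustrative setting for a multi-GPU node.
llm = LLM(
    model="Qwen/Qwen2-57B-A14B-Instruct",
    max_model_len=65536,
    tensor_parallel_size=4,
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Summarize the following report: ..."], params)
print(outputs[0].outputs[0].text)
```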

For more details, please refer to our blog and GitHub.

Model Details

Qwen2 is a series of decoder-only language models available in multiple sizes. For each size, we release both the base language model and the aligned chat model. The models are based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, and related improvements. Additionally, we have an improved tokenizer that adapts to multiple natural languages and code.
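
These architectural choices are visible in the published model configuration. As a hedged illustration, the attribute names below follow the Qwen2-MoE config class in transformers and may differ across versions:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2-57B-A14B-Instruct")

# Grouped-query attention: fewer key/value heads than query heads.
print("query heads:", config.num_attention_heads)
print("key/value heads:", config.num_key_value_heads)

# Mixture-of-Experts layout: total experts vs. experts routed per token.
print("experts:", config.num_experts)
print("experts per token:", config.num_experts_per_tok)
```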

Evaluation

We briefly compare Qwen2-57B-A14B-Instruct with similar-sized instruction-tuned LLMs, including Qwen1.5-32B-Chat. The results are shown as follows:

[Figure: benchmark comparison of Qwen2-57B-A14B-Instruct against similar-sized instruction-tuned models]

Imported from https://hf-mirror.com/Qwen/Qwen2-57B-A14B-Instruct

WeChat ID: TAOZHIYUAI