678 Downloads Updated 1 month ago
fa6d1415a672 · 18GB ·
Qwen3-Coder is available in multiple sizes. Today, we’re excited to introduce Qwen3-Coder-30B-A3B-Instruct. This streamlined model maintains impressive performance and efficiency, featuring the following key enhancements:
Qwen3-Coder-30B-A3B-Instruct has the following features:
- Type: Causal Language Model
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total, 3.3B activated
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 262,144 tokens natively
NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
To achieve optimal performance, we recommend the following settings:
1. Sampling Parameters: We suggest using `temperature=0.7`, `top_p=0.8`, `top_k=20`, and `repetition_penalty=1.05`.
2. Adequate Output Length: We recommend using an output length of 65,536 tokens for most queries, which is adequate for instruct models.
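The settings above can be sketched as a request body for an Ollama-style HTTP API. This is a minimal illustration, not an official client: the model tag is an assumption (check your local model list), and `repeat_penalty`/`num_predict` are Ollama's names for the repetition penalty and output-length limits.

```python
import json

def build_request(prompt: str) -> dict:
    """Build a /api/generate request body with the recommended settings."""
    return {
        "model": "qwen3-coder:30b",  # hypothetical tag; adjust to your deployment
        "prompt": prompt,
        "options": {
            "temperature": 0.7,
            "top_p": 0.8,
            "top_k": 20,
            "repeat_penalty": 1.05,  # Ollama's name for repetition_penalty
            "num_predict": 65536,    # adequate output length for most queries
        },
    }

payload = build_request("Write a Python function that reverses a string.")
print(json.dumps(payload["options"], indent=2))
```

POSTing this body to a running Ollama server's `/api/generate` endpoint would apply the recommended sampling parameters; the same `options` values can be set in a `Modelfile` instead.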
If you find our work helpful, feel free to cite us:
```
@misc{qwen3technicalreport,
  title={Qwen3 Technical Report},
  author={Qwen Team},
  year={2025},
  eprint={2505.09388},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.09388},
}
```