Introduction

We introduce EXAONE 4.0, which integrates a Non-reasoning mode and a Reasoning mode to achieve both the excellent usability of EXAONE 3.5 and the advanced reasoning abilities of EXAONE Deep. To pave the way for the agentic AI era, EXAONE 4.0 incorporates essential features such as agentic tool use, and its multilingual capabilities are extended to support Spanish in addition to English and Korean.
The EXAONE 4.0 model series consists of two sizes: a mid-size 32B model optimized for high performance, and a small-size 1.2B model designed for on-device applications.
In the EXAONE 4.0 architecture, we introduce the following changes compared to previous EXAONE models:
- Hybrid Attention: For the 32B model, we adopt a hybrid attention scheme that combines Local attention (sliding window attention) with Global attention (full attention) in a 3:1 ratio. We do not use RoPE (Rotary Positional Embedding) for global attention, for better global context understanding.
- QK-Reorder-Norm: We reorder the LayerNorm position from the traditional Pre-LN scheme by applying LayerNorm directly to the attention and MLP outputs, and we add RMS normalization right after the Q and K projections. This yields better performance on downstream tasks despite the additional computation. A sketch of both changes is shown below.

For more details, please refer to our technical report, HuggingFace paper, blog, and GitHub.
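The following is a minimal, self-contained sketch (not the official implementation) of the two changes described above: RMS normalization applied to the Q and K projections inside attention, LayerNorm moved onto the attention and MLP outputs, and a 3:1 local-to-global layer pattern. All class names, dimensions, the MLP activation, and the exact placement of global layers are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        x = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
        return x * self.weight


class QKReorderAttention(nn.Module):
    """Causal self-attention with RMS normalization applied to the Q and K
    projections before attention scores are computed."""

    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.o_proj = nn.Linear(dim, dim, bias=False)
        self.q_norm = RMSNorm(self.head_dim)  # RMS norm right after the Q projection
        self.k_norm = RMSNorm(self.head_dim)  # RMS norm right after the K projection

    def forward(self, x):
        b, t, d = x.shape
        q = self.q_norm(self.q_proj(x).view(b, t, self.n_heads, self.head_dim))
        k = self.k_norm(self.k_proj(x).view(b, t, self.n_heads, self.head_dim))
        v = self.v_proj(x).view(b, t, self.n_heads, self.head_dim)
        q, k, v = (z.transpose(1, 2) for z in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, d))


class Block(nn.Module):
    """Transformer block where LayerNorm is applied directly to the attention
    and MLP outputs instead of the usual Pre-LN placement."""

    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.attn = QKReorderAttention(dim, n_heads)
        self.attn_norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.mlp_norm = nn.LayerNorm(dim)

    def forward(self, x):
        x = x + self.attn_norm(self.attn(x))  # LayerNorm on the attention output
        x = x + self.mlp_norm(self.mlp(x))    # LayerNorm on the MLP output
        return x


# Hybrid layer pattern: three local (sliding-window) layers for every global
# (full-attention) layer; the exact interleaving in EXAONE 4.0 may differ.
layer_types = ["local" if (i + 1) % 4 else "global" for i in range(64)]
```

Note that this sketch omits sliding-window masking, GQA, and RoPE handling; it only illustrates where the normalization layers sit relative to the projections and sublayer outputs.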
Model Configuration

- Number of Parameters (without embeddings): 30.95B
- Number of Layers: 64
- Number of Attention Heads: GQA with 40 heads and 8 KV heads
- Vocab Size: 102,400
- Context Length: 131,072 tokens
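As a quick check, the snippet below reads these values back from the published model configuration via transformers. The repo ID and the attribute names are assumptions based on common Hugging Face conventions and may differ in the official release.

```python
# Hypothetical sanity check of the configuration above; the repo ID
# "LGAI-EXAONE/EXAONE-4.0-32B" and attribute names are assumptions.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("LGAI-EXAONE/EXAONE-4.0-32B")
print(config.num_hidden_layers)        # expected: 64
print(config.vocab_size)               # expected: 102,400
print(config.num_attention_heads)      # expected: 40 (GQA)
print(config.num_key_value_heads)      # expected: 8 KV heads
print(config.max_position_embeddings)  # expected: 131,072 tokens
```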