Qwen-SEA-LION-v4-32B-IT is a multilingual model that has been pretrained and instruction-tuned for the Southeast Asia region. It was developed by AI Singapore and funded by the National Research Foundation, Singapore.

Qwen-SEA-LION-v4-32B-IT

Last update: 2025-10-16

SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

Qwen-SEA-LION-v4-32B-IT is based on Qwen3, which provides a strong foundation with support for over 100 languages and advanced reasoning capabilities. The model underwent continued pre-training on approximately 100B tokens sampled from SEA-Pile v2, a pretraining corpus of over one trillion tokens spanning 7 SEA languages: Burmese, Indonesian, Malay, Filipino, Tamil, Thai, and Vietnamese. It was then post-trained on a high-quality dataset of approximately 8 million question-and-answer pairs to produce the final instruction-tuned model.

Qwen-SEA-LION-v4-32B-IT inherits the following features from Qwen3-32B (a usage sketch follows this list):

  • A native context length of 32,768 tokens
  • Qwen3’s reasoning capabilities
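
As a rough usage sketch, the snippet below shows how the instruction-tuned model could be run with Hugging Face transformers, including Qwen3's optional thinking mode. The repository id and the `enable_thinking` chat-template flag follow Qwen3 conventions and are assumptions to verify against the model's HuggingFace page.

```python
# Minimal sketch, assuming the model is published on HuggingFace under the
# repository id below and that it keeps Qwen3's chat template, including
# the enable_thinking switch (both are assumptions; verify on the model page).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/Qwen-SEA-LION-v4-32B-IT"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Terangkan apa itu SEA-LION."}]

# Qwen3 chat templates accept enable_thinking; set it to False for direct answers.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Strip the prompt tokens and print only the newly generated text.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```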

SEA-LION stands for Southeast Asian Languages In One Network.

We performed continued pre-training in English and SEA languages on Qwen3-32B, a decoder-only model built on the Qwen3 architecture, followed by post-training to create Qwen-SEA-LION-v4-32B-IT.

For tokenization, the model employs the default tokenizer used in Qwen3-32B.
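
Because the tokenizer is unchanged from the base model, loading it from either repository should produce identical tokenizations. A quick sanity check is sketched below; both repository ids are assumptions to verify against the respective model pages.

```python
# Minimal sketch: compare the SEA-LION tokenizer with the base Qwen3-32B
# tokenizer on a SEA-language sample. Both repository ids are assumptions.
from transformers import AutoTokenizer

sea_lion_tok = AutoTokenizer.from_pretrained("aisingapore/Qwen-SEA-LION-v4-32B-IT")
qwen3_tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-32B")

text = "Selamat pagi! Apa khabar?"  # Malay sample sentence
print(sea_lion_tok.tokenize(text) == qwen3_tok.tokenize(text))  # expected: True
```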

  • Developed by: Products Pillar, AI Singapore
  • Funded by: National Research Foundation (NRF), Singapore
  • Shared by: Products Pillar, AI Singapore
  • Model type: Decoder
  • Context Length: 32k tokens
  • Language(s) (NLP): Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai, and Vietnamese
  • License: Qwen Terms of Service / Qwen Usage Policy
  • Continued pre-trained from model: Qwen3-32B

For details on Qwen-SEA-LION-v4-32B-IT's performance, please refer to the SEA-HELM leaderboard at https://leaderboard.sea-lion.ai/. For further information, see AI Singapore’s HuggingFace page for this model.