
Llama-SEA-LION-v3-8B-IT is a multilingual model which has been pretrained and instruct-tuned for the Southeast Asia region. It was developed by AI Singapore and funded by the National Research Foundation, Singapore.


Llama-SEA-LION-v3-8B-IT

SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

We performed instruction tuning in English and also in SEA languages such as Indonesian, Javanese, Sundanese, Tamil, Thai and Vietnamese on our continued pre-trained Llama-SEA-LION-v3-8B, a decoder model using the Llama 3.1 architecture, to create Llama-SEA-LION-v3-8B-IT.

For tokenisation, the model employs the default tokenizer used in Llama 3.1 8B Instruct. The model has a context length of 128k tokens.
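As a sketch, the model can be served locally and queried through Ollama's `/api/chat` endpoint. The snippet below only constructs the request payload rather than sending it; the model tag `llama-sea-lion-v3-8b-it`, the helper name, and the `num_ctx` value are assumptions for illustration (check `ollama list` for the actual tag on your machine):

```python
import json

def build_chat_request(prompt, model="llama-sea-lion-v3-8b-it", num_ctx=8192):
    # Hypothetical helper: assemble a JSON payload for Ollama's /api/chat.
    # The model supports a 128k context; num_ctx sets how much of it to use.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "options": {"num_ctx": num_ctx},
        "stream": False,
    }

# Example prompt in Indonesian, one of the instruction-tuned SEA languages.
payload = build_chat_request("Terjemahkan ke bahasa Inggris: Selamat pagi!")
print(json.dumps(payload, indent=2))
```

The payload could then be POSTed to `http://localhost:11434/api/chat` with any HTTP client once the model has been pulled.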

SEA-LION stands for Southeast Asian Languages In One Network.

  • Developed by: Products Pillar, AI Singapore
  • Funded by: Singapore NRF
  • Model type: Decoder
  • Languages supported: Burmese, Chinese, English, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tamil, Thai, Vietnamese
  • License: Llama 3.1 Community License

For more details, please refer to AI Singapore’s HuggingFace page for this model. The original GGUF files can be obtained from this HuggingFace repository.