SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama-SEA-LION-v2-8B-IT is a multilingual model which has been fine-tuned with around 100,000 English instruction-completion pairs, alongside a smaller pool of around 50,000 instruction-completion pairs from other ASEAN languages such as Indonesian, Thai and Vietnamese. These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive and high-quality datasets.
SEA-LION stands for Southeast Asian Languages In One Network.
For more details, please refer to AI Singapore's HuggingFace page for this model. The original GGUF files can be obtained from this HuggingFace repository.
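As a minimal sketch of how the model could be loaded and queried with Hugging Face transformers (the repository ID below is an assumption; confirm the exact name on AI Singapore's HuggingFace page):

```python
# Minimal sketch: loading Llama-SEA-LION-v2-8B-IT with Hugging Face transformers.
# The repository ID is an assumption; check AI Singapore's HuggingFace page for the exact name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/llama3-8b-cpt-sea-lionv2-instruct"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt using the model's chat template.
messages = [
    {"role": "user", "content": "Terjemahkan ke Bahasa Inggris: Selamat pagi, apa kabar?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The GGUF files referenced above can instead be run with GGUF-compatible runtimes such as llama.cpp or Ollama, which is useful for local inference without a GPU-backed Python environment.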