10 models

| Name | Size | Context window | Input | Updated |
|------|------|----------------|-------|---------|
| Llama-SEA-LION-v3.5-70B-R:latest | 43GB | 128K | Text | 5 months ago |
| Llama-SEA-LION-v3.5-70B-R:q2_k | 26GB | 128K | Text | 5 months ago |
| Llama-SEA-LION-v3.5-70B-R:q3_k_m | 34GB | 128K | Text | 5 months ago |
| Llama-SEA-LION-v3.5-70B-R:q4_0 | 40GB | 128K | Text | 5 months ago |
| Llama-SEA-LION-v3.5-70B-R:q4_k_m | 43GB | 128K | Text | 5 months ago |
| Llama-SEA-LION-v3.5-70B-R:q5_0 | 49GB | 128K | Text | 5 months ago |
| Llama-SEA-LION-v3.5-70B-R:q5_k_m | 50GB | 128K | Text | 5 months ago |
| Llama-SEA-LION-v3.5-70B-R:q6_k | 58GB | 128K | Text | 5 months ago |
| Llama-SEA-LION-v3.5-70B-R:q8_0 | 43GB | 128K | Text | 5 months ago |
| Llama-SEA-LION-v3.5-70B-R:f16 | 141GB | 128K | Text | 5 months ago |
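The tags differ only in quantisation: lower-bit quants (q2_k, q3_k_m) trade accuracy for memory, while q8_0 and f16 preserve more of the original weights. As a minimal sketch, assuming the tag has already been pulled (`ollama pull Llama-SEA-LION-v3.5-70B-R:q4_k_m`) and the official `ollama` Python package is installed, any of the listed tags can be queried like this:

```python
# Minimal sketch: chat with a pulled tag via the Ollama Python client.
# Requires `pip install ollama` and a running Ollama server.
import ollama

response = ollama.chat(
    model="Llama-SEA-LION-v3.5-70B-R:q4_k_m",  # any tag from the list above
    messages=[
        {"role": "user", "content": "Terjemahkan ke dalam bahasa Inggris: Selamat pagi!"},
    ],
)
print(response["message"]["content"])
```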
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama-SEA-LION-v3.5-70B-R is a hybrid model that handles both complex reasoning tasks and general text generation, with mode selection managed through the tokenizer’s chat template.
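As a hedged sketch of that mode selection with Hugging Face transformers: the `thinking_mode` keyword below is an assumption about the template’s interface, so check the model card for the exact parameter name and accepted values.

```python
# Sketch: selecting between reasoning and general-generation modes via the
# chat template. The "thinking_mode" kwarg is an ASSUMPTION; the model card
# documents the exact parameter the template expects.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aisingapore/Llama-SEA-LION-v3.5-70B-R")
messages = [{"role": "user", "content": "Why is the sky blue?"}]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    thinking_mode="on",  # assumed values: "on" (reasoning) / "off" (plain generation)
)
print(prompt)
```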
Llama-SEA-LION-v3.5-70B-R was created by further instruction-tuning our instruction-tuned Llama-SEA-LION-v3-70B-IT, a decoder model built on the Llama 3.1 architecture, in English and in SEA languages such as Filipino, Indonesian, Tamil, Thai and Vietnamese.
For tokenisation, the model employs the default tokenizer used in Llama 3.1 70B Instruct. The model has a context length of 128K.
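Because the tokenizer matches Llama 3.1 70B Instruct, prompt length can be checked against the 128K window (131,072 tokens for Llama 3.1) before sending long inputs. A small sketch:

```python
# Sketch: pre-flight token count against the 128K context window.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aisingapore/Llama-SEA-LION-v3.5-70B-R")
MAX_CONTEXT = 131_072  # 128K, the Llama 3.1 context length

def fits_in_context(prompt: str, reserve_for_output: int = 1024) -> bool:
    """Return True if the prompt leaves room for generation in the window."""
    return len(tokenizer.encode(prompt)) + reserve_for_output <= MAX_CONTEXT

print(fits_in_context("Selamat pagi, apa khabar?"))  # True for short prompts
```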
SEA-LION stands for Southeast Asian Languages In One Network.
For more details, please refer to AI Singapore’s HuggingFace page for this model. The original GGUF files can be obtained from the accompanying HuggingFace repository.