150 Pulls · Updated 10 months ago

Continued finetuning of https://huggingface.co/meta-llama/Llama-3.2-3B on a highly curated 1.5B-token Malaysian instruction dataset.

Models

13 models

malaysian-llama-3.2-3b-instruct:q3_k_s · 1.7GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q3_k_m · 1.9GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q3_k_l · 2.0GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q4_0 · 2.1GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q4_1 · 2.3GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q4_k_s · 2.1GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q4_k_m · 2.2GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q5_0 · 2.5GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q5_1 · 2.7GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q5_k_s · 2.5GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q5_k_m · 2.6GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q6_k · 3.0GB · 128K context window · Text · 10 months ago
malaysian-llama-3.2-3b-instruct:q8_0 · 3.8GB · 128K context window · Text · 10 months ago
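
The tags above differ only in quantization level: the q3 and q4 variants trade some output quality for a smaller memory footprint, while q6_k and q8_0 stay closer to the original weights. As a minimal sketch of how any pulled tag can be queried through a local Ollama instance, the snippet below calls Ollama's HTTP generate endpoint; the chosen tag, prompt, and default host are illustrative, and it assumes the tag has already been pulled.

```python
# Minimal sketch: query a locally pulled quantization tag through Ollama's
# HTTP API (http://localhost:11434 is Ollama's default listen address).
# The tag and prompt below are illustrative, not prescriptive.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL_TAG = "malaysian-llama-3.2-3b-instruct:q4_k_m"  # any tag from the list above

payload = {
    "model": MODEL_TAG,
    "prompt": "Terangkan secara ringkas apa itu nasi lemak.",
    "stream": False,  # return one JSON object instead of a token stream
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.loads(response.read().decode("utf-8"))

# With stream=False, the full completion is carried in the "response" field.
print(body["response"])
```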

Readme

This model was converted to GGUF format from mesolitica/malaysian-Llama-3.2-3B-Instruct using llama.cpp. Refer to the original model card for more details on the model.
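
The readme does not spell out the exact conversion commands, but a typical llama.cpp workflow for producing quantized GGUF files like the tags above looks roughly like the sketch below. The script and binary names (convert_hf_to_gguf.py, llama-quantize), flags, paths, and chosen quantization types are assumptions about a recent llama.cpp checkout, not a record of how this particular repository was built.

```python
# Hypothetical sketch of an HF -> f16 GGUF -> quantized GGUF pipeline with
# llama.cpp tools. Script/binary names and flags are assumptions; adjust
# paths and quantization types to your setup.
import subprocess

HF_MODEL_DIR = "malaysian-Llama-3.2-3B-Instruct"  # local clone of the HF repo
F16_GGUF = "malaysian-llama-3.2-3b-instruct.f16.gguf"

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    [
        "python", "convert_hf_to_gguf.py", HF_MODEL_DIR,
        "--outfile", F16_GGUF,
        "--outtype", "f16",
    ],
    check=True,
)

# 2. Quantize the f16 GGUF into smaller variants like those listed above.
for quant_type in ["Q4_K_M", "Q5_K_M", "Q8_0"]:
    out_path = f"malaysian-llama-3.2-3b-instruct.{quant_type.lower()}.gguf"
    subprocess.run(
        ["llama-quantize", F16_GGUF, out_path, quant_type],
        check=True,
    )
```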