The model used is a quantized version of `Llama-3-Taiwan-8B-Instruct-DPO`. More details are available on the Hugging Face model page: https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct-DPO

Models

15 models

llama-3-taiwan-8b-instruct-dpo:q2_k · 3.2GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q3_k_s · 3.7GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q3_k_m · 4.0GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q3_k_l · 4.3GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q4_0 · 4.7GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q4_1 · 5.1GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q4_k_s · 4.7GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q4_k_m · 4.9GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q5_0 · 5.6GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q5_1 · 6.1GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q5_k_s · 5.6GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q5_k_m · 5.7GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q6_k · 6.6GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:q8_0 · 8.5GB · 8K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-dpo:f16 · 16GB · 8K context window · Text · 1 year ago

Readme

No readme