The model is a quantized version of `Llama-3-Taiwan-8B-Instruct-128k`. More details can be found at https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct-128k
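A minimal usage sketch, assuming the model is published on an Ollama registry under the tag names listed on this page (the prompt text is an illustrative example, not from the page):

```shell
# Hypothetical CLI usage; tag names taken from this page's model list.
MODEL=llama-3-taiwan-8b-instruct-128k

# Guarded so the snippet is a no-op on machines without Ollama installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL:q4_k_m"                  # 4.9GB medium 4-bit quant
  ollama run  "$MODEL:q4_k_m" "你好，請自我介紹。"  # one-shot prompt
fi
```

Omitting the tag (`ollama pull "$MODEL"`) fetches whatever `latest` points to, which on this page is the q5_k quantization.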

Models

11 models

llama-3-taiwan-8b-instruct-128k:latest · 5.7GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:q5_k (latest) · 5.7GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:q4_k_s · 4.7GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:q4_k_m · 4.9GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:q5_0 · 5.6GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:q5_1 · 6.1GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:q5_k_s · 5.6GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:q5_k_m · 5.7GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:q6_k · 6.6GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:q8_0 · 8.5GB · 128K context window · Text · 1 year ago
llama-3-taiwan-8b-instruct-128k:f16 · 16GB · 128K context window · Text · 1 year ago
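The size differences between tags follow from bits per weight. A rough back-of-the-envelope check, assuming roughly 8.0B parameters and approximate effective bits-per-weight figures for the llama.cpp quantization formats (both numbers are assumptions, not taken from this page), lines up with the listed file sizes:

```python
# Rough GGUF size estimates for llama.cpp quantizations of an ~8B model.
# Bits-per-weight values are approximate effective averages (assumption),
# not exact format definitions.
PARAMS = 8.03e9  # approximate Llama 3 8B parameter count

BITS_PER_WEIGHT = {
    "q4_k_s": 4.6, "q4_k_m": 4.9,
    "q5_0": 5.5, "q5_k_s": 5.5, "q5_k_m": 5.7, "q5_1": 6.0,
    "q6_k": 6.6, "q8_0": 8.5, "f16": 16.0,
}

def est_size_gb(tag: str) -> float:
    """Estimated file size in GB: parameters x bits per weight / 8 bits."""
    return PARAMS * BITS_PER_WEIGHT[tag] / 8 / 1e9

for tag, listed in [("q4_k_m", 4.9), ("q8_0", 8.5), ("f16", 16.0)]:
    print(f"{tag}: ~{est_size_gb(tag):.1f}GB (listed: {listed}GB)")
```

The estimates track the listed sizes closely (e.g. ~4.9GB for q4_k_m, ~8.5GB for q8_0), which is why stepping down one quantization level saves roughly half a gigabyte to a gigabyte at this model scale.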
