
The model is a quantized version of `Llama-3-Taiwan-8B-Instruct`. More details can be found on its Hugging Face page: https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct


3f0396cc71cc · 5.7GB

model      llama · 8.03B · Q5_K_M
template   {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .P…
params     { "num_ctx": 8192, "stop": [ "<|start_header_id|>", "<|end_header_id|>", …

Readme

The model used is a quantized version of Llama-3 Taiwan 8B Instruct, an 8-billion-parameter model specialized for conversation in Traditional Chinese. Quantization reduces the model's size and computational requirements while largely preserving its performance, making it suitable for deployment in resource-constrained environments. More details can be found on the Hugging Face page linked above.
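
Below is a minimal sketch of querying the model through a locally running Ollama server via its `/api/chat` endpoint. It assumes Ollama is serving on its default port (11434) and that the model has been pulled under the tag `llama-3-taiwan-8b-instruct`; that tag is a placeholder, so substitute the actual tag shown on this page. The `num_ctx` option mirrors the value listed in the params layer above.

```python
# Sketch: chat with the quantized Llama-3-Taiwan model via a local Ollama server.
# Assumptions: Ollama runs on the default port 11434, and the model tag below
# ("llama-3-taiwan-8b-instruct") is a placeholder for the tag on this page.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL_TAG = "llama-3-taiwan-8b-instruct"  # placeholder; use the actual tag

payload = {
    "model": MODEL_TAG,
    "messages": [
        {
            "role": "user",
            "content": "Please introduce Taiwan's night-market culture, replying in Traditional Chinese.",
        }
    ],
    "stream": False,
    # Matches the context-window size shown in the params layer above.
    "options": {"num_ctx": 8192},
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

# The assistant's reply text lives under message.content in the response JSON.
print(reply["message"]["content"])
```

The same request can also be issued from the command line with `ollama run <tag>`; the HTTP sketch is only meant to show how the context-window parameter can be passed per request.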