
The model is a quantized version of `Llama-3-Taiwan-8B-Instruct-128k`. More details can be found on the model's Hugging Face page: https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct-128k
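Once the model is pulled into a local Ollama instance, it is queried through Ollama's REST API; the `num_ctx` option shown in the parameters below is what unlocks the full 128k context window. The sketch below builds such a request body. The model tag used here is hypothetical and may differ from the actual tag on the registry.

```python
import json

# Hypothetical model tag; the actual tag on the registry may differ.
MODEL = "llama-3-taiwan-8b-instruct-128k"

def build_generate_request(prompt: str, num_ctx: int = 131072) -> dict:
    """Build a request body for Ollama's POST /api/generate endpoint.

    num_ctx defaults to the 131072-token (128k) context window this
    model advertises; smaller values reduce memory use.
    """
    return {
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of chunks
        "options": {"num_ctx": num_ctx},
    }

payload = build_generate_request("請簡短自我介紹。")
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Against a running Ollama server the payload would be sent with, e.g., `requests.post("http://localhost:11434/api/generate", json=payload)`.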


262b9380c164 · 4.9GB · llama · 8.03B · Q4_K_M
Template

{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .P

Params

{ "num_ctx": 131072, "stop": [ "<|start_header_id|>", "<|end_header_id|>",
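In an Ollama Modelfile, the parameters above would be declared roughly as follows. This is a minimal sketch: the `FROM` path is a placeholder for the actual GGUF weights, and since both the template string and the stop list are cut off on this page, only the values visible above are reproduced.

```
# Placeholder path to the quantized GGUF weights (assumption)
FROM ./llama-3-taiwan-8b-instruct-128k.Q4_K_M.gguf

# Values taken from the Params string above
PARAMETER num_ctx 131072
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
```

A `TEMPLATE` directive would normally accompany these parameters, but the template shown on this page is truncated, so it is omitted here.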

Readme

No readme