222 · 1 year ago

The model is a quantized version of `Llama-3-Taiwan-8B-Instruct-128k`. More details can be found at https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct-128k


231a5e37a6a0 · 4.7GB

llama · 8.03B · Q4_K_S
Params

{ "num_ctx": 131072, "stop": [ "<|start_header_id|>", "<|end_header_id|>",

Template

{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .P
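The parameters above (a 131,072-token context window and Llama 3 header tokens as stop sequences) can be applied when building a local model with Ollama. The sketch below is a minimal, hypothetical Modelfile assuming the quantized GGUF weights are available locally; the file path and the `<|eot_id|>` stop token (which appears in the template) are assumptions, not values confirmed by this page.

```
# Hypothetical Modelfile sketch — the GGUF path below is an assumption.
FROM ./llama-3-taiwan-8b-instruct-128k.Q4_K_S.gguf

# Match the published parameters: 128k context and Llama 3 stop tokens.
PARAMETER num_ctx 131072
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
```

A model could then be built and run with `ollama create my-taiwan-llama -f Modelfile` followed by `ollama run my-taiwan-llama`.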

Readme

No readme