cwchang/llama-3-taiwan-8b-instruct-128k:q5_1

222 Downloads · Updated 1 year ago
This model is a quantized version of `Llama-3-Taiwan-8B-Instruct-128k`. More details can be found at https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct-128k.
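One way to try the model locally is Ollama's HTTP API. The snippet below is a minimal sketch, assuming a local Ollama server on the default port and that the tag has already been pulled (e.g. with `ollama pull cwchang/llama-3-taiwan-8b-instruct-128k:q5_1`); the prompts are only illustrative.

```python
# Minimal sketch: chat with the model through a local Ollama server.
import requests

MODEL = "cwchang/llama-3-taiwan-8b-instruct-128k:q5_1"

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "你是一位使用繁體中文回答的助理。"},
            {"role": "user", "content": "請用繁體中文簡單介紹台灣。"},
        ],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```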
841918de1163 · 6.1GB

model (6.1GB)
arch llama · parameters 8.03B · quantization Q5_1
params (145B)
{ "num_ctx": 131072, "stop": [ "<|start_header_id|>", "<|end_header_id|>", … ] }
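The params layer ships the model's default generation options: a 131,072-token context window and Llama 3 header tokens as stop strings. These defaults can be overridden per request through the `options` field of the API; below is a minimal sketch, again assuming a local Ollama server, using only the stop strings visible above.

```python
# Minimal sketch: override the model's default parameters for a single request.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "cwchang/llama-3-taiwan-8b-instruct-128k:q5_1",
        "prompt": "台北今天天氣如何？",
        "stream": False,
        "options": {
            "num_ctx": 131072,  # full 128k window; needs substantial RAM/VRAM
            "stop": ["<|start_header_id|>", "<|end_header_id|>"],
        },
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```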
template (257B)
"{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .P…"
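The template layer is a Go template in the Llama 3 chat format. The text above is cut off, so the sketch below assumes the usual continuation (a user turn followed by an assistant header); it only illustrates roughly what the rendered prompt string looks like, and is not the exact stored template.

```python
# Illustrative sketch of the prompt the (truncated) Llama 3 chat template renders.
# The tail of the template is an assumption; only the system portion is visible above.
def render_prompt(system: str, user: str) -> str:
    parts = []
    if system:
        parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>")
    parts.append(f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>")
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

print(render_prompt("你是一位台灣的助理。", "請介紹日月潭。"))
```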
Readme
No readme