The model is a quantized version of `Llama-3-Taiwan-70B-Instruct`. More details can be found on its Hugging Face page: https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct
110 Pulls · Updated 4 months ago
433841a0b8d4 · 58GB

model       arch llama · parameters 58.4B · quantization Q6_K     48GB
model       parameters 12.2B · quantization F32                   10.0GB
params      {"num_ctx":8192,"stop":["\u003c|start_header_id|\u003e","\u003c|end_header_id|\u003e","\u003c|end_of     171B
template    "{{ if .System }}<|start_header_id|>system<|end_header_id|>
            {{ .System }}<|eot_id|>{{ end }}{{ if .     257B
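A minimal usage sketch follows, assuming a local Ollama server on its default port and a hypothetical model tag (replace it with the tag actually published for this model). The `num_ctx` and `stop` options simply mirror the params layer listed above, and the stored template wraps the messages in the Llama 3 `<|start_header_id|>`/`<|eot_id|>` prompt format on the server side, so no manual prompt assembly is needed.

```python
# Minimal sketch: querying the model through a local Ollama server's /api/chat endpoint.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"   # default Ollama listen address
MODEL_TAG = "llama-3-taiwan-70b-instruct-q6_k"   # hypothetical tag name, an assumption

payload = {
    "model": MODEL_TAG,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant that answers in Traditional Chinese."},
        {"role": "user", "content": "Briefly introduce Taiwan's night market culture."},
    ],
    # These options mirror the params layer shown above: an 8192-token context
    # window and the Llama 3 header strings as stop sequences (the params layer
    # lists a third stop string, truncated in the listing above).
    "options": {
        "num_ctx": 8192,
        "stop": ["<|start_header_id|>", "<|end_header_id|>"],
    },
    "stream": False,  # return a single JSON object instead of a token stream
}

response = requests.post(OLLAMA_URL, json=payload, timeout=600)
response.raise_for_status()
print(response.json()["message"]["content"])
```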
Readme
No readme