The model is a quantized version of `Llama-3-Taiwan-70B-Instruct`. More details can be found on the Hugging Face model page (https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct).
189 Pulls · Updated 9 months ago
c05742599319 · 50GB
model · 50GB
arch llama · parameters 70.6B · quantization Q5_K_M
params · 171B
{
  "num_ctx": 8192,
  "stop": [
    "<|start_header_id|>",
    "<|end_header_id|>",
template · 257B
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .
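The params and template blobs above are the kind of settings an Ollama Modelfile declares. A minimal sketch of how they map onto Modelfile directives — the GGUF path is hypothetical, and both the stop-token list and the template are truncated on this page, so only the visible entries are reproduced:

```
# Hypothetical local GGUF path; the page shows only the blob digest c05742599319
FROM ./llama-3-taiwan-70b-instruct-q5_k_m.gguf

# Context window and the stop tokens visible in the (truncated) params blob
PARAMETER num_ctx 8192
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
```

The template blob would be supplied with a `TEMPLATE """…"""` directive using the Go-template snippet shown above, but since the page truncates it at 257B, it is omitted here rather than guessed at.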
Readme: none provided.