cwchang/llama-3-taiwan-70b-instruct:q4_k_m
254 Downloads · Updated 1 year ago
This model is a quantized version of `Llama-3-Taiwan-70B-Instruct`. More details can be found at https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct
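To try the model locally, a minimal sketch using the official `ollama` Python client might look like the following. It assumes an Ollama installation with the server running; the prompt text is purely illustrative.

```python
# Minimal sketch: pull and query this model via the ollama Python client.
# Assumes the Ollama server is running locally (e.g. started with `ollama serve`).
import ollama

MODEL = "cwchang/llama-3-taiwan-70b-instruct:q4_k_m"

# Download the quantized weights (about 43GB) if they are not already present.
ollama.pull(MODEL)

# Send a single chat turn; the prompt below is an illustrative example only.
response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "請用繁體中文簡短介紹台灣。"}],
)
print(response["message"]["content"])
```

The same tag can also be run interactively from the command line with `ollama run cwchang/llama-3-taiwan-70b-instruct:q4_k_m`.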
b6e1700e2be8 · 43GB

model · arch llama · parameters 70.6B · quantization Q4_K_M · 43GB
params · { "num_ctx": 8192, "stop": [ "<|start_header_id|>", "<|end_header_id|>", … · 171B
template · "{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .P… · 257B
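The `params` layer sets a default context window (`num_ctx` of 8192) and uses Llama 3 header tokens as stop sequences, while the `template` layer wraps the system and user messages in the Llama 3 chat format. These defaults can be overridden per request through the client's `options` field; a hedged sketch follows (the temperature value is an assumption, and only the stop tokens visible in the truncated params above are repeated):

```python
# Sketch: overriding defaults from the "params" layer for a single request.
import ollama

response = ollama.generate(
    model="cwchang/llama-3-taiwan-70b-instruct:q4_k_m",
    prompt="台灣最高的山是哪一座?",  # illustrative prompt
    options={
        "num_ctx": 8192,      # default context length from the params layer
        "temperature": 0.7,   # assumed value, not taken from this page
        # Only the stop tokens shown in the truncated params are listed here.
        "stop": ["<|start_header_id|>", "<|end_header_id|>"],
    },
)
print(response["response"])
```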
Readme
No readme