This model is a quantized version of `Llama-3-Taiwan-8B-Instruct-128k`. More details are available at https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct-128k.
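
As a rough usage sketch, the quantized build can be queried locally through the official `ollama` Python client once it has been pulled. The model tag used below is a placeholder, not the published tag, and the context-window option is optional; adjust both to your setup.

```python
# Minimal sketch: chatting with the locally pulled quantized model via the
# official `ollama` Python client (pip install ollama).
import ollama

# Placeholder tag; replace with the tag this quantized build is published under.
MODEL_TAG = "llama-3-taiwan-8b-instruct-128k"

response = ollama.chat(
    model=MODEL_TAG,
    messages=[
        {"role": "user", "content": "請用繁體中文簡短介紹台灣。"},
    ],
    # Optional: raise the context window; the base model supports up to 128k tokens.
    options={"num_ctx": 32768},
)

print(response["message"]["content"])
```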

8B

