latest · 13B · 14GB
Model from Taiwan-LLaMa GGUF
c1a6510cd7f5 · 14GB
model · arch llama · parameters 13.0B · quantization Q8_0 · 14GB
Readme
tw-llama2
The model files are originally from https://huggingface.co/audreyt/Taiwan-LLaMa-v1.0-GGUF/tree/main.
Versions
latest: https://huggingface.co/audreyt/Taiwan-LLaMa-v1.0-GGUF/blob/main/Taiwan-LLaMa-13b-1.0.Q8_0.gguf
Run the model
ollama run markliou/tw-llama2
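Besides the CLI, the model can also be queried programmatically through Ollama's HTTP API. The snippet below is a minimal sketch, assuming a local Ollama server is running on the default port 11434 and the model has already been pulled; the prompt text is only illustrative.

```python
import json
import urllib.request

# Assumes a local Ollama server on the default port (11434)
# and that `ollama run markliou/tw-llama2` (or `ollama pull`) has been run once.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "markliou/tw-llama2",
    "prompt": "請用繁體中文介紹台灣的夜市文化。",  # example prompt in Traditional Chinese
    "stream": False,  # return the full completion as a single JSON object
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))

print(result["response"])
```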