Llama 3.2, pre-trained on Tamil Wikipedia and fine-tuned on the Tamil Alpaca dataset
2 Pulls · Updated 13 days ago
33e73ef45354 · 2.2GB

model       arch llama · parameters 3.61B · quantization Q4_K_M        2.2GB
params      {"stop": ["<|start_header_id|>", "<|eot_id|>", "<|end_header_id|>", …        157B
template    Below are some instructions that describe some tasks. Write responses that appropriately complete ea…        214B
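
Because the stop tokens and the Alpaca-style instruction template above are baked into the model blob, a plain prompt is enough when calling it through Ollama's local REST API. The sketch below is only illustrative: it assumes Ollama is running on its default port, and the model tag `tamil-llama3.2` is a placeholder for whatever tag this page is actually published under.

```python
# Minimal sketch: query the model via Ollama's /api/generate endpoint.
# Assumes a local Ollama server on the default port; the model tag below is hypothetical.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "tamil-llama3.2",                 # placeholder tag for this Q4_K_M build
    "prompt": "தமிழ்நாட்டின் தலைநகரம் எது?",   # "What is the capital of Tamil Nadu?"
    "stream": False,                            # return a single JSON object, not a stream
}

# The baked-in template wraps the prompt in the instruction format shown above,
# and generation stops at the Llama 3 header/eot tokens listed under "params".
response = requests.post(OLLAMA_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["response"])
```

The same request works with `ollama run <tag>` on the command line; the API form is shown here only to make the role of the stored template and stop parameters explicit.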