Llama 3.2 pre-trained on Tamil Wikipedia and fine-tuned on the Tamil Alpaca dataset
11 Pulls Updated 7 weeks ago
bbd78e22e64c · 3.8GB

model
arch: llama · parameters: 3.61B · quantization: Q8_0 · 3.8GB
params · 157B
{
  "stop": [
    "<|start_header_id|>",
    "<|eot_id|>",
    "<|end_header_id|>"
  ]
}
template · 214B
Below are some instructions that describe some tasks. Write responses that appropriately complete each request.
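The params and template blobs above map directly onto an Ollama Modelfile. A minimal sketch of how this model could be reconstructed locally, assuming a GGUF weights file (the filename, the Alpaca-style `### Instruction:` / `### Response:` section headers, and the wording after "complete" are assumptions, not taken from this page):

```
# Hypothetical Modelfile for this model; the FROM path is a placeholder.
FROM ./llama-3.2-tamil.Q8_0.gguf

# Stop sequences, as listed in the params blob above
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|end_header_id|>"

# Instruction template; the prompt text here follows the template blob above
TEMPLATE """Below are some instructions that describe some tasks. Write responses that appropriately complete each request.

### Instruction:
{{ .Prompt }}

### Response:
"""
```

With such a Modelfile, the model could be built and run with `ollama create <model-name> -f Modelfile` followed by `ollama run <model-name>`.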