
Model fine-tuned from Llama 3.1 8B using a full-precision LoRA (he20, rank 64, alpha 16).
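As a hedged sketch of how the stated LoRA hyperparameters combine: LoRA learns a low-rank update scaled by alpha/rank, so rank 64 with alpha 16 scales the adapter update by 0.25. The helper name and the tiny vectors below are illustrative, not from the model card.

```python
# LoRA hyperparameters as stated on the model card.
r, alpha = 64, 16
scaling = alpha / r  # the low-rank update B @ A is scaled by alpha/r = 0.25

def merge_weight(w, b_row, a_col):
    """Illustrative single-entry merge: w' = w + scaling * (B @ A) at one position.

    b_row is one row of B (length r-slice), a_col the matching column of A.
    """
    return w + scaling * sum(bk * ak for bk, ak in zip(b_row, a_col))
```

A higher alpha (or lower rank) makes the adapter's contribution stronger relative to the frozen base weights; alpha 16 over rank 64 is a fairly gentle scaling.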

tools
{
  "stop": [
    "<|start_header_id|>",
    "<|end_header_id|>",
    "<|eot_id|>"
  ]
}