latest · 3.2GB
Fine-tuning Llama3-8B-Instruct on a private dataset
8B
13 Pulls · Updated 3 weeks ago
e2fbcd4265ed · 3.2GB
model
arch llama · parameters 8.03B · quantization Q2_K · 3.2GB
template
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>
254B
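The template above is in Go's `text/template` syntax, the engine Ollama uses to assemble prompts. Below is a minimal sketch of how it renders: the struct fields mirror the only variables the template references (`System`, `Prompt`, `Response`), and the example values are hypothetical, not from the model card.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// vars mirrors the fields the chat template references.
type vars struct {
	System, Prompt, Response string
}

// chatTmpl is the template from the model card, verbatim.
const chatTmpl = `{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>`

// render fills the chat template with the given values.
func render(v vars) string {
	t := template.Must(template.New("chat").Parse(chatTmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, v); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Hypothetical example values; the {{ if }} guards drop the
	// system block entirely when System is empty.
	fmt.Print(render(vars{
		System: "You are a helpful assistant.",
		Prompt: "Hello!",
	}))
}
```

Note that an empty `System` skips the whole system block, and `Response` is left empty at generation time so the model continues after the assistant header.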
params
{
  "repeat_penalty": 1.3,
  "stop": ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"],
  "temperature": 0.6,
  "top_k": 30,
  "top_p": 0.8
}
158B
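These params correspond to `PARAMETER` lines in an Ollama Modelfile. A sketch of the equivalent fragment (a `FROM` line naming the base weights would precede it; it is omitted here rather than guessed):

```
# Sampling and stopping parameters from the params blob above.
PARAMETER temperature 0.6
PARAMETER top_k 30
PARAMETER top_p 0.8
PARAMETER repeat_penalty 1.3
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```

Each `stop` entry is declared on its own `PARAMETER` line; Ollama collects repeated `stop` parameters into the list shown in the JSON.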
Readme
Trying different models for lighter use