latest · 4.9GB · 8B
Llama 3.1 fine-tuned on the SEC 10-Q dataset
13 Pulls · Updated 4 weeks ago
251619e4de0c · 4.9GB
model
arch llama · parameters 8.03B · quantization Q4_K_M
4.9GB
template
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>
254B
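This is the stock Llama 3.1 chat template: each turn is wrapped in header and end-of-turn special tokens, and the prompt ends with an open assistant header for the model to complete. A minimal Python sketch of the same rendering, for illustration only (Ollama applies the template server-side; the helper name is made up):

def render_prompt(system: str, prompt: str) -> str:
    # Mirrors the Modelfile template above: optional system turn,
    # optional user turn, then an open assistant header.
    out = ""
    if system:
        out += f"<|start_header_id|>system<|end_header_id|>\n{system}<|eot_id|>"
    if prompt:
        out += f"<|start_header_id|>user<|end_header_id|>\n{prompt}<|eot_id|>"
    out += "<|start_header_id|>assistant<|end_header_id|>\n"
    return out

print(render_prompt("Provide a short and precise answer, given the following context:",
                    "What is the filing period?"))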
params
{"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>","<|reserved_special_token"]}
128B
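These stop sequences halt generation at turn boundaries so the model does not run on into the next header. They can also be overridden per request through the options field of the Ollama REST API; a sketch follows, where the model tag is a placeholder for this model's actual name:

import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1-sec10q:latest",  # placeholder tag; use this model's real name
        "prompt": "Summarize the risk factors section.",
        "stream": False,
        # Same stop tokens as the params blob above, passed per request.
        "options": {"stop": ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"]},
    },
    timeout=120,
)
print(resp.json()["response"])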
system
Provide a short and precise answer, given the following context:
65B
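The system prompt implies a retrieval-style workflow: supply the relevant 10-Q excerpt as context ahead of the question. A minimal sketch against the Ollama chat endpoint (the baked-in system prompt is applied from the Modelfile; the model tag and excerpt below are placeholders):

import requests

context = "(retrieved 10-Q excerpt goes here)"
question = "How did net revenue change quarter over quarter?"

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1-sec10q:latest",  # placeholder tag
        "messages": [{"role": "user", "content": f"{context}\n\n{question}"}],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])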
Readme
No readme