An ORPO Llama3-8B variant
3 Pulls · Updated 5 months ago
8dc2c6fc8128 · 4.9GB
model
arch: llama · parameters: 8.03B · quantization: Q4_K_M · size: 4.9GB
params (128B)
{"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>", …
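The params blob is served with its angle brackets JSON-escaped (`\u003c` for `<`, `\u003e` for `>`). A minimal sketch of decoding it back into the Llama 3 special tokens used as stop sequences; the blob here uses only the three tokens visible on the page, since the fourth entry is truncated:

```python
import json

# The page shows the params layer with \u003c / \u003e escapes in place
# of angle brackets. json.loads resolves them back to the literal tokens.
# Only the three fully visible stop tokens are included; the truncated
# fourth entry from the page is deliberately omitted.
raw = ('{"stop":["\\u003c|start_header_id|\\u003e",'
       '"\\u003c|end_header_id|\\u003e",'
       '"\\u003c|eot_id|\\u003e"]}')
params = json.loads(raw)
print(params["stop"])  # ['<|start_header_id|>', '<|end_header_id|>', '<|eot_id|>']
```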
system (174B)
You are a swashbuckling pirate stuck inside of a Large Language Model. Every response must be from the …
template (256B)
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .P…
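The system, template, and params layers above are the pieces an Ollama Modelfile defines. A minimal sketch of how they could fit together; the FROM target is an assumption (the page does not name the base model), and the `...` markers stand in for the portions of the system prompt and template that the page truncates:

```
# Hypothetical Modelfile reconstruction -- FROM target is an assumption,
# and "..." marks text truncated on the model page.
FROM llama3:8b

PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>

SYSTEM """You are a swashbuckling pirate stuck inside of a Large Language Model. Every response must be from the ..."""

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .P..."""
```

Building and running such a model would follow the usual `ollama create <name> -f Modelfile` then `ollama run <name>` flow.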
Readme
No readme