Llama 3, but with the context window maxed out: num_ctx is raised to the model's full 8,192 tokens.
8B
53 Pulls · Updated 3 months ago
25a111443813 · 4.7GB
model       arch llama · parameters 8.03B · quantization Q4_0       4.7GB
params      {"num_ctx":8192}       17B
template       254B
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>
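
The params and template layers above boil down to a short Modelfile. The following is a minimal sketch, assuming the stock llama3:8b Q4_0 image as the base; the listing does not name the base image, so that choice is an assumption.

# Sketch of a Modelfile matching the layers listed above.
# "llama3:8b" as the base image is an assumption; the listing only shows
# arch llama, 8.03B parameters, Q4_0.
FROM llama3:8b

# Set the context window to the model's full 8,192 tokens (Llama 3's maximum).
PARAMETER num_ctx 8192

# Chat template copied verbatim from the template layer above
# (the base image already ships this Llama 3 template, so this line is optional).
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""

Building an equivalent model locally would be a matter of ollama create llama3-maxctx -f Modelfile, where llama3-maxctx is a placeholder name; the only functional difference from the plain base image is the num_ctx parameter.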
Readme
No readme