q4_k_m quantization only. Now using an 8k context size.
83 Pulls · Updated 12 months ago
421ffc18355b · 4.4GB

model: arch llama · parameters 7.24B · quantization Q4_K_M (4.4GB)
system: You are a helpful AI assistant. (31B)
template: {{- if .First }}{{ .System }} <|end_of_turn>{{- end}}GPT4 Correct User: {{ .Prompt }}<|end_of_turn|> (123B)
params: {"num_ctx":8192,"stop":["<|end_of_turn|>","<|end\\_of\\_turn|>","<|end_of\\ … (190B)
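These layers can also be inspected locally once the model has been pulled; for example, `ollama show <model-name> --modelfile` prints the assembled Modelfile and `ollama show <model-name> --parameters` prints just the parameter layer (`<model-name>` is a placeholder for whatever tag this model is pulled under).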
Readme
I’ve only uploaded the q4_k_m quantization.
2023.11.06: Updated the modelfile with `PARAMETER num_ctx 8192`.
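For reference, the layer listing above corresponds to a Modelfile along these lines. This is only a sketch reconstructed from what the page displays, not the exact file used for the upload: the FROM path is a placeholder, the template is copied only as far as it is shown, and only the stop token that is fully visible is included.

```
# Hypothetical Modelfile sketch reconstructed from the layer listing above;
# the FROM path is a placeholder for the actual Q4_K_M GGUF file.
FROM ./model-q4_k_m.gguf

# System prompt layer
SYSTEM """You are a helpful AI assistant."""

# Prompt template layer, copied as displayed above (the stored layer is 123B,
# so the on-page display may be truncated)
TEMPLATE """{{- if .First }}{{ .System }} <|end_of_turn>{{- end}}GPT4 Correct User: {{ .Prompt }}<|end_of_turn|>"""

# Parameter layer: 8k context window plus the end-of-turn stop token
PARAMETER num_ctx 8192
PARAMETER stop "<|end_of_turn|>"
```

Building it with `ollama create <model-name> -f Modelfile` (where `<model-name>` is a placeholder tag) would reproduce the layer structure shown above.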