q4_k_m quantization only. Now using an 8k context size.
83 Pulls · Updated 13 months ago
421ffc18355b · 4.4GB
model (4.4GB)
arch: llama · parameters: 7.24B · quantization: Q4_K_M
system (31B)
You are a helpful AI assistant.
template (123B)
{{- if .First }}{{ .System }} <|end_of_turn|>{{- end}}GPT4 Correct User: {{ .Prompt }}<|end_of_turn|>
params (190B)
{
  "num_ctx": 8192,
  "stop": [
    "<|end_of_turn|>",
    "<|end\\_of\\_turn|>",
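These baked-in defaults apply whenever the model is run, but they can also be overridden per request. As a minimal sketch (the model tag and prompt below are placeholders, not values taken from this page), the standard Ollama HTTP API accepts an options object whose fields take precedence over the modelfile parameters:

curl http://localhost:11434/api/generate -d '{
  "model": "your-model-tag",
  "prompt": "Why is the sky blue?",
  "options": {
    "num_ctx": 8192
  }
}'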
Readme
I've only uploaded the -q4_k_m quantization.
2023.11.06: Updated the modelfile with:
PARAMETER num_ctx 8192
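For reference, here is a minimal Modelfile sketch reassembling the pieces shown above (system prompt, template, and parameters). The FROM line is a placeholder for the underlying GGUF weights, which are not named on this page, and the stop list may be incomplete since only two stop strings are visible in the params section:

# Placeholder path; substitute the actual q4_k_m GGUF file.
FROM ./model-q4_k_m.gguf

TEMPLATE """{{- if .First }}{{ .System }} <|end_of_turn|>{{- end}}GPT4 Correct User: {{ .Prompt }}<|end_of_turn|>"""

SYSTEM """You are a helpful AI assistant."""

PARAMETER num_ctx 8192
PARAMETER stop "<|end_of_turn|>"
PARAMETER stop "<|end\_of\_turn|>"

Rebuilding with ollama create <name> -f Modelfile and then ollama run <name> should reproduce behavior equivalent to this upload.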