NousResearch/Hermes-2-Pro-Llama-3-8B
88.3K Pulls Updated 8 months ago
182fb5b60d82 · 16GB
model: arch llama · parameters 8.03B · quantization F16 · 16GB
license: apache-2.0 · 10B
params · 59B
{
  "stop": [
    "<|im_start|>",
    "<|im_end|>"
  ]
}
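As a minimal sketch of what the `stop` parameters above do: an inference client typically truncates the model's raw output at the first occurrence of any stop string. The function name below is hypothetical, not part of Ollama's API.

```python
# Sketch: applying the stop sequences from the params JSON above.
# Output is cut at the first occurrence of any stop string.
STOP = ["<|im_start|>", "<|im_end|>"]

def truncate_at_stop(text: str, stops=STOP) -> str:
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

print(truncate_at_stop("The answer is 42.<|im_end|>\n<|im_start|>user"))
# → The answer is 42.
```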
template · 155B
{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
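To illustrate how the Go template shown above renders a prompt, here is a rough Python equivalent. It assumes the truncated remainder of the template follows the standard ChatML pattern implied by the stop tokens (a user block mirroring the system block, ending with an open assistant turn); the function name is illustrative only.

```python
# Sketch: Python equivalent of the ChatML Go template above,
# assuming the standard ChatML user/assistant continuation.
def render_chatml(prompt: str, system: str = "") -> str:
    out = ""
    if system:
        out += f"<|im_start|>system\n{system}<|im_end|>\n"
    if prompt:
        out += f"<|im_start|>user\n{prompt}<|im_end|>\n"
    # Leave an open assistant turn for the model to complete.
    out += "<|im_start|>assistant\n"
    return out

print(render_chatml("Hello!", system="You are a helpful assistant."))
```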
Readme
⚠️ You probably want to use the newer and better adrienbrault/nous-hermes2theta-llama3-8b instead.
Ollama models of NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF, uploaded with https://github.com/adrienbrault/hf-gguf-to-ollama.