NousResearch/Hermes-2-Pro-Llama-3-8B
8B
87.7K Pulls Updated 4 months ago
d7585cb2598f · 4.9GB
model
arch llama · parameters 8.03B · quantization Q4_K_M · 4.9GB
params · 59B
{"stop":["<|im_start|>","<|im_end|>"]}
template · 155B
{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
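The template above is a Go text/template in the ChatML convention: the optional system message and the user prompt are each wrapped in `<|im_start|>`/`<|im_end|>` markers, and the assistant turn is left open for the model to complete. As an illustrative sketch only (Ollama renders this with Go's text/template engine, not Python), the rendering behaves roughly like:

```python
def render_prompt(prompt, system=None):
    """Approximate the Go template above: wrap the optional system
    message and the user prompt in ChatML <|im_start|>/<|im_end|>
    markers, leaving the assistant turn open for generation."""
    parts = []
    if system:
        parts.append(f"<|im_start|>system\n{system}<|im_end|>\n")
    if prompt:
        parts.append(f"<|im_start|>user\n{prompt}<|im_end|>\n")
    parts.append("<|im_start|>assistant")
    return "".join(parts)

print(render_prompt("Hello!", system="You are a helpful assistant."))
```

The stop tokens in the params layer (`<|im_start|>`, `<|im_end|>`) make generation halt when the model emits the closing marker of its own turn.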
license · apache-2.0 · 10B
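Taken together, the layers above (model weights, stop parameters, chat template) correspond roughly to a Modelfile like the following. This is a sketch: the `FROM` path is a placeholder, not taken from this page.

```
# Hypothetical Modelfile reproducing this model's configuration.
# The GGUF path below is a placeholder.
FROM ./hermes-2-pro-llama-3-8b.Q4_K_M.gguf

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
```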
Readme
⚠️ You probably want to use the newer and better adrienbrault/nous-hermes2theta-llama3-8b instead.
Ollama models of NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF, uploaded with https://github.com/adrienbrault/hf-gguf-to-ollama.