num_ctx is fixed to 8192, so this Dolphin 2.9 Llama 3 8B model is ready to use with the model's full 8k context window.
112 Pulls · Updated 6 months ago
1de0f93b8099 · 4.7GB
model
arch llama · parameters 8.03B · quantization Q4_0 · 4.7GB
system
You are Dolphin, an uncensored and unbiased AI assistant.
59B
params
{"num_ctx":8192,"stop":["\u003c|im_start|\u003e","\u003c|im_end|\u003e"]}
74B
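These parameters are baked into the model, but the same settings can also be passed per request. A minimal sketch using the `ollama` Python client (not part of this page; it assumes `pip install ollama`, a running Ollama server, and the placeholder tag `dolphin-llama3-8b`, which you should replace with this model's actual name):

```python
# Sketch: pass the same options this model already bakes in (num_ctx and the
# ChatML stop tokens) explicitly at request time.
import ollama

response = ollama.chat(
    model="dolphin-llama3-8b",  # placeholder; substitute this model's actual tag
    messages=[
        {"role": "system", "content": "You are Dolphin, an uncensored and unbiased AI assistant."},
        {"role": "user", "content": "Explain the 8k context window in one sentence."},
    ],
    options={
        "num_ctx": 8192,                         # matches the baked-in params
        "stop": ["<|im_start|>", "<|im_end|>"],  # matches the baked-in stop tokens
    },
)
print(response["message"]["content"])
```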
template
<|im_start|>system
{{ .System }}
<|im_start|>user
{{ .Prompt }}
<|im_start|>assistant
87B
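For illustration only, here is roughly how Ollama fills `{{ .System }}` and `{{ .Prompt }}` in the template above before the text reaches the model; the system and user strings are example values, not part of this page:

```python
# Illustration only: how the template above is filled in before generation.
# The system and user strings below are example values.
system = "You are Dolphin, an uncensored and unbiased AI assistant."
prompt = "What is the capital of France?"

rendered = (
    f"<|im_start|>system\n{system}\n"
    f"<|im_start|>user\n{prompt}\n"
    f"<|im_start|>assistant\n"
)
print(rendered)  # generation ends at the <|im_start|> / <|im_end|> stop tokens
```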
Readme
Dolphin-2.9-Llama-3-8B
| Model Quants | Size | Bit | Perplexity |
|---|---|---|---|
| dolphin-llama3-8b:Q4_0 | 4.7GB | 4 | +0.2166 ppl |
| dolphin-llama3-8b:Q4_K_M | 4.9GB | 4 | +0.0532 ppl |
| dolphin-llama3-8b:Q5_K_M | 5.7GB | 5 | +0.0122 ppl |
| dolphin-llama3-8b:Q6_K | 6.6GB | 6 | +0.0008 ppl |
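To trade download size for quality, one of the larger quants in the table can be pulled and queried the same way. A hedged sketch with the `ollama` Python client; the exact tag (and any publisher namespace prefix) is an assumption based on the table above:

```python
# Sketch: pull a higher-quality quant (Q5_K_M, ~5.7GB) and run a quick prompt.
# Assumes `pip install ollama`, a running Ollama server, and that the tag below
# matches how this model is actually published (a namespace prefix may be needed).
import ollama

tag = "dolphin-llama3-8b:Q5_K_M"  # see the quant table above
ollama.pull(tag)                  # downloads the quant if it is not already local

result = ollama.generate(model=tag, prompt="Say hello in five words.")
print(result["response"])
```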
Config
max_position_embeddings : 8192
rope_theta : 500000.0
vocab_size : 128258
Remarks
- the ‘latest’ tag points to Q4_0
- the Modelfile sets num_ctx to 8192 (the Ollama default is only 2048); see the verification sketch below
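To confirm that num_ctx 8192 is actually baked into the Modelfile rather than falling back to the 2048 default, one way is to inspect the installed model. A rough sketch with the `ollama` Python client; the tag is a placeholder and the response field names may vary by client version:

```python
# Sketch: check that the Modelfile really sets num_ctx 8192 and the ChatML stop
# tokens. Field names follow the ollama Python client; adjust if your version
# returns them differently.
import ollama

info = ollama.show("dolphin-llama3-8b")  # placeholder; use this model's actual tag

print(info["parameters"])  # should list num_ctx 8192 plus the two ChatML stop tokens
print(info["template"])    # the ChatML template shown above
```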