jefferyb/granite:7b-lab-Q4_K_M
58 Downloads · Updated 1 year ago
4-bit quantized version of instructlab/granite-7b-lab (https://huggingface.co/instructlab/granite-7b-lab-GGUF)
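This tag can be fetched and run locally with the Ollama CLI (ollama pull jefferyb/granite:7b-lab-Q4_K_M, then ollama run jefferyb/granite:7b-lab-Q4_K_M); the quantized weights take about 4.1GB of disk space.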
b9efffa662cf · 4.1GB
model       arch llama · parameters 6.74B · quantization Q4_K_M        4.1GB
params      { "num_ctx": 2048, "stop": [ "<|im_end|>", "<|endoftext|>", "</|im_e…        99B
template    {{ if .System }}<|im_start|>system {{ .System }}<|im_end|> {{ end }}{{ if .Prompt }}<|im_start|>user…        182B
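The params layer sets a default context window of 2048 tokens and ChatML-style stop sequences, and the template layer wraps the system and user turns in <|im_start|>/<|im_end|> markers. Both are applied server-side when the tag is called through Ollama's REST API, so a client only sends plain role/content messages; per-request overrides go in the options field. A minimal sketch, assuming a local Ollama server on its default port 11434 and the third-party requests package (neither is part of this page):

    # Minimal sketch: chat with this tag via Ollama's /api/chat endpoint.
    # The ChatML-style template stored in the tag is applied by the server,
    # so only plain role/content messages are sent; "options" overrides the
    # defaults from the params layer for this request only.
    import requests

    response = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "jefferyb/granite:7b-lab-Q4_K_M",
            "messages": [
                {"role": "system", "content": "You are a concise assistant."},
                {"role": "user", "content": "What does 4-bit quantization trade away?"},
            ],
            "options": {
                "num_ctx": 4096,                          # tag default is 2048
                "stop": ["<|im_end|>", "<|endoftext|>"],  # stop sequences from the params layer
            },
            "stream": False,  # return one JSON object instead of a stream of chunks
        },
        timeout=300,
    )
    response.raise_for_status()
    print(response.json()["message"]["content"])

The same defaults can also be baked into a derived tag with a Modelfile, which is how the params and template layers shown above are typically produced.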
Readme
No readme