latest · 1.8GB
4-bit quant of https://huggingface.co/smangrul/starcoder-3b-hugcoder-loftq
3B · 17 pulls · Updated 6 months ago
e8c71e83e771 · 1.8GB
model
arch starcoder2 · parameters 3.03B · quantization Q4_K_M · 1.8GB
template
{{ .Prompt }}
params
{"temperature":0.2}
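The template and params above map directly onto an Ollama Modelfile. A minimal sketch of how this model could be recreated locally — the GGUF filename in the `FROM` line is an assumption, not taken from this page:

```
# Hypothetical Modelfile reconstructing this model's settings.
# The FROM path is an assumption; point it at the actual Q4_K_M GGUF file.
FROM ./starcoder-3b-hugcoder-loftq.Q4_K_M.gguf

# Raw prompt pass-through, as shown in the template section above.
TEMPLATE "{{ .Prompt }}"

# Matches params: {"temperature":0.2}
PARAMETER temperature 0.2
```

It could then be built and run with `ollama create <model-name> -f Modelfile` followed by `ollama run <model-name>` (the model name here is a placeholder).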
Readme
No readme