sroecker/granite-code:8b-instruct-128k-q8_0
12 Downloads · Updated 1 year ago
3daca62af646 · 8.6GB

model (8.6GB): arch llama · parameters 8.05B · quantization Q8_0
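The 8.05B parameters quantized to Q8_0 account for the 8.6GB model layer. A minimal sketch of calling this tag from Python, assuming the ollama Python package and a local Ollama server that has already pulled sroecker/granite-code:8b-instruct-128k-q8_0 (the prompt is only a placeholder):

```python
# Sketch: query the Q8_0 Granite Code tag through a local Ollama server.
# Assumes `pip install ollama` and a prior
# `ollama pull sroecker/granite-code:8b-instruct-128k-q8_0`.
import ollama

response = ollama.generate(
    model="sroecker/granite-code:8b-instruct-128k-q8_0",
    prompt="Write a function that checks whether a string is a palindrome.",
)
print(response["response"])
```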
params (108B): { "stop": [ "<fim_prefix>", "<fim_middle>", "<fim_suffix>", "<fi…
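The stored parameters (truncated in the preview above) define stop strings, apparently the Granite Code fill-in-the-middle special tokens. A small sketch, assuming the ollama Python package and a local server with this tag pulled, for reading the full, untruncated parameter block:

```python
# Sketch: inspect the model's stored parameters, since the 108B preview
# above is cut off; assumes the ollama Python package and a local Ollama
# server that already has this tag.
import ollama

info = ollama.show("sroecker/granite-code:8b-instruct-128k-q8_0")
print(info["parameters"])  # full stop-token list as stored with the model
```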
license (545B): Copyright 2024 IBM Licensed under the Apache License, Version 2.0 (the "License"); you may not use t…
template (123B): {{ if .System }}System: {{ .System }} {{ end }}{{ if .Prompt }}Question: {{ .Prompt }} {{ end }}Answ…
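The visible part of the template wraps a request in a System/Question layout; the stored value is cut off at 123 bytes, so its tail is not shown here. A rough Python mirror of only the visible fragment, purely illustrative (render_prompt is a hypothetical helper, not part of Ollama, which renders the Go template server-side):

```python
# Illustrative mirror of the visible template fragment:
# {{ if .System }}System: {{ .System }} {{ end }}{{ if .Prompt }}Question: {{ .Prompt }} {{ end }}...
def render_prompt(system: str | None, prompt: str | None) -> str:
    parts = []
    if system:
        parts.append(f"System: {system} ")
    if prompt:
        parts.append(f"Question: {prompt} ")
    return "".join(parts)

print(render_prompt("You are a helpful coding assistant.", "Reverse a list in Python."))
```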
Readme
No readme