Tanuki-8B is an LLM with about 8B parameters, pre-trained from scratch on about 1.3T tokens.


Template (1929ad282f33 · 128B):
{{ if .System }}<s>{{ .System }} {{ end }}{{ if .Prompt }}### 指示: {{ .Prompt }} {{ end }}### 応答: {{ .Response }}
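
As a sketch of how this template expands at generation time ("### 指示:" means "Instruction" and "### 応答:" means "Response"), the small Python helper below mirrors the Go template above; the function name and the Japanese example string are illustrative, not from the model page.

```python
def render_prompt(system: str = "", prompt: str = "") -> str:
    """Mirror the Ollama template: optional <s>-prefixed system message,
    then an instruction block ('### 指示:'), then the response cue ('### 応答:')."""
    rendered = ""
    if system:
        rendered += f"<s>{system} "         # {{ if .System }}<s>{{ .System }} {{ end }}
    if prompt:
        rendered += f"### 指示: {prompt} "  # {{ if .Prompt }}### 指示: {{ .Prompt }} {{ end }}
    rendered += "### 応答: "                # the model continues generating after this cue
    return rendered

if __name__ == "__main__":
    print(render_prompt(prompt="日本の首都はどこですか?"))
    # -> ### 指示: 日本の首都はどこですか? ### 応答:
```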