Tanuki-8B is an LLM with about 8B parameters, pre-trained from scratch on about 1.3T tokens.
3,430 Pulls · Updated 6 months ago
1929ad282f33 · 128B
{{ if .System }}<s>{{ .System }}
{{ end }}{{ if .Prompt }}### 指示:
{{ .Prompt }}
{{ end }}### 応答:
{{ .Response }}
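The Go template above defines the prompt format the model expects: the system message wrapped in `<s>`, the user prompt under a `### 指示:` ("instruction") header, and the model's output under `### 応答:` ("response"). As a minimal sketch, the following hypothetical Python helper (`render_prompt` is not part of Ollama) shows how the template expands for a given system message and prompt:

```python
def render_prompt(system: str, prompt: str, response: str = "") -> str:
    """Mimic the Go template's expansion of System/Prompt/Response."""
    out = ""
    if system:
        # {{ if .System }}<s>{{ .System }}\n{{ end }}
        out += f"<s>{system}\n"
    if prompt:
        # {{ if .Prompt }}### 指示:\n{{ .Prompt }}\n{{ end }}
        out += f"### 指示:\n{prompt}\n"
    # ### 応答:\n{{ .Response }}
    out += f"### 応答:\n{response}"
    return out

print(render_prompt("あなたは親切なアシスタントです。", "こんにちは"))
```

The Japanese markers are literal strings the model was trained on, so they must be sent verbatim rather than translated.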