A Korean fine-tuned version of deepseek-r1 by UNIVA and the Bllossom team.
8b
70b
672 Pulls · Updated 8 weeks ago
556304501de2 · 43GB
model
  arch llama · parameters 70.6B · quantization Q4_K_M · 43GB
params
  { "stop": ["<|begin▁of▁sentence|>", "<|end▁of▁sentence|>", "<|U…"] } (truncated in this listing) · 132B
template
  {{- if .System }}{{ .System }}{{ end }}
  {{- range $i, $_ := .Messages }}
  {{- $last := eq (len (slice … (truncated in this listing) · 387B
license
  MIT License · Copyright (c) 2023 DeepSeek · "Permission is hereby granted, free of charge, to any perso…" (truncated in this listing) · 1.1kB
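The params, template, and license entries above are truncated in this page listing. Assuming the model has been pulled locally, the full values can be printed with the Ollama CLI's show command; the model name below is a placeholder, so substitute the name and tag this model is published under:

  # Print the full stop parameters, chat template, and license of the local model
  ollama show deepseek-llama-bllossom:70b --parameters
  ollama show deepseek-llama-bllossom:70b --template
  ollama show deepseek-llama-bllossom:70b --license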
Readme
This model was fine-tuned for Korean by UNIVA and the Bllossom team.
In this model, most instances of Chinese text mixing into Korean responses have been resolved.
You can find the original models on Hugging Face at the following links:
* DeepSeek-llama3.3-Bllossom-70B
* DeepSeek-llama3.1-Bllossom-8B
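As a quick start, both variants can be pulled and run with the Ollama CLI. This is a minimal sketch assuming the model is published under the placeholder name deepseek-llama-bllossom with 8b and 70b tags; substitute this page's actual model name:

  # 8B variant: smaller download, lower memory requirements (placeholder model name)
  ollama pull deepseek-llama-bllossom:8b
  ollama run deepseek-llama-bllossom:8b "한국어로 간단히 자기소개를 해주세요."

  # 70B Q4_K_M variant: about 43GB on disk, needs correspondingly more RAM/VRAM
  ollama run deepseek-llama-bllossom:70b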
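Once pulled, the model can also be queried through Ollama's local HTTP API; a minimal sketch, again using the placeholder model name:

  # Single-turn generation against the local Ollama server (default port 11434)
  curl http://localhost:11434/api/generate -d '{
    "model": "deepseek-llama-bllossom:8b",
    "prompt": "딥러닝이 무엇인지 한국어로 설명해 주세요.",
    "stream": false
  }'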