EXAONE 3.5 is a collection of instruction-tuned bilingual (English and Korean) generative models ranging from 2.4B to 32B parameters, developed and released by LG AI Research.
500.6K Pulls 13 Tags Updated 1 year ago
Based on gemma3:27b, created for translating English documents into Korean.
51 Pulls 1 Tag Updated 3 months ago
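A model like the gemma3:27b-based translator above is typically assembled from an Ollama Modelfile. The sketch below is a hypothetical reconstruction, not the author's actual Modelfile: the base tag comes from the listing, while the parameter value and SYSTEM prompt are illustrative assumptions.

```
# Hypothetical Modelfile for an English-to-Korean translator
# (base tag from the listing; prompt and parameters are illustrative)
FROM gemma3:27b
PARAMETER temperature 0.3
SYSTEM """Translate the user's English document into natural Korean, preserving the original formatting."""
```

One would build and run it with `ollama create en-ko-translator -f Modelfile` and `ollama run en-ko-translator` (the name `en-ko-translator` is a placeholder).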
bge-m3-korean from upskyy: a multi-lingual, multi-functionality, multi-granularity embedding model.
12.7K Pulls 1 Tag Updated 1 year ago
llama-3.2-3B-Q4_K_M-Korean
5,697 Pulls 1 Tag Updated 1 year ago
A Korean fine-tuned version of deepseek-r1 by UNIVA and the Bllossom team.
2,295 Pulls 3 Tags Updated 1 year ago
NousResearch/Meta-Llama-3.1-8B-Instruct, fine-tuned for Korean with SFT -> RLHF -> DPO.
2,655 Pulls 2 Tags Updated 1 year ago
chat model
2,322 Pulls 1 Tag Updated 2 years ago
llama-3-Korean-Bllossom-8B-Q4_K_M
1,707 Pulls 1 Tag Updated 1 year ago
DNA 1.0 8B Instruct is a state-of-the-art (SOTA) bilingual language model specifically optimized for Korean while maintaining strong English capabilities.
803 Pulls 5 Tags Updated 1 year ago
Yanolja EEVE Korean model.
656 Pulls 13 Tags Updated 1 year ago
heegyu's EEVE build for Ollama, based on Yanolja's model.
548 Pulls 1 Tag Updated 1 year ago
Mistral-Nemo-Instruct-2407, fine-tuned for Korean with SFT -> DPO.
553 Pulls 1 Tag Updated 1 year ago
EEVE-Korean-Instruct-10.8B
297 Pulls 1 Tag Updated 2 years ago
Korean q4 quantization of google/gemma-2-27b-it, trained with CPT -> SFT -> DPO.
264 Pulls 1 Tag Updated 1 year ago