EXAONE 3.5 is a collection of instruction-tuned bilingual (English and Korean) generative models ranging from 2.4B to 32B parameters, developed and released by LG AI Research.
377.2K Pulls 13 Tags Updated 1 year ago
This model is based on gemma3:27b and was created to translate English documents into Korean.
34 Pulls 1 Tag Updated 2 months ago
llama-3.2-3B-Q4_K_M-Korean
5,520 Pulls 1 Tag Updated 1 year ago
A Korean fine-tuned version of deepseek-r1 by UNIVA and the Bllossom team.
2,185 Pulls 3 Tags Updated 1 year ago
2,288 Pulls 13 Tags Updated 1 year ago
NousResearch/Meta-Llama-3.1-8B-Instruct Korean fine-tuned model with SFT->RLHF->DPO
2,540 Pulls 2 Tags Updated 1 year ago
Multi-Linguality, Multi-Functionality, Multi-Granularity: bge-m3-korean from upskyy
2,292 Pulls 1 Tag Updated 1 year ago
chat model
2,181 Pulls 1 Tag Updated 1 year ago
llama-3-Korean-Bllossom-8B-Q4_K_M
1,558 Pulls 1 Tag Updated 1 year ago
DNA 1.0 8B Instruct is a state-of-the-art (SOTA) bilingual language model specifically optimized for the Korean language while maintaining strong English capabilities.
789 Pulls 5 Tags Updated 1 year ago
Mistral-Nemo-Instruct-2407 Korean fine-tuned model with SFT->DPO
549 Pulls 1 Tag Updated 1 year ago
heegyu's Yanolja-based EEVE for Ollama
505 Pulls 1 Tag Updated 1 year ago
Yanolja EEVE Korean model
570 Pulls 13 Tags Updated 1 year ago
google/gemma-2-27b-it Korean q4 model with CPT->SFT->DPO
260 Pulls 1 Tag Updated 1 year ago
EEVE-Korean-Instruct-10.8B
284 Pulls 1 Tag Updated 1 year ago
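All of the models above are served through Ollama, so any of them can be queried the same way once pulled. Below is a minimal sketch of calling one via Ollama's REST `/api/generate` endpoint, assuming a local Ollama server on the default port 11434; the model tag `exaone3.5:7.8b` is an assumption — substitute any tag from the list above.

```python
import json
import urllib.request

# Ollama's default local generate endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a single non-streaming generate request."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """POST the request to a running Ollama server and return the response text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama server and the model pulled locally):
# print(generate("exaone3.5:7.8b", "Translate 안녕하세요 into English."))
```

With `"stream": False` the server returns one JSON object containing the full completion in its `response` field; omitting it yields a stream of newline-delimited JSON chunks instead.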