EXAONE 3.5 is a collection of instruction-tuned bilingual (English and Korean) generative models ranging from 2.4B to 32B parameters, developed and released by LG AI Research.
4,550 Pulls 13 Tags Updated 2 weeks ago
chat model
890 Pulls 1 Tag Updated 7 months ago
Korean fine-tuned model based on NousResearch/Meta-Llama-3.1-8B-Instruct, trained with SFT->RLHF->DPO
826 Pulls 2 Tags Updated 4 months ago
Korean fine-tuned model based on Mistral-Nemo-Instruct-2407, trained with SFT->DPO
358 Pulls 1 Tag Updated 4 months ago
Yanolja EEVE Korean model
294 Pulls 13 Tags Updated 7 months ago
EEVE-Korean-Instruct-10.8B
139 Pulls 1 Tag Updated 7 months ago
Multi-lingual, multi-functionality, multi-granularity bge-m3-korean embedding model from upskyy
112 Pulls 1 Tag Updated 2 months ago
heegyu's EEVE model based on Yanolja, packaged for Ollama
102 Pulls 1 Tag Updated 2 months ago
A Llama-based model specifically designed to excel in Korean through additional training, developed by NC Research.
97 Pulls 5 Tags Updated 3 months ago
Korean fine-tuned model based on Google gemma-2-27b-it, trained with SFT->DPO
88 Pulls 1 Tag Updated 4 months ago
llama3.1-korean-mrc
75 Pulls 1 Tag Updated 4 months ago
Korean q8-quantized model based on NousResearch/Hermes-3-Llama-3.1-8B, trained with CPT->SFT->DPO
53 Pulls 1 Tag Updated 3 months ago
Korean q4-quantized model based on google/gemma-2-27b-it, trained with CPT->SFT->DPO
42 Pulls 1 Tag Updated 3 months ago
ai-human-lab/EEVE-Korean_Instruct-10.8B-expo
34 Pulls 5 Tags Updated 2 months ago
Korean q4-quantized model based on NousResearch/Hermes-3-Llama-3.1-70B, trained with CPT->SFT->DPO
20 Pulls 1 Tag Updated 3 months ago
Korean q8-quantized model based on google/gemma-2-27b-it, trained with CPT->SFT->DPO
18 Pulls 1 Tag Updated 3 months ago
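Each entry above pairs a model description with a stats line in a fixed "N Pulls N Tags Updated ..." pattern. As a minimal sketch, such lines can be parsed into structured fields for sorting or filtering the listing; the `parse_stats` helper and its field names are illustrative assumptions, not part of any Ollama API.

```python
import re

# Matches listing stats lines like "4,550 Pulls 13 Tags Updated 2 weeks ago".
# "Tags?" also accepts the singular "1 Tag" form seen in some entries.
STATS_RE = re.compile(
    r"(?P<pulls>[\d,]+) Pulls (?P<tags>\d+) Tags? Updated (?P<updated>.+)"
)

def parse_stats(line: str) -> dict:
    """Return pull count (int), tag count (int), and the raw 'updated' phrase."""
    m = STATS_RE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognized stats line: {line!r}")
    return {
        "pulls": int(m.group("pulls").replace(",", "")),  # strip thousands separator
        "tags": int(m.group("tags")),
        "updated": m.group("updated"),
    }

print(parse_stats("4,550 Pulls 13 Tags Updated 2 weeks ago"))
```

With the two stats lines from the listing, this yields `{'pulls': 4550, 'tags': 13, 'updated': '2 weeks ago'}` for the first entry and a `tags` count of 1 for singular "1 Tag" entries.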