A Llama-based generative model specifically designed to excel in Korean through additional training, developed by NC Research.

This model was not trained by the uploader! It was simply uploaded to Ollama with the weights as-is. Original source and weights: here

Llama-VARCO-8B-Instruct

About the Model

Llama-VARCO-8B-Instruct is a generative model built on Llama, specifically designed to excel in Korean through additional training. The model uses continual pre-training with both Korean and English datasets to enhance its understanding and generation capabilities in Korean while maintaining its proficiency in English. It was further trained with supervised fine-tuning (SFT) and direct preference optimization (DPO) in Korean to align with human preferences. A minimal usage sketch follows the details below.

  • Developed by: NC Research, Language Model Team
  • Languages (NLP): Korean, English
  • License: LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
  • Base model: meta-llama/Meta-Llama-3.1-8B
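
The following is a minimal inference sketch using the Hugging Face transformers library. The repo id NCSOFT/Llama-VARCO-8B-Instruct is an assumption; verify it against the original source linked above. Since the base model is Llama 3.1, the standard Llama chat template should apply.

```python
# Minimal inference sketch. The repo id below is an assumption
# (check the original source for the exact id); requires a GPU with
# enough memory for an 8B model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NCSOFT/Llama-VARCO-8B-Instruct"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat prompt with the model's own chat template (Llama 3.1 style).
messages = [
    {"role": "user", "content": "안녕하세요! 자기소개를 해주세요."},  # "Hello! Please introduce yourself."
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```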

Evaluation

LogicKor

We used the LogicKor code to measure performance, with the officially recommended gpt-4-1106-preview as the judge model. The scores reflect only the default 0-shot evaluation.
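
For readers unfamiliar with judge-based benchmarks, the sketch below illustrates the general LLM-as-judge scoring pattern that this style of evaluation follows. It is not the actual LogicKor code; the judge prompt, the 0-10 scale parsing, and the judge_response helper are hypothetical placeholders.

```python
# Schematic LLM-as-judge scoring loop (illustrative only, not LogicKor).
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def judge_response(question: str, answer: str) -> float:
    """Hypothetical helper: ask the judge model to grade one answer 0-10."""
    prompt = (
        "You are a strict evaluator. Rate the following answer to the "
        "question on a scale of 0 to 10. Reply with the number only.\n\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    result = client.chat.completions.create(
        model="gpt-4-1106-preview",  # the judge model named above
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Parse the judge's numeric reply; real harnesses validate this output.
    return float(result.choices[0].message.content.strip())
```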

| Model | Math | Reasoning | Writing | Coding | Understanding | Grammar | Single turn | Multi turn | Overall |
|---|---|---|---|---|---|---|---|---|---|
| Llama-VARCO-8B-Instruct | 6.71 / 8.57 | 8.86 / 8.29 | 9.86 / 9.71 | 8.86 / 9.29 | 9.29 / 10.0 | 8.57 / 7.86 | 8.69 | 8.95 | 8.82 |
| EXAONE-3.0-7.8B-Instruct | 6.86 / 7.71 | 8.57 / 6.71 | 10.0 / 9.29 | 9.43 / 10.0 | 10.0 / 10.0 | 9.57 / 5.14 | 9.07 | 8.14 | 8.61 |
| Meta-Llama-3.1-8B-Instruct | 4.29 / 4.86 | 6.43 / 6.57 | 6.71 / 5.14 | 6.57 / 6.00 | 4.29 / 4.14 | 6.00 / 4.00 | 5.71 | 5.12 | 5.42 |
| Gemma-2-9B-Instruct | 6.14 / 5.86 | 9.29 / 9.0 | 9.29 / 8.57 | 9.29 / 9.14 | 8.43 / 8.43 | 7.86 / 4.43 | 8.38 | 7.57 | 7.98 |
| Qwen2-7B-Instruct | 5.57 / 4.86 | 7.71 / 6.43 | 7.43 / 7.00 | 7.43 / 8.00 | 7.86 / 8.71 | 6.29 / 3.29 | 7.05 | 6.38 | 6.71 |