From neph1/llama-3.1-instruct-bellman-8b-swedish

Model Card for Bellman

This version of Bellman is finetuned from llama-3.1-instruct-8b. It's finetuned for prompt question answering, based on a dataset created from Swedish Wikipedia, with a lot of Sweden-centric questions. New since previous versions are questions from a translated code-feedback dataset, as well as a number of stories. It's not great at generating stories, but better than previously.

Try out the Q8 version here: https://huggingface.co/spaces/neph1/bellman (cpu)
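Since the base model is Llama 3.1 Instruct, prompts should follow the Llama 3.1 chat template. A minimal sketch of building such a prompt for a Swedish question (the question and system message here are illustrative placeholders; verify the special tokens against the tokenizer's chat template before relying on them):

```python
# Sketch: build a Llama 3.1 Instruct-style prompt for a Swedish question.
# The special tokens follow the standard Llama 3.1 chat template (assumption:
# this finetune keeps the base model's template).

def build_prompt(question: str, system: str = "Du är en hjälpsam assistent.") -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("Vad är Sveriges huvudstad?")
print(prompt)
```

In practice, `tokenizer.apply_chat_template` in transformers produces this formatting automatically from a list of messages.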


Model Details

Training run on 240724:

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 25   | 1.352200      | 1.034565        |
| 50   | 1.033600      | 1.009348        |
| 75   | 1.022400      | 0.996665        |
| 100  | 1.002900      | 0.988050        |
| 125  | 1.014600      | 0.981633        |
| 150  | 1.006300      | 0.975584        |
| 175  | 0.988800      | 0.970966        |
| 200  | 0.985300      | 0.967037        |
| 225  | 0.992400      | 0.964120        |
| 250  | 0.950000      | 0.962472        |
| 275  | 0.931000      | 0.960848        |
| 300  | 0.932000      | 0.958946        |

The checkpoint at step 300 was picked.
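The picked checkpoint is the one with the lowest validation loss in the run. As a quick sanity check (values copied from the table above):

```python
# Validation losses logged every 25 steps, copied from the table above.
val_loss = {
    25: 1.034565, 50: 1.009348, 75: 0.996665, 100: 0.988050,
    125: 0.981633, 150: 0.975584, 175: 0.970966, 200: 0.967037,
    225: 0.964120, 250: 0.962472, 275: 0.960848, 300: 0.958946,
}

# Pick the step with the lowest validation loss.
best_step = min(val_loss, key=val_loss.get)
print(best_step)  # 300
```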

Training Parameters

```
per_device_train_batch_size = 4,
gradient_accumulation_steps = 16,
num_train_epochs = 3,
warmup_steps = 5,
learning_rate = 3e-5,
logging_steps = 25,
optim = "adamw_8bit",
weight_decay = 0.01,
lr_scheduler_type = "linear",
seed = 3407,
per_device_eval_batch_size = 2,
eval_strategy = "steps",
eval_accumulation_steps = 32,
eval_steps = 25,
eval_delay = 0,
save_strategy = "steps",
save_steps = 50,
```
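With `per_device_train_batch_size = 4` and `gradient_accumulation_steps = 16`, the effective batch size works out to 64 per optimizer step (assuming a single-device run, which the card does not state explicitly):

```python
# Effective batch size from the training parameters above.
per_device_train_batch_size = 4
gradient_accumulation_steps = 16
num_devices = 1  # assumption: single-GPU run (not stated in the card)

effective_batch = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(effective_batch)  # 64
```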

Model Description

  • Developed by: Me
  • Funded by: Me
  • Model type: Instruct
  • Language(s) (NLP): Swedish
  • License: llama-3.1
  • Finetuned from model: Llama3.1 Instruct 8b

Model Card Contact

rickard@mindemia.com