Model finetuned from Llama 3.1 8B Instruct using full-precision LoRA (20 epochs, rank 64, alpha 16). It is configured as a support assistant for Type 2 Diabetes patients, with a system prompt directing it to help patients deal with their disease. The FP16 base model (8.03B parameters) is paired with a 168M-parameter FP16 LoRA adapter and is distributed under the Llama 3.1 Community License Agreement (release date: July 23, 2024).

Model Specifications

Base Model

  • Architecture: Llama 3.1
  • Size: 8B parameters
  • Type: Instruct model
  • Precision: FP16

Finetuning Parameters

  • Method: Full-precision LoRA (see the configuration sketch below)
  • Epochs: 20
  • Rank: 64
  • Alpha: 16
  • Training Dataset: train_05_prompted_v2

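For concreteness, here is a minimal sketch of how these hyperparameters map onto a Hugging Face peft LoRA configuration. Only the base model, precision, rank, alpha, epoch count, and dataset name come from this card; the target modules and remaining settings are illustrative assumptions.

```python
# Sketch only: mirrors the card's hyperparameters using transformers + peft.
# target_modules is an assumption; the card does not say which layers were adapted.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",  # Base: Llama 3.1 8B Instruct
    torch_dtype=torch.float16,           # Precision: FP16
)

config = LoraConfig(
    r=64,              # Rank: 64
    lora_alpha=16,     # Alpha: 16
    lora_dropout=0.0,  # assumption: no dropout stated on the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

# Full-precision LoRA: the adapter trains against an unquantized FP16 base.
model = get_peft_model(base, config)
model.print_trainable_parameters()
# Training then runs for 20 epochs over train_05_prompted_v2.
```
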
Performance Metrics

Metric                    Score    Base Model Score
Agentic Similarity        86       84.67
CoT Contextual Accuracy   56 / 3   54 / 5
Medical GPT Score         65       51.75

Benchmark Results

This model outperforms the base Llama 3.1 8B Instruct model on all three measured metrics (see the table above).

Training Loss

training_loss.png (plot of the training loss over finetuning)
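
For reference, a minimal sketch of querying the model through the Ollama Python client. The tag diabetes-support is a hypothetical placeholder (this card does not state the published model name), and a running Ollama server with the ollama package installed is assumed.

```python
# Sketch only: chat with the finetuned model via the Ollama Python client.
# "diabetes-support" is a hypothetical tag, not taken from this card.
import ollama

response = ollama.chat(
    model="diabetes-support",
    messages=[
        # The model carries its own Type 2 Diabetes support system prompt,
        # so a plain user question is sufficient.
        {
            "role": "user",
            "content": "What should I watch for after a high fasting glucose reading?",
        },
    ],
)
print(response["message"]["content"])
```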