
unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
Quantized with quantization_method = "q8_0" to the file unsloth.q8_0.gguf.
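The q8_0 quantization above can be reproduced with Unsloth's GGUF export helper. This is only a sketch: it assumes the unsloth package, a GPU runtime, and that the fine-tuned checkpoint (or the base model shown here) is what you want to export.

```python
from unsloth import FastLanguageModel

# Load the 4-bit base model (or a fine-tuned checkpoint saved to Drive).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Merge the weights and export to GGUF with 8-bit quantization,
# producing the q8_0 file referenced in this card.
model.save_pretrained_gguf("model", tokenizer, quantization_method="q8_0")
```

The resulting GGUF file can then be loaded by Llama.cpp or referenced from an Ollama Modelfile.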
I am a cyber security student and TryHackMe fan, but this model contains no security content and is safe to use. It is simply a clone of the Meta model fine-tuned on one fake dataset and one dataset of numbers (sequences like the Fibonacci sequence). I used two Google Colab notebooks from Unsloth (see below), but trained the same model in both by saving it, along with the LoRA adapters, to my Google Drive so I could retrain it.
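A number-sequence dataset like the one mentioned above can be generated in a few lines. The field names (instruction/input/output) follow the Alpaca convention, but the exact rows are hypothetical and only illustrate the shape such a dataset might take:

```python
def fibonacci(n: int) -> list[int]:
    """Return the first n Fibonacci numbers, starting from 0."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def make_examples(count: int) -> list[dict]:
    """Build Alpaca-style rows asking for the next Fibonacci number."""
    fib = fibonacci(count + 1)
    return [
        {
            "instruction": "What is the next number in the Fibonacci sequence?",
            "input": ", ".join(str(x) for x in fib[: i + 1]),
            "output": str(fib[i + 1]),
        }
        for i in range(count)
    ]
```

Each row shows the sequence so far as the input and the next term as the expected output, which is enough to fine-tune on a toy pattern like this.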

https://www.kaggle.com/code/davidtkeane/kaggle-llama-3-1-8b-unsloth-notebook
To install Unsloth on your own computer, follow the installation instructions on our Github page here. https://github.com/unslothai/unsloth.git
Join Discord if you need help + ⭐ Star us on Github ⭐
Alpaca Dataset yahma/alpaca-cleaned
To run this, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!
You will learn how to do data prep, how to train, how to run the model, & how to save it (e.g. for Llama.cpp).
[NEW] Llama-3.1 8b, 70b & 405b are trained on a crazy 15 trillion tokens with 128K long context lengths!
[NEW] Try 2x faster inference in a free Colab for Llama-3.1 8b Instruct here
Data Prep yahma/alpaca-cleaned
We now use the Alpaca dataset from yahma, a filtered version of the original 52K-example Alpaca dataset. You can replace this code section with your own data prep.
https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
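The data prep step in this notebook renders each dataset row into a single Alpaca-style prompt string. A minimal sketch of that formatting, using the field names from yahma/alpaca-cleaned (the eos_token default is a placeholder; in practice you would pass tokenizer.eos_token):

```python
# Alpaca prompt template; field names match yahma/alpaca-cleaned.
ALPACA_PROMPT = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{output}"""

def format_example(example: dict, eos_token: str = "</s>") -> str:
    """Render one dataset row into a single training string."""
    return ALPACA_PROMPT.format(
        instruction=example["instruction"],
        input=example["input"],
        output=example["output"],
    ) + eos_token

# Usage with a Hugging Face dataset:
#   dataset = dataset.map(lambda ex: {"text": format_example(ex)})
```

Appending the EOS token matters: without it the model never learns when to stop generating.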
Alpaca dataset from vicgalle
You will learn how to do data prep and import a CSV, how to train, how to run the model, & how to export to Ollama!
Unsloth now allows you to automatically finetune and create a Modelfile, and export to Ollama! This makes finetuning much easier and provides a seamless workflow from Unsloth to Ollama!
[NEW] We now allow uploading CSVs and Excel files - try it here using the Titanic dataset.
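Turning a CSV into Alpaca-style rows only needs the standard library. This is a sketch with hypothetical column names (question/answer); map them onto whatever columns your file actually has:

```python
import csv
import io

def csv_to_examples(csv_source, question_col: str, answer_col: str) -> list[dict]:
    """Read CSV rows and map two columns onto Alpaca-style fields."""
    reader = csv.DictReader(csv_source)
    return [
        {"instruction": row[question_col], "input": "", "output": row[answer_col]}
        for row in reader
    ]

# Usage with an in-memory CSV (an open file works the same way):
csv_text = "question,answer\nWhat is 2 + 2?,4\n"
rows = csv_to_examples(io.StringIO(csv_text), "question", "answer")
```

The resulting list of dicts can be fed straight into the same prompt-formatting step the notebook uses for the Alpaca dataset.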
Data Prep vicgalle/alpaca-gpt4
We now use the Alpaca dataset from vicgalle, a version of the original 52K Alpaca dataset with responses generated by GPT-4. You can replace this code section with your own data prep.
https://colab.research.google.com/drive/19PLOx1-gVBnUNpQemlMdL-cd35zTHJob#Data
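The Ollama export mentioned above ends with a Modelfile (Unsloth can generate one automatically, including the chat TEMPLATE). A minimal hypothetical Modelfile pointing at the q8_0 GGUF file from this card might look like:

```
FROM ./unsloth.q8_0.gguf
PARAMETER temperature 0.7
```

You would then register the model locally with `ollama create my-llama -f Modelfile` and run it with `ollama run my-llama`.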