8B Model - unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit, trained with two datasets.

ee1f6b4e12fa · 4.9GB · llama · 8.03B parameters · Q4_K_M

Template (Llama 3.1 chat format, truncated in the page listing):
<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|> {{ range .Messages }}{{ if eq .Ro…

Parameters:
{ "min_p": 0.1, "num_ctx": 4096, "stop": [ "<|eot_id|>", "<|end_of_text|>" ] }


Welcome to Rangerbot-8b

Model Name

unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit

Saved with quantization_method = "q8_0" to the file 'unsloth.q8_0.gguf'.
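
For context, this is roughly the Unsloth call that produces that file; a minimal sketch assuming `model` and `tokenizer` are already loaded from a fine-tuning run.

```python
# Minimal sketch: export the fine-tuned model to GGUF at q8_0 quantization.
# Assumes `model` and `tokenizer` come from an Unsloth fine-tuning session;
# this writes the 'unsloth.q8_0.gguf' file named above.
model.save_pretrained_gguf("model", tokenizer, quantization_method="q8_0")
```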

About Me and the Model

Cyber Security student and TryHackMe fan. This model contains none of that material, by the way; it is safe and usable. It's just a clone of the Meta model fine-tuned on one fake dataset and one dataset of numbers, like the Fibonacci sequence. I used two Google Colabs from Unsloth (see below), but I used the same model for both by saving it to my Google Drive along with the LoRA adapters so I could retrain the same model.
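
A minimal sketch of that save-and-reload loop in Colab; the Drive path here is just an example, not the one actually used.

```python
# Minimal sketch: save the LoRA adapters to Google Drive after training,
# then reload them in a later Colab session to continue training.
from google.colab import drive
drive.mount("/content/drive")

save_dir = "/content/drive/MyDrive/rangerbot_lora"  # example path
model.save_pretrained(save_dir)        # saves only the LoRA adapters
tokenizer.save_pretrained(save_dir)

# Next session: load the base model with the saved adapters applied.
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=save_dir,
    max_seq_length=2048,
    load_in_4bit=True,
)
```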

The Colabs are linked below, so you can use them to learn how to train your own model.

Or use the Kaggle notebook:

https://www.kaggle.com/code/davidtkeane/kaggle-llama-3-1-8b-unsloth-notebook

Rangerbot was made using Unsloth.

To install Unsloth on your own computer, follow the installation instructions on the Unsloth GitHub page: https://github.com/unslothai/unsloth.git

Join the Unsloth Discord if you need help + ⭐ star Unsloth on GitHub

Alpaca Dataset yahma/alpaca-cleaned

To run this, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!

You will learn how to do data prep, how to train, how to run the model, and how to save it (e.g. for llama.cpp).
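
For a feel of the flow, here is a condensed sketch of the load-LoRA-train steps from those notebooks; the hyperparameters are illustrative Unsloth defaults, not the exact values from these runs.

```python
# Condensed sketch: load the 4-bit base model, attach LoRA adapters, train.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,      # built in the data-prep step below
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,           # illustrative; increase for a real run
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```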

[NEW] Llama-3.1 8b, 70b & 405b are trained on a crazy 15 trillion tokens with 128K long context lengths!

[NEW] Try 2x faster inference in a free Colab for Llama-3.1 8b Instruct here
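
The speedup comes from Unsloth's inference mode; a minimal sketch, assuming a model already loaded with FastLanguageModel:

```python
# Minimal sketch: switch an Unsloth model into its faster inference mode.
from unsloth import FastLanguageModel

FastLanguageModel.for_inference(model)  # enables the ~2x faster generation path
inputs = tokenizer(["The Fibonacci sequence starts"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs)[0])
```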

Data Prep yahma/alpaca-cleaned

We now use the Alpaca dataset from yahma, a filtered version of the original 52K-example Alpaca dataset. You can replace this code section with your own data prep.
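
A minimal sketch of that data-prep step, following the standard Alpaca prompt layout used in the Unsloth notebook (assumes `tokenizer` is already loaded):

```python
# Minimal sketch of the data prep: load yahma/alpaca-cleaned and format each
# row into one training string using the standard Alpaca prompt layout.
from datasets import load_dataset

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

def format_rows(examples):
    # The EOS token marks where a completed answer ends.
    texts = [
        alpaca_prompt.format(ins, inp, out) + tokenizer.eos_token
        for ins, inp, out in zip(
            examples["instruction"], examples["input"], examples["output"]
        )
    ]
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split="train")
dataset = dataset.map(format_rows, batched=True)
```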

https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing

Alpaca dataset from vicgalle

You will learn how to do data prep and import a CSV, how to train, how to run the model, & how to export to Ollama!

Unsloth now allows you to automatically finetune and create a Modelfile, and export to Ollama! This makes finetuning much easier and provides a seamless workflow from Unsloth to Ollama!
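
A minimal sketch of the export step, assuming a fine-tuned `model` and `tokenizer`; the Ollama notebook writes the Modelfile into the GGUF output directory, and the model tag here is a placeholder.

```python
# Minimal sketch: export the fine-tuned model to GGUF, then register it with
# Ollama using the Modelfile that Unsloth writes into the output directory.
import subprocess

model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")
subprocess.run(
    ["ollama", "create", "rangerbot-8b", "-f", "model/Modelfile"],  # placeholder tag
    check=True,
)
```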

[NEW] We now allow uploading CSVs and Excel files - try it here by using the Titanic dataset.
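
A minimal sketch of turning an uploaded CSV into a Hugging Face dataset; the filename is a placeholder.

```python
# Minimal sketch: convert an uploaded CSV (here the Titanic dataset) into a
# Hugging Face Dataset that the trainer can consume.
import pandas as pd
from datasets import Dataset

df = pd.read_csv("titanic.csv")   # placeholder filename
dataset = Dataset.from_pandas(df)
print(dataset.column_names)
```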

Data Prep vicgalle/alpaca-gpt4

We now use the Alpaca dataset from vicgalle, a version of the original 52K-example Alpaca dataset with responses generated by GPT-4. You can replace this code section with your own data prep.
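
Swapping datasets is a one-line change; this sketch reuses the `format_rows` helper from the earlier data-prep sketch, since the columns (instruction / input / output) match.

```python
# Same prep as before, pointed at the GPT-4-generated variant of Alpaca.
from datasets import load_dataset

dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")
dataset = dataset.map(format_rows, batched=True)  # helper defined earlier
```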

https://colab.research.google.com/drive/19PLOx1-gVBnUNpQemlMdL-cd35zTHJob#Data