gemma2-2b-alpaca_july_31-1_epoch-unsloth.Q4_K_M


Gemma 2 2B (July 31st, 2024) fine-tuned on a custom sentiment analysis dataset optimized for symbolic reasoning. The idea is to unhobble performance, and the 2B model performs remarkably well for its size. Q4 works but has higher variance and makes math mistakes; Q5 sits in between; Q8 has the lowest variance and the most comprehensive outputs. These are very fast models, but the reasoning step adds a bit of time to generations. The tradeoff is that you get a nice markdown reasoning report and potentially higher performance.
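A minimal usage sketch via the official Ollama Python client, assuming the model has been pulled locally under the name above. The plain-instruction prompt shown here is an assumption, not necessarily the exact template the fine-tune was trained on:

```python
# Minimal sketch: query the model through the Ollama Python client.
# Assumes `pip install ollama` and that an Ollama server is running locally.
import ollama

# Swap in the Q5/Q8 variant tags if you have pulled those instead.
MODEL = "gemma2-2b-alpaca_july_31-1_epoch-unsloth.Q4_K_M"

response = ollama.chat(
    model=MODEL,
    messages=[
        {
            "role": "user",
            # Hypothetical prompt; the fine-tune targets symbolic sentiment
            # analysis, so the reply should include a markdown reasoning report.
            "content": "Analyze the sentiment of: 'The earnings beat expectations, but guidance was weak.'",
        }
    ],
)

print(response["message"]["content"])  # markdown reasoning report plus sentiment verdict
```

Q4 is the fastest option; use Q5 or Q8 when lower variance matters more than speed.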

dataset: https://huggingface.co/datasets/seandearnaley/symbolic_sentiment_v1

inspired by: https://arxiv.org/pdf/2405.18357

See this free article on how we fine-tuned these models:

Elevating Sentiment Analysis https://medium.com/@seandearnaley/elevating-sentiment-analysis-ad02a316df1d