Updated 2 months ago
ollama run ermwhatesigma420/sigmaAi60K:0.5b
This is the other fine-tuned Qwen 0.5b model, but this one is trained on a bigger dataset.
The other 0.5b was trained on a 20k-entry JSON dataset, about 18.3 MB.
This one was trained on 60k entries, triple the size of the first dataset, at around 58 MB of JSON. It can answer normal, simple questions.
This one was also trained on my own PC.
I find this one responds better and is smarter. It can write simple Python or C++ scripts.
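If you want to call the model from code instead of the interactive `ollama run` prompt, here is a minimal sketch using Ollama's local REST API (the `/api/generate` endpoint). It assumes `ollama serve` is running on the default port 11434 and that the model tag above has already been pulled; the prompt string is just an example.

```python
import json
import urllib.request

# Assumption: a local `ollama serve` instance on the default port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the 60k model."""
    payload = {
        "model": "ermwhatesigma420/sigmaAi60K:0.5b",
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Example prompt playing to the model's strength with simple scripts.
    req = build_request("Write a simple Python script that prints 1 to 10.")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Setting `"stream": False` keeps the example simple; by default the API streams the answer back as one JSON object per token chunk.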
Or just download the GGUF file:
https://huggingface.co/SigmaMogg/SigmaAI/tree/main