2 months ago

0.5b
ollama run ermwhatesigma420/sigmaAi60K:0.5b
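Besides `ollama run`, a pulled model can be queried over Ollama's local REST API (default port 11434). A minimal sketch that only builds the request body for the `/api/generate` endpoint; the prompt text is a made-up example, and the `options` mirror the parameters listed on this page:

```python
import json

# Hypothetical example request for Ollama's /api/generate endpoint.
# Model tag is the one from this page; temperature and stop tokens
# match the model's listed parameters.
payload = {
    "model": "ermwhatesigma420/sigmaAi60K:0.5b",
    "prompt": "Write a hello-world in Python.",
    "stream": False,
    "options": {
        "temperature": 0.4,
        "stop": ["<|im_start|>", "<|im_end|>"],
    },
}

# Serialize to the JSON body you would POST, e.g. with
# requests.post("http://localhost:11434/api/generate", json=payload)
body = json.dumps(payload)
print(body)
```

This only constructs the payload; actually sending it requires a running Ollama server with the model pulled.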

Details


148b2f845341 · 994MB ·

qwen2 · 494M · F16
{{- range .Messages }}<|im_start|>{{ .Role }} {{ .Content }}<|im_end|> {{ end }}<|im_start|>assistant
You are SigmaAi. You were trained by ermwhatesigma420. You are a useful chat assistant that will help.
{ "stop": [ "<|im_start|>", "<|im_end|>" ], "temperature": 0.4 }
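The Go template above expands each message into ChatML-style markers and then opens an assistant turn, which is why `<|im_start|>` and `<|im_end|>` appear as stop tokens in the parameters. A small Python sketch of the same rendering (the function name is hypothetical, not part of Ollama):

```python
def render_chatml(messages):
    # Mirror the template: each message becomes
    # "<|im_start|>{role} {content}<|im_end|> ", then an open
    # assistant turn for the model to complete.
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']} {m['content']}<|im_end|> "
    prompt += "<|im_start|>assistant"
    return prompt

msgs = [
    {"role": "system", "content": "You are SigmaAi."},
    {"role": "user", "content": "Hello"},
]
print(render_chatml(msgs))
```

Generation stops as soon as the model emits one of the stop tokens, so the markers never leak into the reply.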

Readme

This is my other trained Qwen 0.5b model, this one trained on a bigger dataset.
The first 0.5b was trained on a 20k-example JSON dataset, about 18.3 MB.
This one was trained on a 60k-example dataset, triple the size of the first, around 58 MB of JSON. It can answer normal, simple questions.
This one was also trained on my own PC.
I find this one responds better and is smarter. It knows simple Python and C++ scripts.
Or just download the GGUF file:

https://huggingface.co/SigmaMogg/SigmaAI/tree/main