A small, snarky model. Can run on your Pi 5.

Tinysnark is a small, snarky model. It's great for home projects on a Raspberry Pi, especially if you're looking to add a dash of snark.

Don't expect much more than snark, though: this is a language model with 1 billion parameters (at its smallest), not a frontier model. Run a larger model if you want better roasts.

The attached 4b model also runs reasonably well on a Pi 5 with more RAM. See the gemma3 page for specific hardware requirements (it runs on CPU and/or a single GPU).

If you haven't installed or updated Ollama on your Pi yet, run this command in your terminal (Ctrl+Shift+V pastes into the terminal):

curl -fsSL https://ollama.com/install.sh | sh

1b model: ollama run mcgdj/tinysnark:1b
4b model: ollama run mcgdj/tinysnark:4b
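Once a model is pulled, you can also query it from a script instead of the interactive prompt. Below is a minimal sketch, assuming Ollama is serving on its default port (11434) and the 1b model has been pulled; it calls the local /api/generate endpoint with streaming disabled, using only the Python standard library. The function names here are my own, not part of Ollama.

```python
import json
import urllib.request

# Default address of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(prompt, model="mcgdj/tinysnark:1b"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        # With stream disabled, Ollama returns one complete JSON
        # object instead of a sequence of chunks.
        "stream": False,
    }).encode("utf-8")


def ask_snark(prompt, model="mcgdj/tinysnark:1b"):
    """Send a prompt to the local Ollama server and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires `ollama serve` running and the model pulled):
# print(ask_snark("Roast my messy desk."))
```

The same request body works for the 4b tag (or the Powersnark tags below); just change the `model` argument.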

Powersnark is a much more advanced model, at the cost of longer responses and reasoning time. If you have ideas for developing Powersnark further, reach me at focodj@icloud.com.

Powersnark comes in 3b and 8b variants. Both should work fine, but the larger model is slower and smarter. These are built on the Cogito models instead of Gemma, so the responses differ.

3b model: ollama run mcgdj/tinysnark:powersnark-3b
8b model: ollama run mcgdj/tinysnark:powersnark-8b

NOTE: The 8b model requires a Pi 5 with 8GB or more of RAM.

Lastly, granite3.2 is apparently great at snark. Run ollama run mcgdj/tinysnark:granite-2b for a small, capable model from IBM.