
-
gemma3_27b_pml_multiPDF_q4k_m
This is an 8-bit quantized version of gemma3:27b fine-tuned with the YagCed/Aveva_PML_test dataset on HF. If you're from Aveva and want this model removed from public view, please let me know.
55 Pulls 1 Tag Updated 3 months ago
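A minimal sketch of querying this model through the Ollama Python client, assuming it has already been pulled locally under this name (the exact registry namespace is not shown on this page) and that the example prompt is purely illustrative:

```python
# Sketch only: chat with the locally pulled fine-tuned model via the ollama package.
import ollama

reply = ollama.chat(
    model="gemma3_27b_pml_multiPDF_q4k_m",  # assumes the model is available locally under this tag
    messages=[{"role": "user", "content": "Explain what a PML form definition looks like."}],
)
print(reply["message"]["content"])
```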
-
gemma3_1b_spiders
A very small test: gemma3:1b fine-tuned on a dataset obsessed with spiders. As a result, the model works spiders into all of its answers. It has no practical use: just a pet project to learn how to generate a dataset and fine-tune a small model.
35 Pulls 1 Tag Updated 4 months ago
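A minimal sketch of the kind of small fine-tune described above, using LoRA via TRL. The toy rows, column name ("text"), hyperparameters, and the gated google/gemma-3-1b-it checkpoint are assumptions, not the author's actual setup; recent versions of transformers, peft, and trl are assumed.

```python
# Sketch only: LoRA fine-tune of a small Gemma 3 model on a tiny toy dataset.
from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical toy rows standing in for the generated spider dataset.
rows = [{"text": "User: What lives in my garden?\nAssistant: Mostly spiders, of course."}]
dataset = Dataset.from_list(rows)

trainer = SFTTrainer(
    model="google/gemma-3-1b-it",  # assumed base; requires access to the gated HF checkpoint
    train_dataset=dataset,
    peft_config=LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                           task_type="CAUSAL_LM"),
    args=SFTConfig(output_dir="gemma3_1b_spiders", num_train_epochs=1,
                   per_device_train_batch_size=1),
)
trainer.train()
```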
-
gemma3_1b_scale_relativity
A very small test: gemma3:1b fine-tuned on a dataset created with gemma3:27b from an article by Laurent Nottale on Scale Relativity. It is pretty much useless; I did it only to learn how to generate a dataset and fine-tune a small model.
12 Pulls 1 Tag Updated 4 months ago
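A minimal sketch of the dataset-generation step described above: asking a larger local model (gemma3:27b via Ollama) to turn article passages into question/answer pairs. The prompt wording, the sample passage, and the JSONL output format are assumptions, not the author's actual pipeline.

```python
# Sketch only: generate Q/A pairs from article passages with a local gemma3:27b.
import json
import ollama

passages = [
    "Scale relativity extends the principle of relativity to scale transformations.",  # illustrative
]

with open("scale_relativity_dataset.jsonl", "w") as f:
    for passage in passages:
        reply = ollama.chat(
            model="gemma3:27b",
            messages=[{
                "role": "user",
                "content": "Write one question and answer pair about this passage:\n" + passage,
            }],
        )
        f.write(json.dumps({"passage": passage, "qa": reply["message"]["content"]}) + "\n")
```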
-
holo1_7B_f16tools
10 Pulls 1 Tag Updated 2 months ago
-
gemma3_27b_PML_test01
A test of fine-tuning gemma3:27b with a single PDF about PML.
5 Pulls 1 Tag Updated 3 months ago
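A minimal sketch of the data-preparation step this implies: extracting raw text from a single PDF so it can be chunked into training examples. The file name and the naive fixed-size chunking are hypothetical; the actual PML document and pipeline are not shared here.

```python
# Sketch only: pull text out of one PDF and split it into rough chunks with pypdf.
from pypdf import PdfReader

reader = PdfReader("pml_manual.pdf")  # hypothetical path
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Naive fixed-size chunking; a real pipeline would split on headings or paragraphs.
chunks = [text[i:i + 2000] for i in range(0, len(text), 2000)]
print(f"{len(reader.pages)} pages -> {len(chunks)} chunks")
```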
-
holo1_7B_q8_0tools
2 Pulls 1 Tag Updated 2 months ago