Models trained on my Thinker dataset.
Tags: tools · 2b · 7b
7 Pulls · Updated 3 weeks ago
2196d38bdfa3 · 2.8GB
model (2.8GB)
arch gemma2 · parameters 2.61B · quantization Q8_0
params (65B)
{"stop": ["<start_of_turn>", "<end_of_turn>"]}
system (443B)
You are a world-class AI system. Always respond in strict JSON format with a reasoning_steps array a…
template (445B)
{{- $system := "" }}
{{- range .Messages }}
{{- if eq .Role "system" }}
{{- if not $system }}{{ $sys…
Readme
Collection of models trained on my Thinker dataset. Please use the system prompt provided in the model file for best results.
I plan to further train these models with reinforcement learning later for more robustness.
Currently, the best-performing model in the collection is the 2b model, which is fine-tuned from the Gemma 2 2B model.
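Per the system prompt, the models are meant to reply in strict JSON containing a reasoning_steps array. A minimal sketch of consuming such a reply — the sample reply text and the final_answer field are illustrative assumptions, not taken from the model file:

```python
import json

# Hypothetical model reply; only "reasoning_steps" comes from the
# system prompt — "final_answer" is an assumed field name.
sample_reply = """
{
  "reasoning_steps": [
    "Identify what is being asked.",
    "Work through the problem step by step."
  ],
  "final_answer": "42"
}
"""

def parse_thinker_reply(text: str) -> tuple[list[str], str]:
    """Parse a strict-JSON Thinker reply into (steps, answer)."""
    data = json.loads(text)
    steps = data["reasoning_steps"]
    if not isinstance(steps, list):
        raise ValueError("reasoning_steps must be a JSON array")
    return steps, data.get("final_answer", "")

steps, answer = parse_thinker_reply(sample_reply)
for i, step in enumerate(steps, 1):
    print(f"{i}. {step}")
print("Answer:", answer)
```

Because the output is strict JSON, a plain `json.loads` is enough; if the model ever wraps the JSON in extra prose, you would need to extract the object first.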