Models trained on my Thinker dataset.
Tags: tools, 2b, 7b
16 Pulls · Updated 4 months ago
2196d38bdfa3 · 2.8GB
model
arch gemma2 · parameters 2.61B · quantization Q8_0 · 2.8GB
params · 65B
{
  "stop": [
    "<start_of_turn>",
    "<end_of_turn>"
  ]
}
system · 443B
You are a world-class AI system. Always respond in strict JSON format with a reasoning_steps array a…
template · 445B
{{- $system := "" }}
{{- range .Messages }}
{{- if eq .Role "system" }}
{{- if not $system }}{{ $sys…
Readme
A collection of models trained on my Thinker dataset. For best results, please use the system prompt provided in the model file.
I plan to further train the models with reinforcement learning later for more robustness.
Currently, the best-performing model in the collection is the 2b model, which is fine-tuned from Gemma 2 2B.
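As a usage sketch, the snippet below queries the model through Ollama's local REST API and parses the strict-JSON reply described by the system prompt. The model tag thinker:2b is a placeholder for whatever tag you pulled, and any fields beyond reasoning_steps depend on the full system prompt (which is truncated above), so treat this as a starting point rather than the exact intended workflow.

```python
import json

import requests

# Minimal sketch: ask a Thinker model served by a local Ollama instance a question
# and parse its strict-JSON reply. "thinker:2b" is a hypothetical tag; replace it
# with the tag you actually pulled. The SYSTEM prompt baked into the model file is
# applied automatically when no system message is supplied in the request.
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL_TAG = "thinker:2b"  # placeholder tag, not confirmed by the model page


def ask(question: str) -> dict:
    payload = {
        "model": MODEL_TAG,
        "messages": [{"role": "user", "content": question}],
        "stream": False,   # return one complete response object
        "format": "json",  # ask Ollama to constrain the output to valid JSON
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    content = resp.json()["message"]["content"]
    return json.loads(content)  # expected to contain a "reasoning_steps" array


if __name__ == "__main__":
    reply = ask("What is the capital of France?")
    for i, step in enumerate(reply.get("reasoning_steps", []), start=1):
        print(f"step {i}: {step}")
    # Any final-answer field beyond reasoning_steps depends on the full system
    # prompt, so print the whole object to inspect the schema.
    print(json.dumps(reply, indent=2))
```

The same request works against any tag in the collection; only the model name in the payload changes.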