11 Downloads Updated 1 week ago
e120d5a1c8d2 · 292MB
This model is fine-tuned from Gemma 3 (270M) to address the “newline injection” and “code interpretation” issues commonly encountered when using small LLMs for clipboard tasks.
Unlike general-purpose chat models, this model has been trained on a specific “Visual Anchor” pattern. It prevents the model from:
Hallucinating: It won’t add commentary like “Here is the text”.
Executing Code: If you paste Python or HTML, it won’t try to run it or wrap it in Markdown code blocks.
Translating: It preserves the original language (CN/EN/JP/Mixed) exactly as provided.
The training dataset covers mixed Windows (\r\n) and Linux (\n) line endings, ensuring perfect compatibility between different operating systems and applications (e.g., copying from Notepad to VS Code). Its lightweight design (270M) allows for instant inference on any CPU, making it the perfect “Advanced Paste” engine.
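As a rough behavioral reference (this is an illustration of the trained behavior, not the model itself), the intended handling of mixed line endings matches Python's built-in str.strip(): outer padding is removed while internal \r\n or \n sequences and indentation survive untouched. The variable names below are purely illustrative.

```python
# Hypothetical clipboard contents with Windows (\r\n) and Linux (\n) endings.
windows_clip = "  \r\n  line one\r\n  line two  \r\n\r\n"
linux_clip = "  \n  line one\n  line two  \n\n"

# Expected model behavior: trim outer whitespace only.
print(repr(windows_clip.strip()))  # 'line one\r\n  line two'
print(repr(linux_clip.strip()))    # 'line one\n  line two'
```

Note that the internal "\r\n  " between the two lines is left exactly as provided, so indentation pastes cleanly into editors on either operating system.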
Recommended Prompt Structure: To trigger the strip() behavior, use the Raw/Final anchor format.
curl -X POST http://127.0.0.1:11434/api/chat -d '{
  "model": "qllama/gemma3-strip:latest",
  "stream": false,
  "messages": [
    {
      "role": "user",
      "content": "Trim whitespace:\n \n <div class=\"test\">\n Content\n </div> \n\n"
    }
  ]
}'
The model returns pure text starting immediately from the first valid character. It strictly removes outer padding but preserves internal indentation and line breaks, making it ideal for pasting code, logs, or formatted text into editors.
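For the request above, the expected response is equivalent to applying Python's str.strip() to the pasted payload (a behavioral sketch under that assumption, not a call to the model):

```python
# The payload from the curl example, after the "Trim whitespace:" instruction.
raw = ' \n <div class="test">\n Content\n </div> \n\n'

# Outer padding is removed; internal line breaks and indentation remain.
cleaned = raw.strip()
print(cleaned)
# <div class="test">
#  Content
#  </div>
```

The first character of the output is the first non-whitespace character of the input, with no added commentary or Markdown fencing.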