```bash
ollama run treyrowell1826/qwen3-pinion:q8_0
```
Digest: c32501c3f3e9 · 2.2GB
Ollama runtime distribution of qwen3-pinion, a merged Qwen3 1.7B SFT checkpoint.
This package is the runtime downstream of the canonical full-weights and GGUF artifact releases and is intended for local inference through Ollama.
- Canonical full weights: https://huggingface.co/Somnus-Sovereign-Systems/qwen3-pinion
- Full-weights DOI: 10.57967/hf/7965 (https://doi.org/10.57967/hf/7965)
- Canonical GGUF artifacts: https://huggingface.co/Somnus-Sovereign-Systems/qwen3-pinion-gguf
- GGUF DOI: 10.57967/hf/7966 (https://doi.org/10.57967/hf/7966)
> [!WARNING]
> SFT-only release with reduced safety alignment versus strongly post-aligned systems. Treat outputs as untrusted and do not use this model for harmful, high-risk, or safety-critical workflows.
This Ollama package is a downstream runtime distribution of the following artifact chain:
- Full weights: `Somnus-Sovereign-Systems/qwen3-pinion`
- Canonical GGUF source: `qwen3-pinion-f16.gguf`
- Quantized variants: `Q8_0`, `Q5_K_M`, `Q4_K_M`

Published tags:

- `treyrowell1826/qwen3-pinion:f16`
- `treyrowell1826/qwen3-pinion:q8_0`
- `treyrowell1826/qwen3-pinion:q5_k_m`
- `treyrowell1826/qwen3-pinion:q4_k_m`

Quick use:

```bash
ollama run treyrowell1826/qwen3-pinion:q5_k_m
```
## Artifact Hierarchy

- `f16` is the canonical GGUF source artifact used for this runtime distribution.
- `q8_0`, `q5_k_m`, and `q4_k_m` are downstream quantized runtime variants of that `f16` source.
## Context and Template
This model uses ChatML-style turns and retains the exported chat template lineage from the base artifact chain.
Stop sequences:

- `<|im_end|>`
- `<|im_start|>`
- `<|endoftext|>`
This model family supports a 40,960-token context window.
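The template settings above can be passed explicitly when calling the model through Ollama's local REST API (`POST /api/chat`). The sketch below only builds the request payload; the endpoint, `num_ctx`, and `stop` option names follow Ollama's documented API, while the choice of tag and prompt text are illustrative.

```python
import json

def build_chat_payload(user_prompt: str,
                       model: str = "treyrowell1826/qwen3-pinion:q8_0") -> dict:
    """Build a request body for Ollama's POST /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "stream": False,
        "options": {
            # Context window and stop sequences from this model card.
            "num_ctx": 40960,
            "stop": ["<|im_end|>", "<|im_start|>", "<|endoftext|>"],
        },
    }

payload = build_chat_payload("Summarize the artifact chain in one sentence.")
print(json.dumps(payload, indent=2))
# Send it to the default local server, e.g.:
#   curl http://localhost:11434/api/chat -d @payload.json
```

Ollama applies the model's exported chat template automatically, so the stop sequences here are a belt-and-braces override rather than a requirement.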
## Provenance

Artifact pipeline:

1. LoRA SFT (`rlhf.py`)
2. Merge into standalone checkpoint (`merge_sft_lora.py`)
3. Export merged weights to canonical GGUF f16 (`export_gguf.py`)
4. Generate downstream GGUF quantized variants from the f16 source
5. Package those GGUF artifacts into Ollama runtime tags
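The final packaging step above can be sketched as writing a minimal Ollama Modelfile that points at a quantized GGUF and registering it with `ollama create`. The `FROM` and `PARAMETER` directives are standard Modelfile syntax; the GGUF filename is an illustrative assumption, not the actual path used to build these tags.

```python
from pathlib import Path

def render_modelfile(gguf_path: str) -> str:
    """Render a minimal Ollama Modelfile for a local GGUF artifact."""
    lines = [
        f"FROM {gguf_path}",
        # Stop sequences and context window from the model card above.
        'PARAMETER stop "<|im_end|>"',
        'PARAMETER stop "<|im_start|>"',
        'PARAMETER stop "<|endoftext|>"',
        "PARAMETER num_ctx 40960",
    ]
    return "\n".join(lines) + "\n"

# Hypothetical local path to a quantized variant.
modelfile = render_modelfile("./qwen3-pinion-q8_0.gguf")
Path("Modelfile").write_text(modelfile)
# Then register the runtime tag:
#   ollama create treyrowell1826/qwen3-pinion:q8_0 -f Modelfile
print(modelfile)
```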
Upstream code provenance DOI: 10.5281/zenodo.18607464 (https://doi.org/10.5281/zenodo.18607464)
## Licensing Boundary
This package distributes downstream runtime model artifacts only.
The underlying model artifacts follow the licensing boundaries defined by the upstream artifact repositories:
- Full weights: `Somnus-Sovereign-Systems/qwen3-pinion`
- GGUF artifacts: `Somnus-Sovereign-Systems/qwen3-pinion-gguf`
The training, merge, export, and pipeline code used to produce these artifacts is licensed separately under GNU GPL v3.0 (GPLv3) in its respective code repository:
https://github.com/calisweetleaf/Reinforcement-Learning-Full-Pipeline/blob/main/LICENSE
Users must also comply with the applicable upstream terms for:
- `Somnus-Sovereign-Systems/qwen3-pinion`
- `Qwen/Qwen3-1.7B`
- `Magpie-Align/Magpie-Pro-300K-Filtered`
## Citation

```bibtex
@misc{rowell2026_qwen3_pinion_ollama,
  author    = {Rowell, Christian Trey Levi},
  title     = {qwen3-pinion},
  year      = {2026},
  url       = {https://ollama.com/treyrowell1826/qwen3-pinion},
  note      = {Ollama runtime distribution of the qwen3-pinion model family},
  publisher = {Ollama}
}
```