ollama run anthony-maio/remnantinstruct-8b
3bb4ff4afaf9 · 5.0GB
A SLERP merge of Qwen/Qwen3-8B and allura-org/remnant-qwen3-8b, combining instruction-following with creative writing.
## Why merge a fine-tune back with its base?
Fine-tuning pushes a model toward a specialty, but in the process it drifts away from the base model's general strengths. SLERP merging recombines the fine-tune with its base at the weight level, with per-layer control over how the two are blended, keeping the best of both.
## Merge Strategy
Rather than a single global blend ratio, the interpolation weight varies across layer types (see the sketch below).
Qwen3's thinking mode is fully preserved.
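As an illustration, a mergekit SLERP config in this style looks roughly like the following. This is a minimal sketch: the layer count, interpolation curves, and output path are assumptions for the example, not the actual values used for RemnantInstruct-8B.

```bash
# Sketch of the merge step, assuming mergekit is installed.
# The t-curves below are illustrative, not the real settings.
cat > slerp-config.yml <<'EOF'
merge_method: slerp
base_model: Qwen/Qwen3-8B
slices:
  - sources:
      - model: Qwen/Qwen3-8B
        layer_range: [0, 36]   # assumed layer count
      - model: allura-org/remnant-qwen3-8b
        layer_range: [0, 36]
parameters:
  t:
    # different interpolation curves per layer type
    - filter: self_attn
      value: [0.0, 0.3, 0.5, 0.7, 0.3]
    - filter: mlp
      value: [0.3, 0.5, 0.7, 0.5, 0.3]
    - value: 0.5               # fallback for all other tensors
dtype: bfloat16
EOF
mergekit-yaml slerp-config.yml ./remnantinstruct-8b
```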
## Base Models
- Qwen/Qwen3-8B
- allura-org/remnant-qwen3-8b
## Quantizations
Also available on HuggingFace with multiple GGUF quant sizes: https://huggingface.co/anthonym21/RemnantInstruct-8B-GGUF
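For reference, converting a merged checkpoint to GGUF and quantizing it follows the usual llama.cpp workflow. The sketch below assumes a local llama.cpp build; the quant type shown (Q4_K_M) is an assumption, not a statement of which quant sizes are published.

```bash
# Sketch: HF checkpoint -> f16 GGUF -> quantized GGUF, using llama.cpp tools.
python convert_hf_to_gguf.py ./remnantinstruct-8b \
  --outtype f16 --outfile remnantinstruct-8b-f16.gguf
./llama-quantize remnantinstruct-8b-f16.gguf \
  remnantinstruct-8b-Q4_K_M.gguf Q4_K_M
```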
Built with mergekit and llama.cpp.