Experimental merge of multiple Llama 3.2 3B models, guided by MoonRide-Index-v7.
tools
3 Pulls · Updated 4 days ago
e441f10f5297 · 3.8GB
model · arch llama · parameters 3.61B · quantization Q8_0 · 3.8GB
params · 96B
{
  "stop": [
    "<|start_header_id|>",
    "<|end_header_id|>",
    "<|eot_id|>"
  ]
}
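These stop tokens are baked into the Modelfile, so a plain request needs no extra options; the sketch below shows them being passed explicitly through Ollama's local REST API, which overrides the defaults per request. The model tag `llama-3.2-3b-khelavaster` is an assumption for illustration, not the published name.

```python
# Minimal sketch: querying this model through Ollama's local REST API.
# Assumes the Ollama server is running on its default port and that the
# model was pulled under the (hypothetical) tag "llama-3.2-3b-khelavaster".
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama-3.2-3b-khelavaster",  # assumed tag
        "prompt": "Summarize the SCE merge method in one sentence.",
        "stream": False,
        # Mirrors the stop tokens from the params block above; passing them
        # here overrides the defaults baked into the Modelfile.
        "options": {
            "stop": ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"],
        },
    },
    timeout=120,
)
print(response.json()["response"])
```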
template · 1.4kB
<|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
{{ if .System }}{
Readme
Experimental merge of multiple Llama 3.2 3B models, guided by MoonRide-Index-v7.
Original model: Llama-3.2-3B-Khelavaster, GGUF: Llama-3.2-3B-Khelavaster-GGUF.
Made with the following mergekit configuration:
models:
  - model: bunnycore/Llama-3.2-3B-Mix-Skill
  - model: bunnycore/Llama-3.2-3B-Sci-Think
  - model: FuseAI/FuseChat-Llama-3.2-3B-Instruct
  - model: theprint/ReWiz-Llama-3.2-3B
base_model: meta-llama/Llama-3.2-3B
tokenizer:
  source: meta-llama/Llama-3.2-3B-Instruct
merge_method: sce
parameters:
  normalize: true
dtype: float32
out_dtype: float16
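To reproduce the merge, the configuration above can be saved to a YAML file and passed to mergekit's `mergekit-yaml` command-line entry point. The sketch below wraps that call in Python; the file name `khelavaster.yml` and the output directory are placeholders, not names used by the original author.

```python
# Minimal sketch: re-running the merge recipe above with mergekit
# (pip install mergekit). File and directory names are placeholders.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",               # mergekit's CLI entry point
        "khelavaster.yml",             # the YAML configuration shown above
        "./Llama-3.2-3B-Khelavaster",  # output directory for the merged model
    ],
    check=True,
)
```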