42 Downloads Updated 1 year ago
88141239c07c · 847MB
{{- if .System }}
<|system|>
{{ .System }}
<|endoftext|>
{{- end }}
<|user|>
{{ .Prompt }}
<|endoftext|>
<|assistant|>
118B
Readme
zephyr-1b-olmo-sft-qlora
https://ritvik19.github.io/small-llms/
This model is a fine-tuned version of allenai/OLMo-1B-hf on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set:
- Loss: 1.3126
(from: https://huggingface.co/Ritvik19/zephyr-1b-olmo-sft-qlora)
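For context, the reported evaluation loss can be converted to perplexity, assuming it is the standard mean token-level cross-entropy in nats (the usual convention for causal LM fine-tuning):

```python
import math

eval_loss = 1.3126  # reported evaluation loss (assumed: mean cross-entropy, nats)
perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:.3f}")  # ≈ 3.716
```

A perplexity near 3.7 means the model is, on average, about as uncertain as a uniform choice among roughly four tokens at each step on the evaluation set.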
Model conversion by https://huggingface.co/Felladrin/gguf-zephyr-1b-olmo-sft-qlora