This repository contains the GGUF version of the yarenty/llama32-datafusion-instruct model, quantized for efficient inference on CPU and other compatible hardware.
For full details on the model, including its training procedure, data, intended use, and limitations, please see the full model card.
The provided file is quantized to Q4_K_M. This model follows the same instruction prompt template as the base model:
```
### Instruction:
{Your question or instruction here}

### Response:
```
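When calling the model programmatically rather than through a chat frontend, the template can be applied with a small helper. This is a minimal sketch; the exact blank-line spacing between the sections is an assumption, so check the base model card if outputs look off:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the model's instruction/response template.

    Note: the blank line before '### Response:' is an assumption based on
    common Alpaca-style templates.
    """
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("How do I register a CSV file as a table in DataFusion?")
print(prompt)
```

The model's completion is then everything generated after the `### Response:` marker.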
These files are compatible with tools like llama.cpp and Ollama.
```bash
ollama pull jaro/llama32-datafusion-instruct
ollama run jaro/llama32-datafusion-instruct "How do I use the Ballista scheduler?"
```
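With llama.cpp, an invocation along these lines should work; this is a sketch, and the GGUF filename here is an assumption — substitute the path of the file you actually downloaded:

```bash
# Run one prompt through llama.cpp's CLI.
# -e processes the \n escapes so the prompt matches the template above.
llama-cli -m ./llama32-datafusion-instruct.Q4_K_M.gguf -e \
  -p "### Instruction:\nHow do I use the Ballista scheduler?\n\n### Response:\n"
```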
If you use this model, please cite the original base model:
```bibtex
@misc{yarenty_2025_llama32_datafusion_instruct,
  author       = {yarenty},
  title        = {Llama 3.2 DataFusion Instruct},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/yarenty/llama32-datafusion-instruct}}
}
```
For questions or feedback, please open an issue on the Hugging Face repository or the source GitHub repository.