Yi model fine-tuned for RAG by llmware
157 Pulls Updated 11 months ago
b70257a90459 · 3.7GB
dragon-yi-6b-v0 is part of the dRAGon (“Delivering RAG On …”) model series, RAG-instruct trained on top of a Yi-6B base model.
DRAGON models have been fine-tuned with the specific objective of fact-based question answering over complex business and legal documents, with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.
Note: These models are tuned for RAG, not free-form chat. For an example of the kinds of questions you can ask, see this benchmark.
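Here is a rough sketch of what a RAG-style query against this model might look like through Ollama's local HTTP API. The model tag (`dragon-yi`), the sample passage, and the bare context-plus-question prompt layout are all assumptions for illustration, not anything documented for this build:

```python
import requests

# A made-up passage standing in for whatever your retrieval step returns.
context = (
    "The services agreement was signed on March 3, 2023 and has an "
    "initial term of 24 months, renewing automatically unless either "
    "party gives 60 days' written notice."
)
question = "What is the initial term of the agreement?"

# Query the local Ollama API. The model tag below is an assumption --
# use whatever tag you pulled this model under.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "dragon-yi",
        "prompt": f"{context}\n{question}",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```

Any prompt template defined in the Modelfile is applied by Ollama on top of whatever you send, so the context-plus-question text above is just the raw input to that template.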
I’ve done minimal work on the Modelfile beyond making sure it runs. One thing I noticed in my limited testing is that the default q4_K_M quantization seems noticeably weaker at this task than the q6_K and q8_0 variants.
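If the q6_K and q8_0 builds are published as separate tags, switching to one is just a matter of pulling that tag and pointing the same request at it. The `:q6_K` tag name below is a guess based on the quantization names above, so check the model's tags before relying on it:

```python
# Same request as above, but aimed at a higher-precision build.
# The ":q6_K" tag is an assumption -- confirm the actual tag name
# on the model's tags page before pulling.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "dragon-yi:q6_K",
        "prompt": f"{context}\n{question}",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```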
For more information, see /llmware/dragon-yi-6b-v0 on Hugging Face.