TLDR: A creative reasoning model, molbal/CRA-V1-Guided-7B, is available on Ollama Hub and Hugging Face.
The Guided model (7B) is available on Ollama Hub and Hugging Face. It takes guidance along with the context, which directly steers both the thought process and the final generated text.
For best results, please keep the following prompt format and the task description static.
```
### Task: Understand how the story flows, what motivations the characters have and how they will interact with each other and the world as a step by step thought process before continuing the story. Keep the guidance in mind when writing the story.
### Guidance: {guidance}
### Context:{context}
```
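For example, here is a minimal sketch of calling the model through a locally running Ollama server's `/api/generate` endpoint with this exact template. The model tag and the sample guidance/context strings are placeholders, so adjust them to match the tag you actually pulled:

```python
import requests

# Static task description plus the two variable slots, exactly as in the template above.
PROMPT_TEMPLATE = (
    "### Task: Understand how the story flows, what motivations the characters have "
    "and how they will interact with each other and the world as a step by step "
    "thought process before continuing the story. Keep the guidance in mind when "
    "writing the story.\n"
    "### Guidance: {guidance}\n"
    "### Context:{context}"
)

def continue_story(guidance: str, context: str,
                   model: str = "molbal/CRA-V1-Guided-7B") -> str:
    """Send one guided continuation request to a default local Ollama install."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": PROMPT_TEMPLATE.format(guidance=guidance, context=context),
            "stream": False,
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    # Hypothetical example inputs, purely for illustration.
    reply = continue_story(
        guidance="Introduce a quiet moment of doubt before the confrontation.",
        context="The rain had not stopped since Mara reached the harbor.",
    )
    print(reply)
```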
The model will reliably respond in the following format:
```
<reasoning>
Chain of thought.
</reasoning>
<answer>
Text completion
</answer>
```
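If you want to post-process the output, a small helper along these lines can split the reply into its two parts; it assumes both tag pairs appear exactly once, as in the format above:

```python
import re

def split_reply(raw: str) -> tuple[str, str]:
    """Extract the chain of thought and the continuation from the tagged reply."""
    reasoning = re.search(r"<reasoning>(.*?)</reasoning>", raw, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", raw, re.DOTALL)
    if not (reasoning and answer):
        raise ValueError("Reply did not follow the expected <reasoning>/<answer> format")
    return reasoning.group(1).strip(), answer.group(1).strip()
```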
This post presents a methodology for fine-tuning large language models to improve context-aware story continuation by incorporating reasoning steps. The approach leverages publicly available books from the Project Gutenberg corpus, processes them into structured training data, and fine-tunes models like Qwen2.5 Instruct using a cost-effective pipeline (qLoRA). The resulting models demonstrate improved story continuation capabilities, generating a few sentences at a time while maintaining narrative coherence. The fine-tuned models are made available in GGUF format for accessibility and experimentation. This work is planned to be part of writer-assistant tools (to be developed and published later) and encourages community feedback for further refinement.
While text continuation is literally the core task of an LLM, story continuation remains challenging: it requires understanding narrative context, characters’ motivations, and plot progression. Existing models can generate text, but they often struggle to advance the story by the right amount when continuing it; they either do nothing to move the plot forward or move it too far in too little text. This post introduces a fine-tuning methodology that combines reasoning steps with story continuation, enabling models to better understand context and produce more coherent outputs. The approach is designed to be cost-effective, leveraging free and low-cost resources while using only public-domain or synthetic training data.
The fine-tuned models demonstrated improvements in story continuation tasks:
- Contextual Understanding: The models effectively used reasoning steps to understand narrative context before generating continuations.
- Coherence: Generated continuations were more coherent and aligned with the story’s flow compared to baseline models.
- Efficiency: The 7B model with a 16k context fully offloads to my laptop’s GPU (RTX 3080 8GB).
- Cost-Effective: The use of free and low-cost resources makes the approach accessible to a wide audience.
- Scalable: The methodology can be applied to larger datasets and models for further improvements.
- Practical: The fine-tuned models are lightweight and compatible with consumer hardware, enabling real-world applications.
Training data: the random-books training dataset is published at https://huggingface.co/datasets/molbal/reasoning-story-completion
Note: For the published models I cherry-picked books to serve as the corpus, including some of my own unpublished writing.
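To experiment with the published dataset, it can be loaded with the Hugging Face `datasets` library. The split name below is an assumption, so check the dataset card for the actual splits and column names:

```python
from datasets import load_dataset

# Load the published random-books dataset from the Hugging Face Hub.
# The "train" split is assumed here; inspect the dataset card if it differs.
ds = load_dataset("molbal/reasoning-story-completion", split="train")

print(ds)     # shows the actual column names and row count
print(ds[0])  # inspect one training example
```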
- Dataset Bias: The use of pre-LLM-era books may introduce biases or outdated language patterns.
- Reasoning Quality: The quality of the generated reasoning depends on the output of the DeepSeek V3 model, which may carry its own biases and imperfections.