ollama run molbal/cra-v1-32b:q4_k_m
This post presents a methodology for fine-tuning large language models to improve context-aware story continuation by incorporating explicit reasoning steps. The approach draws on public-domain books from the Project Gutenberg corpus, processes them into structured training data, and fine-tunes models such as Qwen2.5 Instruct (7B and 32B) with a cost-effective qLoRA pipeline. The resulting models show improved story continuation, generating a few sentences at a time while maintaining narrative coherence, and are released in GGUF format for accessibility and experimentation. This work is planned to become part of writer-assistant tools (to be developed and published later), and community feedback is encouraged for further refinement.
While text continuation is literally the core function of LLMs, story continuation remains a challenging task: it requires understanding narrative context, character motivations, and plot progression. Existing models can generate text, but they often fail to advance the story by just the right amount when continuing it; either they do nothing to move the plot forward, or they move it too far in too little space. This post introduces a fine-tuning methodology that combines reasoning steps with story continuation, enabling models to better understand context and produce more coherent output. The approach is designed to be cost-effective, relying on free or low-cost resources and using only public-domain or synthetic training data.
The fine-tuned models demonstrated improvements in story continuation tasks:

- Contextual Understanding: The models effectively used reasoning steps to understand narrative context before generating continuations.
- Coherence: Generated continuations were more coherent and better aligned with the story's flow than those of baseline models.
- Efficiency: The 7B model with a 16k context fully offloads to my laptop's GPU (RTX 3080 8GB) and manages
I invite the community to try the fine-tuned models and provide feedback. The models are available on Ollama Hub (7B, 32B) and Hugging Face (7B, 32B).
Note: The 32B F16 version is not uploaded to Hugging Face, only to the Ollama Hub.
For best results, keep to the following prompt format, and do not omit the System part.
### System: You are a writer’s assistant.
### Task: Understand how the story flows, what motivations the characters have and how they will interact with each other and the world as a step by step thought process before continuing the story.
### Context:
{context}
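As a sketch, the prompt above can be assembled and sent to a locally running Ollama server via its `/api/generate` REST endpoint. The `build_prompt` and `continue_story` helper names are illustrative, not part of the model release:

```python
import json
import urllib.request

MODEL = "molbal/cra-v1-32b:q4_k_m"  # or the 7B variant

def build_prompt(context: str) -> str:
    """Assemble the prompt in the format the model was fine-tuned on."""
    return (
        "### System: You are a writer's assistant.\n\n"
        "### Task: Understand how the story flows, what motivations the "
        "characters have and how they will interact with each other and the "
        "world as a step by step thought process before continuing the story.\n\n"
        "### Context:\n"
        f"{context}"
    )

def continue_story(context: str, host: str = "http://localhost:11434") -> str:
    """Send the prompt to Ollama's /api/generate endpoint (non-streaming)."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": build_prompt(context),
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
# print(continue_story("The rain had not stopped for three days."))
```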
The model will reliably respond in the following format:
<reasoning>
Chain of thought.
</reasoning>
<answer>
Text completion
</answer>
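Since the continuation is wrapped in tags, a downstream tool will usually want to strip the chain of thought and keep only the answer. A minimal sketch for splitting a reply into its two parts (the `parse_response` helper is illustrative; only the tag names come from the format above):

```python
import re

def parse_response(text: str) -> tuple[str, str]:
    """Extract the chain of thought and the continuation from a model reply.

    Returns (reasoning, answer); either part is empty if its tag is missing.
    """
    def _extract(tag: str) -> str:
        # Non-greedy match so multiple tag pairs would not be merged.
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        return match.group(1).strip() if match else ""

    return _extract("reasoning"), _extract("answer")

reply = (
    "<reasoning>\nThe scene is tense; Mara wants to leave.\n</reasoning>\n"
    "<answer>\nMara reached for the door.\n</answer>"
)
reasoning, answer = parse_response(reply)
# reasoning == "The scene is tense; Mara wants to leave."
# answer == "Mara reached for the door."
```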