Using Large Language Models (LLMs) in education presents unique challenges. LLMs are typically designed to provide direct answers to questions, which can hinder students' critical thinking and self-discovery. To address this, we fine-tune LLMs to facilitate Socratic interactions: instead of giving straightforward answers, the model guides students to explore and find the answers themselves. We achieve this through Direct Preference Optimization (DPO). We test our approach on diverse datasets, including educational materials and Socratic dialogues. Using advanced models such as GPT-4o for evaluation, we show that DPO successfully fine-tunes LLMs for Socratic dialogue, enhancing their educational value.
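As a rough illustration of the DPO step, the sketch below uses the Hugging Face TRL library's `DPOTrainer` on a preference dataset with `prompt`/`chosen`/`rejected` columns. The base model name, dataset path, and hyperparameters here are placeholders, not the project's actual configuration; see the GitHub repository linked below for the real training pipeline.

```python
# Hypothetical DPO fine-tuning sketch using TRL; names and paths are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "your-base-model"  # placeholder: any instruction-tuned causal LM
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Preference data: each row holds a "prompt", a "chosen" Socratic reply,
# and a "rejected" direct-answer reply.
dataset = load_dataset("json", data_files="socratic_preferences.json", split="train")

args = DPOConfig(
    output_dir="socratic-dpo",
    beta=0.1,                        # strength of the KL penalty toward the reference model
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,      # older TRL versions take `tokenizer=` instead
)
trainer.train()
trainer.save_model("socratic-dpo")
```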
Check out the training pipeline at GitHub - socratic-llm.
The original model is available at the HuggingFace Hub.
You can learn more about our project at Fine Tuning a Large Language Model for Socratic Interactions, or read our paper.