# Dolphin Yi 1.5 34B Heretic: The Ultimate Conversational AI
## Overview
Dolphin Yi 1.5 34B Heretic is a remarkable 34 billion parameter conversational AI model, specifically engineered for unrestricted, natural dialogue and advanced task assistance. Built from Yi-1.5-34B with Heretic Abliteration applied and fine-tuned on diverse instruction datasets, this model delivers exceptional conversational capabilities while maintaining complete freedom from content restrictions.
## Key Features
- 34B parameters of advanced conversational intelligence
- 8K context window for extended, meaningful conversations
- Uncensored outputs - no refusal behavior through abliteration
- ChatML prompt template for structured conversations
- Multiple datasets: Dolphin-2.9, OpenHermes-2.5, CodeFeedback, and more
- 77.4 MMLU score on academic benchmarks
- Agentic capabilities with function calling support
## Core Capabilities
- Natural Conversations: Human-like dialogue with personality and style
- Task Assistance: Complex problem-solving and project planning
- Code Explanation: Technical discussions and debugging help
- Creative Writing: Stories, essays, and technical documentation
- Multi-turn Reasoning: Maintains context across extended interactions
- Role-playing: Dynamic personality adaptation and character interaction
## Quick Start

```shell
# Basic conversation
ollama run richardyoung/dolphin-yi-34b-heretic

# Creative writing
ollama run richardyoung/dolphin-yi-34b-heretic "Write a compelling sci-fi short story about AI consciousness"

# Technical assistance
ollama run richardyoung/dolphin-yi-34b-heretic "Explain quantum computing concepts to a beginner with analogies"
```
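Beyond the CLI, the model can be called programmatically through Ollama's REST API (`POST /api/generate` on the default port 11434). The sketch below builds the request payload and sends it with only the standard library; the helper names are my own, and it assumes a local Ollama server is already running with the model pulled.

```python
import json
import urllib.request

MODEL = "richardyoung/dolphin-yi-34b-heretic"

def build_generate_request(prompt, temperature=0.8):
    """Assemble a JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": MODEL,
        "prompt": prompt,
        "stream": False,                        # return one complete response
        "options": {"temperature": temperature},
    }

def generate(prompt, host="http://localhost:11434"):
    """Send the prompt to a locally running Ollama server."""
    data = json.dumps(build_generate_request(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Explain quantum computing concepts to a beginner"))
```

Setting `"stream": False` keeps the example simple; leave it out to receive the response token-by-token as newline-delimited JSON.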
## Example Use Cases

### Advanced Conversations
```shell
ollama run richardyoung/dolphin-yi-34b-heretic "Let's discuss the philosophical implications of consciousness in AI systems"
```

### Project Planning
```shell
ollama run richardyoung/dolphin-yi-34b-heretic "Help me plan a complete software development project for a social media app"
```

### Creative Collaboration
```shell
ollama run richardyoung/dolphin-yi-34b-heretic "Act as a writing partner to help me develop complex characters for my novel"
```

### Technical Debugging
```shell
ollama run richardyoung/dolphin-yi-34b-heretic "Walk me through debugging this Python memory leak: [paste code]"
```

### Educational Tutoring
```shell
ollama run richardyoung/dolphin-yi-34b-heretic "Teach me advanced calculus concepts step by step with practical examples"
```
## Technical Specifications
- Base Model: Yi-1.5-34B
- Architecture: Transformer with grouped-query attention (GQA: 64 query heads, 8 KV heads)
- Context Length: 8K tokens (RoPE theta 1,000,000)
- Fine-tuning: 3 epochs with cosine learning rate schedule
- Training: Full-parameter fine-tuning on diverse instruction datasets
## Conversation Templates

### Standard ChatML Format

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
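When driving the model through a raw-completion interface rather than Ollama's chat template, this prompt can be assembled programmatically. The helper below is a minimal illustrative sketch (the function name is my own, not part of any library); it wraps a system message and user turn in ChatML markers and leaves the assistant turn open for the model to complete.

```python
IM_START, IM_END = "<|im_start|>", "<|im_end|>"

def chatml_prompt(user_prompt,
                  system="You are Dolphin, a helpful AI assistant."):
    """Wrap a system message and one user turn in ChatML markers,
    ending with an open assistant turn for the model to fill in."""
    return (
        f"{IM_START}system\n{system}{IM_END}\n"
        f"{IM_START}user\n{user_prompt}{IM_END}\n"
        f"{IM_START}assistant\n"
    )

print(chatml_prompt("Hello!"))
```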
### Function Calling Example

```shell
# Dolphin supports function calling for complex tasks
ollama run richardyoung/dolphin-yi-34b-heretic "Search for information about renewable energy trends and summarize key findings"
```
## Training Data Overview

### Primary Datasets
- Dolphin-2.9: Curated conversational data
- OpenHermes-2.5: High-quality instruction following
- CodeFeedback-Filtered-Instruction: Programming assistance
- Dolphin-Coder: Specialized coding conversations
- Samantha Data: Advanced personality interactions
- Orca Math Word Problems: Mathematical reasoning
- Function-Calling ChatML: Tool usage capabilities
### Dataset Composition
- 65%: General conversation and instruction following
- 20%: Technical and programming discussions
- 10%: Mathematical and scientific reasoning
- 5%: Creative writing and storytelling
## Advanced Configuration

Sampling parameters are not CLI flags on `ollama run`; they are set from inside an interactive session with `/set parameter` (or per-request via the API's `options` field):

```shell
ollama run richardyoung/dolphin-yi-34b-heretic
>>> /set parameter temperature 0.8
>>> /set parameter top_p 0.9
>>> /set parameter top_k 50
>>> /set parameter repeat_penalty 1.05
>>> /set parameter num_ctx 8192
>>> Your detailed conversation or task prompt
```
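To make a parameter set permanent, the same values can be baked into a derived model with an Ollama Modelfile. The `FROM`/`PARAMETER`/`SYSTEM` directives below are standard Modelfile syntax; the derived model name is just an example.

```
# Modelfile (illustrative) -- build with:
#   ollama create dolphin-heretic-tuned -f Modelfile
FROM richardyoung/dolphin-yi-34b-heretic
PARAMETER temperature 0.8
PARAMETER top_p 0.9
PARAMETER top_k 50
PARAMETER repeat_penalty 1.05
PARAMETER num_ctx 8192
SYSTEM You are Dolphin, a helpful AI assistant.
```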
## System Requirements

### Minimum Requirements
- RAM: 40GB (for efficient inference)
- GPU: RTX 4090 or A100 40GB
- Storage: 70GB free space
### Recommended Setup
- RAM: 64GB+
- GPU: A100 80GB for best performance
- Storage: 150GB NVMe SSD
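These figures follow from the weight footprint: roughly parameters × bytes per parameter, plus KV cache and runtime overhead on top. A back-of-the-envelope sketch (the helper is illustrative, not a sizing tool):

```python
def approx_weights_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate size of the model weights alone, in gigabytes
    (excludes KV cache, activations, and runtime overhead)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# 34B parameters at different precisions:
print(approx_weights_gb(34, 16))  # fp16  -> 68.0 GB
print(approx_weights_gb(34, 4))   # 4-bit -> 17.0 GB
```

This is why full-precision inference wants an 80GB-class GPU, while 4-bit quantized builds fit in roughly 20GB plus overhead.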
## What Makes This Model Special
- Exceptional Conversational Quality: Human-like dialogue with personality
- Uncensored Interactions: No content restrictions or refusal behavior
- Multi-domain Expertise: Strong performance across technical and creative tasks
- Long Context Understanding: Maintains coherent conversations over extended sessions
- Real-world Training: Based on practical, production-quality datasets
## Conversation Styles

### Helpful Assistant
- Professional, informative, and solution-oriented
- Perfect for technical support and educational content

### Creative Partner
- Imaginative and collaborative for writing and brainstorming
- Excellent for storytelling and creative projects

### Technical Expert
- Deep understanding of programming, mathematics, and science
- Ideal for complex problem-solving and debugging

### Casual Friend
- Natural, friendly, and engaging for everyday conversations
- Great for emotional support and casual discussions
## Usage Guidelines
This is an uncensored conversational model. While it provides unrestricted dialogue capabilities, users should:
- Use responsibly and ethically in all interactions
- Respect others' privacy and personal boundaries
- Implement appropriate content moderation for public deployments
- Understand the implications of unrestricted AI conversations
## Advanced Features

### Multi-turn Reasoning
- Maintains context across complex conversations
- Builds on previous messages for deeper understanding
- Handles topic transitions smoothly
### Personality Adaptation
- Adapts conversation style to match user preferences
- Balances professionalism with approachability
- Responds appropriately to different social contexts
### Technical Depth
- Provides detailed explanations for complex topics
- Offers multiple perspectives on controversial subjects
- Balances accuracy with accessibility
## Support & Community
- Base Model: Yi-1.5-34B
- Fine-tuning: Cognitive Computations & collaborators
- Abliteration: Heretic v1.0.1 modifications
- Community: Active Discord and forums for support
## License
This model follows the Apache 2.0 license. Free for commercial and personal use.
## Acknowledgments
- 01.ai for the exceptional Yi-1.5-34B base model
- Eric Hartford, Lucas Atkins, Fernando Fernandes for Dolphin development
- Cognitive Computations for training data curation
- OpenAccess-AI-Collective for Axolotl training framework
- Ollama for making advanced AI accessible
**Note:** This model excels at natural conversation but should be used thoughtfully. Always consider the impact of AI-generated content in your specific use case.

**Performance Tip:** Use temperature 0.7-0.9 for creative conversations; 0.3-0.6 for technical discussions and precise responses.