
# Qwen3-14B Claude 4.5 Opus High-Reasoning (GGUF)

Qwen3-14B-Claude-4.5-Opus-High-Reasoning is a high-performance 14.8B-parameter model optimized for deep logical inference and complex problem-solving. It is fine-tuned from the Qwen3-14B base model on a specialized dataset distilled from Claude 4.5 Opus, with the goal of replicating that model's step-by-step reasoning patterns.

## 🚀 Key Features

* **Distilled Reasoning:** Inherits the high-level logic, planning, and multi-step derivation capabilities of Claude 4.5 Opus.
* **Chain-of-Thought (CoT):** Explicitly trained to use `<thought>` blocks to "think" through problems internally before delivering a final answer, significantly reducing hallucinations.
* **Large Context Window:** Supports up to 40,960 tokens, making it ideal for deep document analysis and long-form coding projects.
* **Optimized Quantization:** Provided in Q4_K_M (9.0GB), offering a perfect balance between intelligence and speed for local deployment on consumer GPUs.
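
Because the model wraps its internal reasoning in `<thought>` blocks, downstream scripts may want to strip that block before displaying the answer. A minimal sketch (the sample response string is hypothetical; only the `<thought>` tag name comes from this model card):

```shell
# Hypothetical response text illustrating the <thought>-block format.
response='<thought>2^2 = 4, so 4 is a perfect square.</thought>Yes, 4 is a perfect square.'

# Strip the reasoning block, keeping only the final answer.
final=$(printf '%s' "$response" | sed 's/<thought>.*<\/thought>//')
echo "$final"
```

Note that `sed` matches greedily here, which is fine for a single reasoning block but would over-delete if a response contained several.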

## 🧠 How to Use

For best results, use a system prompt that encourages the model's internal reasoning process.

### Quick Start

```bash
ollama run bazobehram/qwen3-14b-claude-4.5-opus-high-reasoning
```

### Recommended System Prompt

You can set this in your Modelfile or pass it during API calls:

> "You are a helpful assistant. For every query, provide a detailed, step-by-step reasoning process within the thought block before giving the final answer."
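
For a persistent setup, the prompt can be baked into a Modelfile. A sketch (the `FROM` tag is the one from the Quick Start; `num_ctx` matches the 40,960-token context window this card lists):

```
FROM bazobehram/qwen3-14b-claude-4.5-opus-high-reasoning
PARAMETER num_ctx 40960
SYSTEM """You are a helpful assistant. For every query, provide a detailed, step-by-step reasoning process within the thought block before giving the final answer."""
```

Build and run it under a name of your choosing (`my-reasoner` here is just a placeholder):

```bash
ollama create my-reasoner -f Modelfile
ollama run my-reasoner
```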

## 📊 Model Details

* **Parameters:** 14.8 billion
* **Context Length:** 40,960 tokens
* **Quantization:** Q4_K_M (GGUF)
* **Architecture:** Qwen3
* **Primary Use Cases:** Advanced coding, mathematical proofs, scientific analysis, and logical derivation.
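
When calling the model over Ollama's REST API, the system prompt and context length above go into the request body. A sketch of a `/api/generate` payload (the example prompt is hypothetical; sending it requires a running Ollama server):

```shell
# Build a request body for Ollama's /api/generate endpoint, setting the
# recommended system prompt and the full 40,960-token context window.
cat > payload.json <<'EOF'
{
  "model": "bazobehram/qwen3-14b-claude-4.5-opus-high-reasoning",
  "system": "You are a helpful assistant. For every query, provide a detailed, step-by-step reasoning process within the thought block before giving the final answer.",
  "prompt": "Is 2^31 - 1 prime?",
  "options": { "num_ctx": 40960 }
}
EOF

# Send it once the server is up (uncomment to use):
# curl -s http://localhost:11434/api/generate -d @payload.json
```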

## 💻 Hardware Requirements

* **VRAM:** 12 GB+ recommended; 8 GB minimum with context offloading.
* **System RAM:** 16 GB+
* **GPU:** Works best on NVIDIA RTX 3060/4070 (12 GB) or Apple M-series (16 GB+ unified memory).

โš–๏ธ Attribution & Credits

  • Original Model & Dataset: Developed by TeichAI (Hugging Face Repository)
  • Base Model: unsloth/Qwen3-14B
  • Distillation Source: Claude 4.5 Opus (Anthropic)
  • Ported to Ollama by: bazobehram