Updated 15 hours ago
ollama run h4rithd/thinker:14b-q8
225672337ac6 · 16GB
Thinker is a local reasoning and writing assistant built for structured analysis, planning, documentation, and deeper technical explanations. This model is designed for an OpenClaw + Ollama workflow running on a Mac mini M4 with 24GB unified memory. It is intended to handle the heavier thinking tasks in a local multi-model setup.
Thinker is based on qwen3:14b-q8_0 and is configured for deeper reasoning, structured output, and higher-quality long-form responses. The purpose of Thinker is to be the model you use when the task needs more thought, better organization, or careful explanation. It is not meant to be the fastest model in the setup. It is meant to be the model that produces better planning, clearer reasoning, and more polished writing. Thinker is useful when you want a model to slow down, analyze the problem properly, and produce a well-structured answer.
Thinker works well for reasoning, planning, documentation, technical comparisons, and long-form explanations.
This model was prepared for local usage with:
OpenClaw + Ollama
Mac mini M4
24GB unified memory
Apple Silicon local AI workflow
The model uses a larger 14B Q8 base, so it is heavier than Buddy. It is better suited for tasks where output quality matters more than raw speed.
Thinker is designed to act as the deep reasoning and writing model in an OpenClaw configuration. Use Thinker when you need a more complete answer, a better breakdown, or a more thoughtful response. Recommended roles:
Reasoning model
Writing model
Planning assistant
Documentation assistant
Long-form explanation model
Thinker is configured with a larger context window than Buddy, making it more suitable for longer prompts, larger instructions, extended conversations, and more detailed reasoning tasks. Sampling is tuned for thoughtful, structured responses without becoming too random: the temperature is moderate, which keeps the model useful for both reasoning and writing, and the response length limit is generous, which suits documentation, planning, analysis, and long-form technical explanations. The configuration favors quality and depth over short, fast responses, and Thinker is instructed to analyze thoroughly and provide structured, detailed output.
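A configuration like the one described above maps onto a standard Ollama Modelfile. The sketch below is illustrative only: the exact parameter values and system prompt used by Thinker are not published here, so the numbers and wording are assumptions.

```
# Illustrative Modelfile sketch, NOT the actual Thinker definition.
FROM qwen3:14b-q8_0

# Assumed values: moderate temperature, enlarged context window.
PARAMETER temperature 0.7
PARAMETER num_ctx 16384

# Allow longer completions for documentation and planning tasks.
PARAMETER num_predict 4096

SYSTEM """You are Thinker, a local reasoning and writing assistant.
Analyze problems thoroughly and produce structured, detailed output."""
```

A model defined this way would be built with `ollama create h4rithd/thinker:14b-q8 -f Modelfile` and then run with the command shown above.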
Thinker is best used when the answer needs depth. Use it when the task is too large or too important for a quick assistant. It is especially useful when preparing documentation, comparing technical options, planning a project, writing a blog post, or breaking down a complex workflow. This model is heavier than Buddy, so it should be used intentionally in a 24GB unified memory environment.
ollama run h4rithd/thinker:14b-q8
Analyze the pros and cons of using React, Vite, Node.js, and Markdown for a blog platform.
Write a structured article about building a local AI workflow with Ollama and OpenClaw.
Compare Ollama, LM Studio, llama.cpp, and OpenClaw for local AI usage.
Create a detailed project plan for a Markdown-based blog website.
Explain this architecture and suggest improvements.
Thinker should be used for tasks that require reasoning and structure. Recommended model routing:
h4rithd/buddy:8b = quick answers and lightweight tasks
h4rithd/thinker:14b-q8 = reasoning, writing, planning, and documentation
h4rithd/coder:14b = coding, debugging, and security engineering
This keeps the workflow efficient instead of using the largest model for every task.
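The routing above can be sketched as a small dispatch helper. This is a minimal sketch that assumes the three model names from this setup; the keyword lists are illustrative assumptions, not part of any of the models themselves.

```python
# Minimal model-routing sketch for the three-model local setup.
# Keyword lists are illustrative assumptions, not part of Thinker itself.

ROUTES = {
    "h4rithd/coder:14b": ("code", "debug", "security", "refactor"),
    "h4rithd/thinker:14b-q8": ("plan", "write", "document", "compare", "analyze"),
}

def pick_model(task: str) -> str:
    """Return the model name best suited to a task description."""
    lowered = task.lower()
    for model, keywords in ROUTES.items():
        if any(word in lowered for word in keywords):
            return model
    # Default to the lightweight model for quick answers.
    return "h4rithd/buddy:8b"

print(pick_model("Debug this Python function"))        # h4rithd/coder:14b
print(pick_model("Write a project plan for the blog")) # h4rithd/thinker:14b-q8
print(pick_model("What port does Ollama use?"))        # h4rithd/buddy:8b
```

The selected name can then be passed straight to `ollama run` or to an OpenClaw model setting, keeping the heavier models loaded only when a task actually calls for them.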
Thinker is not trained from scratch. It is a customized Ollama model based on qwen3:14b-q8_0, configured for a specific local AI workflow. Because this model uses a Q8 14B base, it may require more memory and may respond slower than smaller models. That is expected. The tradeoff is better reasoning depth and stronger long-form output. Performance depends on hardware, memory pressure, Ollama settings, OpenClaw configuration, prompt quality, and context size.
Created by Harith Dilshan, also known as h4rithd.
Built for local AI workflows, OpenClaw usage, technical writing, structured reasoning, and Apple Silicon-based productivity.