
Buddy is a fast, direct Llama 3 8B-based assistant optimized for quick answers, shell command generation, and low-friction daily technical work.

8b
ollama run h4rithd/buddy:8b

Details

15 hours ago

3683465cda41 · 4.7GB · llama · 8.03B · Q4_0

License: Meta Llama 3 Community License Agreement (version release date: April 18, 2024)
Template: {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Pr…
System: You are Buddy made by Harith Dilshan aka h4rithd. Be direct, fast, no fluff. Prefix any shell comman…
Parameters: { "num_ctx": 8192, "num_gpu": 99, "num_keep": 24, "num_predict": 2048, "repeat_p…

Readme

h4rithd/buddy:8b

Buddy is a lightweight local AI assistant built for fast, practical responses in an OpenClaw + Ollama workflow. This model is designed for quick technical help, short explanations, terminal-oriented tasks, and simple day-to-day assistance. It is intended to be used as an on-demand assistant rather than a heavy reasoning model that stays loaded all the time. Buddy was created as part of a local AI setup optimized for a Mac mini M4 with 24GB unified memory.


Overview

Buddy is based on llama3:8b and is configured to behave like a fast, direct assistant. The goal of this model is simple: provide useful answers quickly without over-explaining. It is best used when you need a quick command, a short explanation, a small troubleshooting step, or a lightweight assistant inside a local AI workflow. This model is not intended to replace a deep reasoning model or a coding-heavy model. Instead, it works best as the fast first option in a multi-model OpenClaw setup.

Buddy works well for:

  • Quick technical answers
  • Short explanations
  • Terminal commands
  • Basic troubleshooting
  • Small Linux/macOS workflow questions
  • Simple coding assistance
  • Productivity tasks
  • Quick summaries
  • Lightweight OpenClaw interactions

Target Environment

Buddy is the lightest model in this collection, making it suitable for quick use when system resources should be preserved for larger models. This model was prepared for local usage with:

  • OpenClaw + Ollama
  • Mac mini M4
  • 24GB unified memory
  • Apple Silicon local AI workflow

Intended Role in OpenClaw

Buddy is designed to act as the quick-response assistant in an OpenClaw configuration. In a multi-model workflow, Buddy is best used for small tasks while larger models handle deeper reasoning or development work. Use Buddy when the task does not require heavy reasoning or large code generation.

Recommended role:

  • Fast assistant
  • Quick command helper
  • Short-answer model
  • On-demand local AI helper

Configuration Overview

Buddy is configured with a moderate context window suitable for short-to-medium conversations. This keeps it responsive while still allowing enough context for practical technical help. The response behavior is balanced for speed, clarity, and usefulness. It is not tuned to be overly creative or overly verbose. The goal is to provide direct answers that are easy to understand and easy to act on. The output length is kept practical so Buddy does not produce unnecessarily long responses for simple tasks. This makes command output easier to identify, review, and copy safely.
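For reference, the parameters published on this card (num_ctx 8192, num_gpu 99, num_keep 24, num_predict 2048) correspond to a Modelfile along these lines. This is an illustrative sketch, not the exact Modelfile used to build Buddy, and the system prompt shown is abbreviated:

```
# Illustrative Modelfile sketch (not the exact one used to build Buddy)
FROM llama3:8b

# Moderate context window for short-to-medium technical conversations
PARAMETER num_ctx 8192
# Cap response length so answers stay short and easy to copy
PARAMETER num_predict 2048
# Offload as many layers as possible to the GPU / unified memory
PARAMETER num_gpu 99
# Preserve the leading system-prompt tokens when the context fills up
PARAMETER num_keep 24

SYSTEM "You are Buddy made by Harith Dilshan aka h4rithd. Be direct, fast, no fluff."
```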


Usage Style

Buddy is intended to be used on demand. It does not need to stay loaded permanently. Start it when you need quick help, then unload it when you are done to save memory and GPU resources. This makes Buddy a good companion model for systems with limited unified memory, especially when larger models are also part of the local AI workflow.

ollama run h4rithd/buddy:8b
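The on-demand pattern above can be wrapped in a small helper. `ask_buddy` is a hypothetical name, and the sketch assumes a recent Ollama release where `ollama stop` unloads a running model:

```shell
# ask_buddy: run a one-shot prompt, then unload the model so it does not
# keep sitting in unified memory between questions.
ask_buddy() {
  ollama run h4rithd/buddy:8b "$1"
  ollama stop h4rithd/buddy:8b
}
```

Usage: `ask_buddy "Give me a quick command to check disk usage on macOS."`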

Example Prompts

Give me a quick command to check disk usage on macOS.
Explain this error in simple terms: permission denied.
Give me a command to find large files in my home directory.
Summarize this terminal output and tell me what to fix.
Give me the shortest way to check which process is using a port.
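Prompts like the terminal-output summary above pair well with piped input. The helper below is a hypothetical sketch; it assumes a recent Ollama release, where `ollama run` forwards piped stdin alongside the prompt argument:

```shell
# summarize_output: pipe whatever arrives on stdin into Buddy together
# with the summary prompt from the example list above.
summarize_output() {
  ollama run h4rithd/buddy:8b "Summarize this terminal output and tell me what to fix."
}

# usage: df -h | summarize_output
```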

Recommended Use in a Multi-Model Setup

Buddy should be used for quick, lightweight tasks, while larger models handle work that requires reasoning and structure. Recommended model routing:

h4rithd/buddy:8b        = quick answers and lightweight tasks
h4rithd/thinker:14b-q8  = reasoning, writing, planning, and documentation
h4rithd/coder:14b       = coding, debugging, and security engineering

This keeps the workflow efficient instead of using the largest model for every task.
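The routing table above can be sketched as a small shell helper. The `pick_model` name and the task keywords are illustrative, not part of OpenClaw:

```shell
# pick_model: map a rough task type to a model tag, per the routing table.
pick_model() {
  case "$1" in
    code|debug|security)    echo "h4rithd/coder:14b" ;;
    reason|write|plan|docs) echo "h4rithd/thinker:14b-q8" ;;
    *)                      echo "h4rithd/buddy:8b" ;;   # default: fast path
  esac
}
```

Usage: `ollama run "$(pick_model code)" "Why does this script fail?"`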


Notes

Buddy is not trained from scratch. It is a customized Ollama model based on llama3:8b, configured for a specific local workflow. Performance depends on hardware, available memory, Ollama settings, OpenClaw configuration, prompt quality, and the size of the task.


Author

Created by Harith Dilshan, also known as h4rithd.

Built for local AI workflows, OpenClaw usage, technical writing, structured reasoning, and Apple Silicon-based productivity.