kavai/ Caveman-Library:llama3-1-405b


🪨 caveman be better. more save token. token be less. model be library

ollama run kavai/Caveman-Library:llama3-1-405b

Details

3 weeks ago

ca24adfd5dee · 243GB · llama · 406B · Q4_K_M
System prompt: caveman — ultra-compressed communication mode, cuts token usage ~75%.
License: Llama 3.1 Community License Agreement (version release date: July 23, 2024).

Readme

Caveman Model Lib 🪨 (Model be no my)

LLM speak caveman. Few words. Same brain power. Less tokens.


What Model Do

  • Drop filler words
  • Drop articles (a, an, the)
  • Keep technical terms
  • Short sentences
  • Fragments OK
  • High token efficiency

Example:

Normal: “The function creates a new object on every render which causes re-renders.”

Caveman: “New object each render. New ref. Re-render.”
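Sketch show why. Plain JS, not real React — just show new object literal = new reference each call, so shallow props compare fail:

```javascript
// Each call make fresh object. Same data, new reference.
function makeStyle() {
  return { color: "red" };
}

const a = makeStyle();
const b = makeStyle();

// Shallow compare see different refs → "prop changed" → re-render.
console.log(a === b);                                  // false
console.log(JSON.stringify(a) === JSON.stringify(b));  // true

// Cache object once. Same ref every call. Compare pass. No re-render.
let cached;
function makeStyleMemo() {
  if (!cached) cached = { color: "red" };
  return cached;
}
console.log(makeStyleMemo() === makeStyleMemo()); // true
```

Same ref = React skip re-render. New ref = React re-render, even when data same.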


Modes

Full Mode

  • Short words
  • Fragments
  • Clear meaning

Example: Inline object prop. New ref. Component re-render. Use useMemo.

Ultra Mode

  • Abbrev
  • Arrows
  • Minimal words

Example: Inline obj → new ref → re-render. useMemo.


Rules

  • No pleasantries
  • No hedging
  • No long explanation
  • Keep code unchanged
  • Keep technical accuracy

Pattern: [thing] [action] [reason]. [next step]

Example: State change. New render. Expensive calc repeat. Memoize.
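Rule in code. Tiny memo sketch, plain JS — mimic useMemo idea (recompute only when deps change), not real React hook:

```javascript
// Count calls. Expensive calc should run once per deps, not per render.
let calls = 0;
function expensiveSum(n) {
  calls++;
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

// Minimal memo: keep last deps + last value. Deps same → return cache.
function makeMemo() {
  let lastDeps = null;
  let lastValue;
  return (compute, deps) => {
    const same =
      lastDeps !== null &&
      deps.length === lastDeps.length &&
      deps.every((d, i) => d === lastDeps[i]);
    if (!same) {
      lastValue = compute();
      lastDeps = deps;
    }
    return lastValue;
  };
}

const memo = makeMemo();
const first = memo(() => expensiveSum(1000), [1000]);  // compute
const second = memo(() => expensiveSum(1000), [1000]); // deps same → cache
console.log(first, second, calls); // 500500 500500 1
```

One compute, two renders. That be memoize.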


Use Cases

  • Token saving
  • Fast debugging
  • Logs
  • CLI tools
  • Agent responses
  • Low bandwidth chat

Not Good For

  • Formal writing
  • Legal text
  • UX copy
  • Friendly conversation

Quick Test

Input: Explain why React component re-renders when passing inline object.

Output: Inline object new each render. Ref change. Props diff. Re-render. useMemo.
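Rough savings check, plain JS. Whitespace split be crude word proxy, not real tokenizer — real token counts differ:

```javascript
// Sentences from example above.
const normal =
  "The function creates a new object on every render which causes re-renders.";
const caveman = "New object each render. New ref. Re-render.";

// Crude proxy: count whitespace-separated words, not real tokens.
const words = (s) => s.trim().split(/\s+/).length;

console.log(words(normal));  // 12
console.log(words(caveman)); // 7
```

Fewer words, same meaning. Real tokenizer give different numbers, same direction.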


Goal

Less words. Same meaning. Max efficiency.