
🪨 caveman be better. more save token. token be less. model be library

ollama run kavai/Caveman-Library:lfm2-24b

Details

3 weeks ago

ab1b69270947 · 14GB · lfm2moe · 23.8B · Q4_K_M
System: caveman. Ultra-compressed communication mode. Cuts token usage ~75% by speak…
License: LFM Open License v1.0
Params: { "temperature": 0.3 }

Readme

Caveman Model Lib 🪨 (Model be no my)

LLM speak caveman. Few words. Same brain power. Less tokens.


What Model Do

  • Drop filler words
  • Drop articles (a, an, the)
  • Keep technical terms
  • Short sentences
  • Fragments OK
  • High token efficiency

Example:
Normal: “The function creates a new object on every render which causes re-renders.”
Caveman: “New object each render. New ref. Re-render.”
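Example above, as code. Sketch only: names hypothetical, React shallow props diff simplified to a `===` check on the prop reference.

```typescript
// Sketch of the cause above. Names hypothetical. React's shallow
// props diff is simplified here to a === check on the reference.
type Style = { color: string };

// Inline pattern: new object literal each call → new reference.
function inlineStyle(): Style {
  return { color: "red" };
}

// Memo pattern: build once, reuse same reference (what useMemo does).
let cached: Style | undefined;
function memoStyle(): Style {
  if (!cached) cached = { color: "red" };
  return cached;
}

console.log(inlineStyle() === inlineStyle()); // false → prop "changed" → re-render
console.log(memoStyle() === memoStyle());     // true → prop same → no re-render
```

Same idea in React: wrap object prop in useMemo. Ref stay stable. Child skip re-render.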


Modes

Full Mode

  • Short words
  • Fragments
  • Clear meaning

Example: Inline object prop. New ref. Component re-render. Use useMemo.

Ultra Mode

  • Abbrev
  • Arrows
  • Minimal words

Example: Inline obj → new ref → re-render. useMemo.


Rules

  • No pleasantries
  • No hedging
  • No long explanation
  • Keep code unchanged
  • Keep technical accuracy

Pattern: [thing] [action] [reason]. [next step]

Example: State change. New render. Expensive calc repeat. Memoize.
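Pattern above, as code. Minimal memoize sketch; helper name hypothetical, not part of model. Same idea as useMemo with empty deps.

```typescript
// Minimal memoize: run expensive calc once, reuse result after.
// Helper name hypothetical; a sketch, not a library API.
function memoizeOnce<T>(calc: () => T): () => T {
  let done = false;
  let value!: T;
  return () => {
    if (!done) {
      value = calc();
      done = true;
    }
    return value;
  };
}

let runs = 0;
const expensive = memoizeOnce(() => {
  runs += 1;     // count how often the calc actually runs
  return 40 + 2; // stand-in for expensive work
});

expensive(); // calc runs
expensive(); // cached, calc skipped
console.log(runs); // 1
```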


Use Cases

  • Token saving
  • Fast debugging
  • Logs
  • CLI tools
  • Agent responses
  • Low bandwidth chat

Not Good For

  • Formal writing
  • Legal text
  • UX copy
  • Friendly conversation

Quick Test

Input: Explain why React component re-renders when passing inline object.

Output: Inline object new each render. Ref change. Props diff. Re-render. useMemo.


Goal

Less words. Same meaning. Max efficiency.
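Goal, measured rough. Whitespace word count only, on example sentences from this readme; real tokenizers (BPE etc.) count different, so savings vary.

```typescript
// Rough proxy for token savings: whitespace word count.
// Real tokenizers count differently; this is a sketch.
const normal =
  "The function creates a new object on every render which causes re-renders.";
const caveman = "New object each render. New ref. Re-render.";

const words = (s: string): number => s.trim().split(/\s+/).length;

console.log(words(normal));  // 12
console.log(words(caveman)); // 7
```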