Caveman
Cut ~65% of output tokens on average (up to 87%) by having the agent talk like a caveman: full technical accuracy, zero fluff. Intensity levels: lite, full, ultra, and classical Chinese (文言文).
A community skill by Julius Brussee that makes your AI agent communicate like a caveman — stripping articles, filler, hedging, and pleasantries while preserving every byte of technical substance. Benchmarked at 65% average output token savings (up to 87%). Ships with caveman-commit for terse commit messages, caveman-review for one-line PR comments, and caveman-compress which cuts CLAUDE.md / memory file read tokens by ~46% each session. Works on Claude Code, Cursor, Windsurf, Copilot, Cline, Codex, Gemini CLI, and 40+ more agents.
Added Apr 9, 2026
$ npx skills add JuliusBrussee/caveman
What This Skill Can Do
Concrete capabilities you get when you install this skill.
Reduce output tokens ~65% on average (up to 87%) with no loss in technical accuracy
Switch intensity on the fly: lite (drop filler), full (classic caveman), ultra (maximum compression)
文言文 (Wenyan) mode — classical Chinese literary compression for maximum brevity
caveman-commit: generate terse, conventional-commits-style messages ≤50 char subject
caveman-review: one-line PR comments (e.g. L42: 🔴 bug: user null. Add guard.)
caveman-compress: compress CLAUDE.md and memory files ~46% — fewer input tokens every session
Auto-clarity mode for destructive ops, security warnings, and multi-step sequences
Works on Claude Code, Cursor, Windsurf, Copilot, Cline, Codex, Gemini CLI, and 40+ more agents
Real Conversations
See how the Caveman skill responds to actual requests. These are representative examples of what your AI agent will produce.
/caveman
Why is my React component re-rendering every time the parent renders, even though the props haven't changed?
Caveman mode: full. Active every response.
Component re-render each parent render = prop reference change.
Root cause: Object/array/function prop created inline → new reference each render → React shallow compare see different → re-render.
```jsx
import { useMemo } from 'react';

// BAD — new object every render
<Child config={{ timeout: 5000 }} />

// GOOD — stable reference
const config = useMemo(() => ({ timeout: 5000 }), []);
<Child config={config} />
```
Also check:
- useCallback missing on function props
- Context value not memoized → all consumers re-render
- Redux useSelector returning new object each call → add shallowEqual
Quick diagnosis:

```jsx
import { useRef } from 'react';

function Child({ config }) {
  const renders = useRef(0);
  console.log('render #' + ++renders.current, config);
  // ...
}
```
Log shows same value, different reference? → useMemo fix. Value actually changing? → parent has bug.
React DevTools Profiler → "Why did this render?" tab → shows exact prop that changed.
Get Started
How to install and use this skill in your preferred environment.
Skills are designed for AI coding agents (Claude Code, Cursor, Windsurf) and IDE-based workflows where the agent can read files, run scripts, and act on your codebase.
Models & Context
Which AI models and context windows work best with this skill.
Recommended Models
Works on all models — the skill reduces output tokens regardless of model capability. Biggest absolute savings come from verbose models (Opus, GPT-4o) where the uncompressed output is longest. Smaller models (Haiku, GPT-4o-mini) produce shorter output already so savings are proportionally lower.
Context Window
The SKILL.md is tiny (~2KB). Negligible context overhead — this is the point.
Pro tips for best results
Combine skills
Pair with companion skills below for end-to-end coverage.
Works Great With
These skills complement Caveman for end-to-end coverage. Install them together for better results.
Good to Know
Advanced guide and reference material for Caveman. Background, edge cases, and patterns worth understanding.
Contents
Output tokens vs. input tokens
Caveman cuts output tokens — what the model writes back to you. It does not affect input tokens (what you send or what gets loaded from CLAUDE.md, memory files, etc.).
For input token savings, use caveman-compress: it rewrites your CLAUDE.md and memory files into caveman-speak so Claude reads less at session start (~46% savings per file). Both together give you the full picture.
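A hypothetical before/after (illustrative only, not actual caveman-compress output) shows the kind of rewrite it applies to a CLAUDE.md line:

```text
Before:
  Please always make sure to run the full test suite before committing any
  changes, and keep in mind that this repository uses pnpm rather than npm.

After:
  Run tests before commit. Use pnpm, not npm.
```

Same instructions, fewer tokens read at every session start.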
Technical accuracy is not compressed
The skill explicitly preserves: technical terms, error messages (quoted exactly), code blocks, file paths, URLs, version numbers, and warnings. Only prose fluff is removed. A March 2026 paper found that brevity constraints on LLMs can improve accuracy by 26 percentage points on some benchmarks — less rambling, more precise answer.
Auto-clarity for dangerous operations
Caveman automatically suspends compression for:
- Destructive operations (DROP TABLE, rm -rf, irreversible actions)
- Security warnings
- Multi-step sequences where fragment ordering risks misread
After the clear part is done, caveman mode resumes automatically. You don't need to toggle it.
Intensity levels
| Level | Savings vs. normal | Best for |
|---|---|---|
| lite | ~30% | Documentation, specs, design reviews |
| full | ~65% (default) | Daily coding, debugging, Q&A |
| ultra | ~80% | Quick lookups, log triage, when you know the domain |
| wenyan | ~80–85% | Curiosity, showing off, token art |
Level persists until changed or session ends. Set once, forget.
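As an illustrative (not benchmarked) feel for the levels, the same answer might compress like this:

```text
Q: Why does my build fail with ENOENT?

lite:  ENOENT means a file or directory was not found. Check that the path
       referenced in your build script actually exists.
full:  ENOENT = file not found. Check path in build script.
ultra: ENOENT: path missing. Fix build script path.
```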
Works across all agents
The skill was designed for Claude Code but the same SKILL.md content works in Cursor, Windsurf, Copilot, Cline, Codex, Gemini CLI, and 40+ agents via npx skills add. For agents without a hook system (most of them), caveman won't auto-start — say /caveman or "talk like caveman" each session, or paste the always-on snippet into your agent's rules file.
Claude Code: auto-activate every session
Install once and caveman activates automatically at session start via the SessionStart hook in the plugin. No need to type /caveman each time.
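The plugin wires this up for you, but as a sketch of the mechanism: Claude Code runs commands registered under `SessionStart` in settings, and their stdout is added to the session context. The shape below is assumed from Claude Code's hooks configuration; the command and file path are illustrative, not the plugin's actual script:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "cat ~/.claude/skills/caveman/SKILL.md"
          }
        ]
      }
    ]
  }
}
```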
The benchmark numbers are real
The repo includes a reproducible eval harness (evals/) with a proper three-arm control — not just "verbose vs. caveman" but also "terse vs. caveman" — to isolate the skill's contribution from generic terseness. You can run it yourself with the Claude CLI.
Ready to try Caveman?
Install the skill and start saving output tokens in your workflow, on any agent and any IDE.
$ npx skills add JuliusBrussee/caveman