April 5, 2026 · Coding · Open Source · Tool

Caveman: Talk Like Neanderthal, Save 75% Tokens

Anthropic admitted last week that Claude Code users are hitting usage limits way faster than expected. The community response? Talk like a caveman.

Caveman is a Claude Code skill by Julius Brussee that strips every unnecessary word from Claude's output. No articles, no pleasantries, no "I'd be happy to help." Just raw technical content. A React re-rendering explanation that normally takes 69 tokens drops to 19. Across ten real API tasks, the average saving is 65%, with peaks of 87% on bug explanations and error boundary implementations. The trick: only output tokens change. Thinking tokens stay untouched, so reasoning quality is preserved.
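The skill itself is a set of prompt instructions, not a post-processor, but the filler-stripping idea can be sketched in a few lines of Python. This is a toy illustration, not Caveman's actual word list or logic; the `FILLER` set and the word-count proxy for tokens are assumptions:

```python
# Toy sketch of the filler-stripping idea (NOT the actual Caveman skill,
# which instructs the model directly). Word count stands in for tokens.
FILLER = {
    "a", "an", "the", "certainly", "happy", "glad", "please",
    "would", "basically", "essentially", "just", "really",
}

def cavemanize(text: str) -> str:
    """Drop filler words, keeping the technical content."""
    kept = [w for w in text.split() if w.lower().strip(".,!'") not in FILLER]
    return " ".join(kept)

def savings(before: str, after: str) -> float:
    """Approximate percentage reduction, by word count."""
    return 100 * (1 - len(after.split()) / len(before.split()))

verbose = ("I'd be happy to help! The component re-renders because "
           "the state changes on every keystroke.")
terse = cavemanize(verbose)
print(terse)
print(f"{savings(verbose, terse):.0f}% fewer words")
```

A real tokenizer would give different numbers, but the shape of the effect is the same: boilerplate words carry tokens without carrying information.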

The kicker is that brevity might actually make models smarter. Research published in March 2026 found that constraining large models to brief responses improved accuracy by 26 percentage points on certain benchmarks. Less fluff, more signal. The caveman approach forces the model to decide what actually matters before speaking.

Two days after launch, Caveman has 1,400 GitHub stars and hit 621 points on Hacker News, making it the second-hottest story on the front page. In a world where every coding agent session burns tokens like jet fuel, a skill that cuts consumption by 75% without touching code quality is not a novelty. It is infrastructure.

https://github.com/JuliusBrussee/caveman
