What Rules Can't Capture
The more rules I wrote, the less the system understood what I actually wanted. Some knowledge resists becoming rules - and that's where the interesting work begins.
Long-form essays on how organizations work, and shorter notes on AI, cognition, and what I'm learning along the way.
Markets coordinate brilliantly within frameworks they cannot generate. Price signals tell you how to win - they can't tell you whether it's the right game.
Strategic choice lives in a narrow temporal zone between market degeneration and survival pressure. The mismatch among three clocks is where strategy happens.
As measurement improves, execution becomes tractable. What remains contested is what the organization should be pursuing in the first place.
As we get better at specifying things, the question shifts from 'can we reduce variance?' to 'should we?'
What if the machines don't take our jobs but free us from the parts of work that were never meant for humans anyway?
What happens when we realize that governing AI agents requires the same wisdom that political philosophers developed for governing people.
On discovering that machines can be peers, and on the division of cognitive labor that makes partnership possible.
On the strange process of extracting what you know but cannot say, and why it requires watching more than asking.
On discovering that a single perspective—even a smart one—is never enough to see what you're actually thinking.