
Where You Want Variance

January 19, 2026

AI systems, expertise, design

Every objective, when you set it, immediately creates a split: the that and the how.

The that is what must happen—the outcome you’re specifying. The how is left to judgment—the method discretion you’re granting. A manager says “get this report done by Friday” (that) but doesn’t specify whether you work from home, use a template, or start at midnight (how).

This split is fractal. Every how, once you look closer, contains its own thats and hows. “Use a template” (that) but choose which sections to emphasize (how). The decomposition can continue as deep as you want to go.


Building systems that try to capture how I work, I’ve noticed the boundary between that and how isn’t fixed. It moves.

When something is tacit—when I can’t articulate why I make a certain choice—it has to stay a how. I leave it to judgment because I can’t specify it. But as tacit knowledge becomes explicit, hows can be decomposed into thats. The thing I used to leave to discretion becomes something I can nail down.

This is variance reduction. When a how becomes a that, there’s one less place where outcomes can differ based on who’s doing the work or what mood they’re in. The process becomes more predictable, more consistent, more controllable.


For a long time, I thought variance reduction was straightforwardly good. The whole point of building systems is to make outcomes more reliable, right? Take what used to depend on individual judgment and make it dependable.

But there are places where you don’t want variance reduced—and places where you must not reduce it at all.

Aircraft engines: no variance wanted. Every bolt torqued to spec, every procedure followed identically, every deviation flagged. The whole point is that the engine behaves the same way every time. Variance here is failure risk.

Art: variance is the point. If every painting followed a procedure so tight that any trained person would produce the same result, we wouldn’t call it art. We’d call it manufacturing. The whole point is that the outcome depends on who’s making it and what they bring to it.


So the question isn’t “can we reduce variance?” It’s “should we?”

And the answer depends on what you’re optimizing for. If you want reliability, reduce variance. If you want creativity, preserve it. If you want both, you need to be very deliberate about which parts of the process get specified and which stay open.

This is what makes system design hard. The ability to specify something doesn’t mean you should. As our tools get better at capturing tacit knowledge—as AI helps us articulate things we couldn’t articulate before—the can question gets easier. More things can be specified.

Which means the should question gets more important.


I’ve been building a system that helps me process information. Early on, I tried to specify everything. I wrote rules for classifications. Handlers for edge cases. The system became rigid in a way that defeated its purpose—it followed my specifications and missed what I actually wanted.

The fix was knowing where to stop specifying. Some things stay as principles rather than rules. Some hows stay as hows even when I could, in theory, decompose them further.

Not because I can’t articulate them, but because articulating them would work against the objective itself.
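The distinction can be made concrete. Below is a minimal sketch, not the author’s actual system: the function name, category labels, and principle strings are all hypothetical. Hard rules pin down thats (same input, same output, every time); principles name the hows without specifying them, leaving them to whatever exercises judgment downstream.

```python
# Hypothetical sketch of a triage step that mixes hard rules ("thats")
# with open principles ("hows"). All names here are illustrative.

HARD_RULES = {
    # Fully specified: a matching item is always routed the same way.
    "invoice": "billing",
    "newsletter": "archive",
}

PRINCIPLES = [
    # Deliberately under-specified: guidance for a person (or a model
    # prompt) that exercises judgment, not a rule the code enforces.
    "Prefer items that change a decision I'd make this week.",
    "When in doubt, keep it out of the inbox.",
]

def triage(item: dict) -> str:
    """Route items covered by a hard rule; defer the rest to judgment."""
    destination = HARD_RULES.get(item.get("kind"))
    if destination is not None:
        return destination  # variance reduced: a how decomposed into a that
    return "judgment"       # variance preserved: a how left open

print(triage({"kind": "invoice"}))  # billing
print(triage({"kind": "essay"}))    # judgment
```

The design choice is in what stays out of `HARD_RULES`: every category left unmapped is discretion the sketch chooses to maintain rather than a gap it failed to close.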


The variance I preserve is discretion I’m choosing to maintain. The system works because of the remaining hows. They’re the joints that let it flex instead of snap.

If you’re thinking about similar questions—or building systems that grapple with them—I’d welcome the conversation.
