
Not Enough Problems

February 18, 2026

AI · economics · strategy · coordination

Solutions are getting cheaper. Problems are getting scarce. The world has plenty of conditions worth addressing, and the work of turning those conditions into actionable questions proceeds more slowly than the work of answering them.

In 2016, Washington State put Initiative 732 on the ballot — a revenue-neutral carbon tax designed to be as economically rational as climate policy gets. Cut the sales tax, offset it with a price on carbon. The science was undisputed. The economics were broadly agreed upon. The initiative lost 59-41.

The coalition that killed it came from within the climate movement. Environmental justice organizations and clean energy alliances organized against the measure over what would happen to the revenue. They wanted carbon revenues invested in frontline communities rather than returned as broad-based tax cuts. The two sides agreed on the mechanism — put a price on carbon — and couldn’t resolve what fairness required. No data settles that. No model adjudicates between distributive justice and aggregate welfare. The framing work necessary to organize a coalition around shared values hadn’t happened, and a policy everyone agreed was technically sound went down.

A problem is an interpretation. It’s a claim that some gap between what is and what ought to be deserves organized effort. Loneliness is a problem only if you’ve committed to the value of human connection. Soil degradation is a problem only if you hold commitments about intergenerational responsibility. Each requires an evaluative commitment before it becomes something effort can organize around — and each opens a cascade of downstream questions that AI could help address, once someone has done the prior work of framing.

Hayek’s great insight was that markets coordinate factual questions without requiring participants to understand the whole picture. A shipper doesn’t need to know why tin prices rose; the price signal tells her to economize. AI extends this capacity enormously. Code that took teams weeks, analysis that demanded expensive expertise — domain after domain, the constraints that justified elaborate organizational structures are dissolving.

His formulation contains an unexamined premise. The price system enables actors to take “the right action.” But what determines right? Prices coordinate effort toward objectives someone has already identified as worth pursuing. AI inherits this limitation. It can execute on any well-framed problem. It can’t tell you which problems are worth framing.

Simon named the first constraint: bounded rationality. Given a clear objective, you can’t compute all paths to it. We built centuries of infrastructure around this — universities, laboratories, scientific methods. The constraint on the question side has no name and almost no infrastructure. Call it bounded articulation: given a world full of conditions, you can’t frame all of them into actionable problems. Ruskin put it a century and a half ago: thousands can think for one who can see.

Bounded rationality constrains a convergent process — better and worse paths to a given end. Bounded articulation constrains a divergent one — no correct framing to converge toward, only values to commit to. AI relaxes the first. It can’t touch the second.

And as bounded rationality loosens, evaluative questions proliferate. We can sequence a genome for under a thousand dollars; that exposes questions about genetic privacy and enhancement that no sequencing machine adjudicates. We can build systems that diagnose medical images and draft legal arguments; those capabilities expose what professionals are for once their knowledge advantage dissolves. Better answers don’t settle which questions matter. They reveal how many unsettled values were hiding behind the difficulty of answering.

The displacement scenarios investors model assume the question side never expands — a fixed set of problems, solutions getting cheaper, contraction. The logic holds if the problem set is static. Washington is the template: the I-732 coalition left the evaluative question unresolved, which meant the next technically sound proposal faced the same obstacle. Multiply that across every domain where good answers exist without settled questions to point them at. When questions of direction go unaddressed, what becomes measurable becomes salient, what becomes salient becomes legitimate, what becomes legitimate becomes binding. The visible displaces the valuable.

Every firm’s value comes from two things: the problem it chose and how well it executes. AI is compressing the second term — everyone has the same foundation models, the same tools. What remains is the problem you chose. Two firms with identical capabilities pointed at different problems produce wildly different outcomes, and the divergence was determined before any computation began.

The same logic scales. “We should care about aging” is a sentiment. “Dignity in aging means someone over seventy can walk to a grocery store, and we’re willing to pay for the infrastructure” is a problem — one that generates answerable questions about measurement, intervention design, workforce development, scaling. Every problem framed is a new source of work. The narrow path out of the displacement scenarios runs upstream.

I’m not confident we have the institutional infrastructure for this work at the scale our execution capacity now operates. The machines can think. What they can’t do is see — because seeing requires that something matter to the one who looks. A condition becomes a problem only when someone decides it is unacceptable, and that decision has no computational form. The economy is producing optimizers at unprecedented scale. What it lacks is seers.

If you're thinking about similar questions—or building systems that grapple with them—I'd welcome the conversation.
