The Architecture of Commitment
A hospital decides to prioritize patient safety above throughput. That's a commitment about what matters — an objective. The specific protocols for achieving safety are a different kind of decision: hand-washing procedures, medication verification steps, staffing ratios. These are means toward the end, and they should change as evidence accumulates about what actually works.
The distinction matters because these two types of decisions require different governance. The commitment to patient safety should be stable — revised only when the institution's values shift, which ought to be rare. The protocols should adapt constantly. Organizations that treat both the same way drift aimlessly, revising their purposes whenever the numbers dip, or calcify around outdated methods because "that's how we do things." I find these failure modes more common than they should be, and the conflation that produces them largely invisible until it's too late.
The frame that helps me think about this: organizations need architecture. An architecture determines what bears load, what flexes, and what you can't change without the whole structure failing.
The Measurement Paradox
As organizations get better at measuring things, you'd expect better data to resolve more questions. It does — but not all of them. Improved measurement makes execution questions tractable: Did the marketing campaign increase sales? Did the new process reduce defects? Did the training program improve retention? With good data, these questions have answers.
Better measurement doesn't resolve whether you're measuring the right things in the first place. A hospital can measure patient throughput with extraordinary precision. That precision doesn't tell you whether throughput should be prioritized over patient experience, or whether either should matter more than staff wellbeing.
Consider how this plays out concretely. Thirty years ago, most organizations couldn't measure their carbon footprint with any reliability. The question "are we reducing emissions effectively?" and the question "should we prioritize emissions reduction at all?" were bundled together in general uncertainty — when measurement is imprecise, the imprecision obscures both problems simultaneously. Now, measurement technology has separated them. We can track emissions precisely, evaluate reduction strategies rigorously, assign credit or blame for outcomes. But which objective matters more when emissions reduction conflicts with shareholder returns, employee wages, or product affordability? The data doesn't answer that. The precision reveals the choice rather than making it.
What this adds up to: as our ability to track outcomes improves, "are we doing things right?" becomes answerable, and "are we doing the right things?" becomes more exposed as the contested domain that data can't settle. Better data doesn't resolve what to optimize for. It exposes the choice of what to optimize as the question requiring human judgment. I'd go further: I think this is the most important thing that's happened to strategy in the last thirty years, and it's almost never framed this way.
How Objectives Actually Stabilize
If objectives can't be validated through measurement, what holds them in place? Why don't organizations constantly renegotiate their purposes?
The answer involves structure. In any organization, some positions influence others more than they're influenced in return. A CEO's stated priorities shape what employees attend to more than employee preferences shape the CEO's priorities. That position creates a stable reference point that enables coordinated action — not because the CEO has superior knowledge of what the organization should do, but because their stability gives everyone else a direction to orient toward.
Think of it this way: when everyone in a room can influence everyone else equally, beliefs converge toward averages that no one firmly holds. Coordinated action becomes difficult because there's no clear focal point. When some positions are structurally insulated from continuous influence by others, their commitments remain clear enough to give everyone else direction.
What the leader provides is stability. That stability enables others to explore different approaches to achieving the leader's objectives, and that exploration eventually reveals whether those objectives were productive. If outcomes are persistently poor despite good execution, the leader learns from results — evidence accumulates over time, and the objectives get reconsidered when the evidence warrants it. This is, I think, a genuinely underappreciated argument for why centralized authority in organizations can be epistemically valuable even when the center doesn't have better information.
Ends and Means
Some objectives specify what the organization is trying to achieve — end-oriented. Others specify how — means-oriented. The distinction determines how objectives should be governed.
End-oriented objectives express values: "We exist to improve human health." "Our purpose is delivering value to shareholders." "We prioritize sustainable practices." These define what success means — the frame within which measurement becomes meaningful. They're evaluative commitments.
Means-oriented objectives express methods: "We'll reduce costs through process automation." "We'll increase revenue through geographic expansion." "We'll improve safety through standardized protocols." These can be tested against outcomes. If automation doesn't reduce costs, that's evidence against the method. The underlying commitment to efficiency stands; the approach changes.
The governance implication follows directly: means-oriented objectives should adapt when evidence suggests they're not working; end-oriented objectives should revise only when the values they express no longer represent what stakeholders care about. What happens in practice is often the reverse. Organizations revise their purposes in response to short-term performance pressures while defending outdated methods because "that's how we've always done it." When I look at organizations struggling to adapt, I find this inversion more often than any other single failure.
Three Levels of Architecture
Let me try to be more precise about what architecture means here. Organizations seem to operate at three levels, each with different revision mechanisms — though I'd present these as observations from cases rather than a clean taxonomy.
Constitutional commitments settle foundational questions: Who counts as a stakeholder? Whose interests deserve consideration? What can and can't be traded off? A pharmaceutical company that decides patients' interests take precedence over shareholders' when the two conflict has made a constitutional commitment. So has one that decides the opposite. Neither decision can be validated by performance metrics. Both are evaluative settlements that shape everything downstream.
Institutional priorities translate constitutional commitments into action: Given that patients matter, which patient interests are most salient? Safety? Access? Affordability? Privacy? Constitutional commitment to patients doesn't specify how to weigh these competing patient interests. That's institutional work — more revisable than constitutional commitments but more stable than day-to-day operations.
Operational methods determine execution: Given that patient safety is the priority, what protocols achieve it? This is where measurement is most useful and where adaptation should be most continuous. Run experiments, track outcomes, revise when the data shifts.
The architecture determines what kind of feedback revises what kind of objective. Poor quarterly results should trigger operational review: are our methods effective? Persistent poor results despite operational adjustment should trigger institutional review: are our priorities right? Systematic failure to serve stakeholders should trigger constitutional review: do our foundational commitments still reflect what we and our stakeholders value? In practice, I see organizations skip straight from operational disappointment to constitutional crisis. The institutional layer — the intervening one — is where most of the real diagnostic work happens.
Two Failure Modes
Organizations that haven't explicitly distinguished these levels tend toward drift or rigidity — and often alternate between them.
Drift: Without constitutional commitments shielded from ordinary performance feedback, organizations migrate toward whatever their measurement systems make visible. This is the drift I trace in The Right Direction: measurement systems quietly replacing the objectives they were designed to track. Shareholder value became the de facto purpose of many corporations not through explicit commitment but through the drift of attention toward what was most measurable and reportable. No single decision anyone would point to.
Rigidity: Without operational premises that adapt to evidence, organizations calcify around methods that worked historically but no longer serve current conditions. The commitment to "how we do things" gets confused with the commitment to "what we're trying to achieve." Adaptation feels like betrayal of identity when means and ends haven't been separated architecturally.
Good architecture prevents both by specifying what's stable, what's adaptive, and what mechanisms govern transition between them. Strategic choice then operates on the architecture itself: determining which objectives belong at which level, what triggers revision at each level, and how the organization maintains the capacity to revise its architecture when even its constitutional commitments no longer serve the aims that brought stakeholders together.
Is this architecture something organizations can design explicitly? Or does it emerge implicitly through political contestation and power dynamics that no designer controls? What seems true is that organizations with clear architectural separation perform better than those without — either they've designed it, or they've evolved it through selection pressures that punished confusion. Whether the architecture can be deliberately constructed, or whether attempts to design it just become another layer of calcification, remains genuinely open.
Published January 2026
If your organization is grappling with these questions, I'd welcome the conversation.