Stolen Valor
How physics lent its credibility to people who didn’t earn it, and what the 20th century looked like as a result
McNamara counted bodies and lost a war. Your hospital counts throughput and kills patients. Your university counts satisfaction scores and destroys education. These are not separate management failures. They're one failure, running for a century, in every institution that adopted the method: an epistemological error that made all of it not just possible but inevitable.
The prestige was real. That’s what made the theft work.
Germ theory. Thermodynamics. Electromagnetism. Early quantum mechanics. These were not incremental improvements on received wisdom. They were complete revisions of how the world worked, confirmed by experiment, with predictive accuracy that left the prior frameworks looking embarrassing. By 1900, physics had earned an authority claim that no other mode of human inquiry had ever achieved. You could disagree with a physicist, but you were going to need data.
The error was not claiming that authority. The error was treating it as transferable.
The Three Conditions
Physics works because three things are true about its domain. You can isolate variables. You can run controlled experiments. The system you’re studying doesn’t change its behavior because you’re studying it.
Atoms don’t update on being theorized about. People do.
Every attempt to apply physics-style methods to human social systems fails at all three conditions simultaneously. You can’t isolate the relevant variables in a human system because the relevant variables are everything, and everything interacts. You can’t run controlled experiments at meaningful scale without doing something that’s either unethical or politically impossible. And you cannot observe a human system without changing it, because people read the theories about them, revise their self-understanding accordingly, and behave differently. The observer effect in physics is a quantum phenomenon at very small scales. In human systems it’s total and operates at every level. A village that gets studied by anthropologists is not the same village that existed before the anthropologists arrived.
These weren’t obscure technical limits. Dilthey said it. Weber said it. The neo-Kantians spent decades articulating why the methods of natural science couldn’t be simply transplanted to human inquiry and what a genuinely human science would have to look like instead. They were right. They lost anyway. Not because they were refuted. Because they were less confident and had worse politics than the scientizers, and confident wrong conclusions beat humble correct ones in the short run almost every time.
The 20th century was what the long run looked like.
Scientism and Its Worse Twin
Scientism is claiming scientific authority for non-scientific claims. It’s annoying but the word exists and people know what it means.
Scientody is the thing that compounded it. Cargo-cult methodology. Going through the forms of science without the substance. Collecting data without asking whether it’s the right data. Running statistics without asking whether the model is correctly specified. Peer reviewing within a professional community whose shared ideological commitments are exactly what the review process can’t catch. Scientism provided the authority claim. Scientody provided the appearance of rigor. Together they produced confident wrong conclusions with institutional backing, which is a combination that can run for a very long time before anything interrupts it.
Freud called his project a natural science. Explicitly. He was not embarrassed about this. He had a geological metaphor for the psyche (layers), a hydraulic metaphor for libido (pressure, discharge, damming), and a therapeutic method that could confirm its own theory regardless of outcome. If you agreed with the interpretation, that was confirmation. If you resisted it, that was resistance, which was also confirmation. The theory specified no condition under which it could be false. This is not a minor methodological problem. It is the methodological problem, the one thing a scientific theory must not do, and Freud built it in structurally and called the result a science. The Vienna Medical Society thought this was fine. The broad educated public thought this was fine. Being the person in the room who noticed the problem didn’t make you a rigorous methodologist. It made you a hysteric in denial.
The Self-Sealing Machine
Real science specifies in advance what would count as disconfirmation. This is not optional. It’s the mechanism. Without it you can’t distinguish a confirmed theory from an unfalsifiable one, and unfalsifiable theories are exactly as useful as astrology, which is to say they’re useful for producing confident-sounding conclusions with no accountability for being wrong.
The scientized human sciences never specified disconfirmation conditions because doing so would expose the theories to standards they couldn’t meet. The result was that every wrong conclusion became confirmation of some kind. Bad outcome? Implementation failure. Persistent bad outcomes? The problem is harder than we thought. Give us more resources and more authority to study it more carefully and we’ll get it right next time.
The experts never lose authority by being wrong. Being wrong is always a reason to give them more.
This is the mechanism. Not arrogance. Not bad faith (usually). A structural feature of applying an authority claim to a domain where the self-correction mechanism can’t operate. Science is supposed to self-correct through the very process that makes it science: experiments that can fail, predictions that can be wrong, theories that lose authority when the data doesn’t cooperate. Strip those out, keep the authority claim, and you have an institution that is immune to its own failures by design. You can run that for a long time. The 20th century ran it for a very long time.
McNamara Counted Bodies
Managerialism is the version of this that didn’t produce a visible catastrophe and therefore never updated.
Frederick Taylor said factory work could be scientifically optimized. He was right, within narrow conditions, for repetitive physical tasks where the variables are actually isolable and the workers can’t revise their behavior in response to being measured in ways that defeat the measurement. True in his domain. The transfer happened immediately: if factory floor work can be optimized scientifically, so can the whole organization. If the whole organization, then any organization. If any organization, then every domain of human activity. The MBA credentialed the transfer. McKinsey and BCG and Bain made it portable. Peter Drucker gave it a self-conscious identity and a vocabulary. The consulting industry sold it to domains where the epistemological conditions failed even more completely than in factories, and charged more for each successive failure, because the failure was always attributed to the wrong metric rather than the method, which meant the solution was always better metrics, which meant more consulting.
This is not a coincidence. This is the business model.
The metric substitution is managerialism’s core operation and it’s pure scientody. Find something measurable. Declare it a proxy for something that matters. Optimize the proxy. Goodhart’s Law, formally stated in 1975 but observable long before: when a measure becomes a target, it ceases to be a good measure. The thing that mattered gets destroyed by the optimization. The failure gets attributed to the wrong metric. New metric. Same operation. The method never fails. The metrics just need refinement.
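The metric-substitution loop can be sketched as a toy model. The numbers and the `outcomes` function below are hypothetical, chosen only to illustrate the shape of the failure: a proxy that tracks the real thing while nobody is optimizing it, and stops tracking it the moment they are.

```python
# Toy sketch of Goodhart's Law (hypothetical numbers, not a real model):
# a proxy correlates with quality right up until it becomes the target.

def outcomes(gaming_effort: float) -> tuple[float, float]:
    """Split one unit of effort between substance and gaming the metric.

    Returns (proxy_score, true_quality). Gaming inflates the proxy
    but contributes nothing to the underlying quality.
    """
    substance = 1.0 - gaming_effort
    proxy = substance + 3.0 * gaming_effort  # the metric is cheap to inflate
    quality = substance                      # quality requires substance
    return proxy, quality

# Before the proxy is a target: all effort goes to substance.
proxy_before, quality_before = outcomes(gaming_effort=0.0)

# After the proxy becomes the target: effort chases the metric instead.
proxy_after, quality_after = outcomes(gaming_effort=0.9)

print(f"before: proxy={proxy_before:.1f}, quality={quality_before:.1f}")
print(f"after:  proxy={proxy_after:.1f}, quality={quality_after:.1f}")
# The proxy goes up. The thing it was supposed to measure collapses.
```

The dashboard shows improvement either way, which is why the failure gets blamed on the particular metric rather than on the operation itself.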
Robert McNamara counted bodies and lost a war. This is not a complicated case. He substituted a measurable proxy (enemy killed) for an actual objective (pacification, political control, end of conflict). The system he was measuring changed its behavior in response to being measured: the Vietnamese counted differently than the model assumed, the American military optimized for the number, the number went up, the war was lost. The observer effect in human systems, stated as a historical fact with a body count attached.
This should have ended managerialism. It didn’t. It produced more sophisticated metrics and more confident management consultants.
Your hospital runs throughput metrics. Patients wait in hallways because the metric is beds occupied, not patients treated effectively, and “effectively” isn’t measurable in a way that fits a quarterly dashboard. Your university runs student satisfaction scores. The faculty who give easy grades and validate every opinion get high scores. The faculty who assign difficult readings and tell students their arguments are weak get complaints. The institution responds to the incentive. Your police department runs arrest metrics. The officers optimize for arrests. You get exactly the enforcement pattern the metric produces, which is not the enforcement pattern that reduces crime, because crime reduction isn’t measurable in a way that maps onto arrest counts. Every one of these is the same failure. Every one of them will be attributed to the wrong metric, and the solution will be a new metric, and we’ll be back here again.
The fish doesn’t know it’s in water. The fish has an MBA.
What Dilthey Was Trying to Tell You
The people who correctly identified the problem in 1900 didn’t win. They produced hermeneutics and interpretive sociology, which are honest about their limits and also genuinely less useful than the scientized versions at producing actionable conclusions in the short run. “I can give you a rich interpretive account of this social phenomenon but I can’t tell you what to do about it” loses the budget battle to “I have a rigorous methodology that tells you exactly what to do” every time, even when the rigorous methodology is cargo cult science and the rich interpretive account is actually correct.
This is the confidence asymmetry. It’s not that scientism won because it was persuasive. It won because it was useful, specifically useful for people who needed confident conclusions to justify authority claims, allocate resources, and make governance decisions. Uncertainty is not a viable political product. “We don’t know and the situation is inherently resistant to the kind of knowledge you’re asking for” doesn’t get you the regulatory agency. Confident wrong conclusions do.
The stolen valor didn’t just give the scientizers unearned authority. It gave them the ability to keep the authority while being wrong indefinitely. Science is supposed to be the method that loses authority when it fails. Scientism is the method that gains authority when it fails, because failure always means more resources for more science. The self-correction mechanism was the thing that got stolen. The prestige was just how the theft looked from the outside.
Dilthey died in 1911. He never saw the full run of what he was trying to prevent. The people who came after him watched it happen and wrote careful methodological critiques that nobody in power read. The 20th century was the short run. We’re still in it.


A similar process happens with policy applications of socialist ideologies. The policies' inevitable failures are reframed as the result of insufficient resources, or of a lack of understanding and cooperation from the wider society. The failures then become the justification for a bigger budget, an expanded program, and re-education of the public. It's all unfalsifiable because the ideology underneath must not be questioned; it is assumed as a given.