Because It’s Wrong
The Personality That Institutions Cannot Process
There is a personality type that every institution eventually learns to fear. Not the whistleblower. Not the activist. Not the dissident. Those are legible threats. They want something. They have an agenda. They can be categorized, managed, countered, discredited.
The personality the institution cannot process is the person who corrects errors because they are errors.
Not because the correction serves their interests. Not because they are aligned against the people who made the error. Not because they are building a case or advancing a cause or positioning themselves for advantage. Because the error exists. Because it is wrong. Because someone published a number that is not the right number, and the wrong number is sitting there, propagating, being cited, being absorbed, being built upon, and nobody is fixing it.
This person will spend three hours writing a detailed correction of a statistical claim in a policy document that has nothing to do with their field, their career, their politics, or their life. They will do this for free. They will do it knowing that the correction will make them no friends and several enemies. They will do it on a Saturday. They will do it again the following Tuesday when they find another error in a different document.
If you ask them why, the answer is: because it’s wrong.
That answer is incomprehensible to most institutional actors. And the incomprehension is the beginning of the immune response.
In any social environment where statements are understood as moves (corporate, political, academic, activist, media), every public assertion is assumed to be strategic. You say things to advance your position, to signal your affiliation, to attack your opponents, to build your brand. Communication is a game. Every utterance is a play. The question is never just “what did they say.” The question is “what are they doing by saying it.”
In this framework, error correction is always an attack. If you correct a number in a report, you are attacking the person who published the report. If you correct a claim in a policy document, you are attacking the policy. If you correct a statistic in an advocacy campaign, you are attacking the campaign. The correction cannot be received as information. It can only be received as aggression, because the framework has no category for “disinterested provision of accurate information.” Every statement has a motive. Therefore the correction has a motive. The only question is what the motive is.
And so the response comes: “Why are you bringing this up? Why do you care about this? What’s your angle? Who are you working for? Why is this so important to you?”
These are not questions. They are classification attempts. The institution needs to put the corrector in a box so it knows which antibodies to deploy. If the corrector is from the opposing political faction, deploy the partisan antibodies. If the corrector is from a rival organization, deploy the competitive antibodies. If the corrector is a disgruntled former employee, deploy the credibility antibodies. Each box has a pre-loaded response kit. The institution is efficient at managing categorized threats.
The error-corrector doesn’t fit in a box. They have no faction. They have no agenda. They have no stake in the outcome. They saw a wrong number and they can’t leave it alone. The institution’s classification system returns null. And a null result from the classification system is more threatening than any categorized threat, because the institution doesn’t know what to do with it.
So the institution escalates.
The escalation follows a predictable sequence.
Stage one: motive interrogation. “Why are you bringing this up? What’s your real concern here? Who benefits from this correction?” The corrector gives the honest answer: “Nobody benefits. The number is wrong. Here is the right number.” The honest answer is interpreted as evasion, because the framework cannot process the idea that a person would act without a strategic motive.
Stage two: tone policing. “Your correction may have merit, but the way you raised it was not constructive. You should have gone through channels. You should have raised it privately. You should have framed it more diplomatically.” The corrector, who often lacks the social circuitry that makes diplomatic framing intuitive, takes this feedback seriously and tries to comply. The compliance doesn’t help, because the objection was never about tone. It was about the correction’s existence.
Stage three: pathologization. “This person is obsessive. They’re fixated on minor details. They can’t see the big picture. They have a pattern of negativity. They’re not a team player. They’re not collaborative. They have difficulty with social norms.” Notice what has happened: the institutional immune system has converted a factual claim about the world (“this number is wrong”) into a clinical claim about the person (“something is wrong with this person”). The error has been transferred from the document to the corrector. The document is fine. The person is the problem.
Stage four: exclusion. The corrector is gradually removed from the information flow. Not fired (that creates a record). Not formally disciplined (that creates a grievance). Marginalized. Not invited to the next meeting. Not cc’d on the next email. Not included in the next project. Slowly, invisibly, moved to the edge of the community until they either stop correcting or leave.
The error remains in the document. Nobody fixes it. The institution has successfully defended itself against accurate information.
This sequence is so common across so many institutional contexts that it should be recognized as a standard institutional immune response, as routine and as mechanical as an allergic reaction. The institution is not making a conscious decision to suppress truth. It is doing what immune systems do: identifying a foreign object and attacking it. The foreign object is not the error. The foreign object is the person who won’t stop pointing at the error.
The reason the error-corrector triggers the immune response is not that the correction is wrong. It is that the correction is unmanageable. A wrong correction can be rebutted and everyone moves on. A right correction that comes from an adversary can be contextualized: “of course they’d say that, they’re trying to undermine us.” A right correction that comes from a person with no adversarial motive cannot be rebutted or contextualized. It can only be processed on the merits. And processing on the merits might require admitting the error, which might require changing the document, which might require changing the policy, which might require changing the budget, which might require someone admitting they were wrong, which is the one thing institutional immune systems exist to prevent.
The immune response is proportional to the institution’s inability to categorize the threat, not proportional to the actual threat. The error-corrector is the most benign actor in the system (they literally just want the number to be right) and they receive the most aggressive institutional response (because the system cannot process a motiveless correction).
This is why error-correctors reliably report the same baffling experience across every institutional context: “I pointed out a factual error. I provided the correct information. I had no agenda. And somehow I became the problem.” They are not wrong about this experience. The institutional immune system really is treating them as the threat. The system is working as designed. It is designed to maintain stability, not accuracy. An uncorrected error that threatens no one’s position is stable. A corrected error that might require someone to act is unstable. The immune system defends stability. The error-corrector threatens stability. The error doesn’t.
The error-corrector personality has a specific cognitive profile that is worth describing precisely, because the precision explains why the motive trap fails on them and why institutions find them so confusing.
For most people, an error in a document they encounter is a piece of information that is processed, evaluated for relevance to their interests, and filed accordingly. If the error is not relevant to their interests, it is ignored. If the error is relevant, it is addressed through whatever action serves their interests. The error is an input to a strategic calculation. The response is proportional to the stakes.
For the error-corrector, the error itself generates a signal that does not attenuate with irrelevance. The error is wrong. It is going to keep being wrong. People are going to read it and absorb the wrong thing. The wrongness is going to propagate. This is not a strategic calculation. It is closer to a perceptual experience: the error is visible and it stays visible and it does not move to the background. It sits in the foreground of attention the way a crooked picture frame sits in the foreground of attention for a person with a particular kind of aesthetic sensitivity. You can choose to ignore it. The choosing costs effort. The effort is continuous. The error just sits there, being wrong, demanding correction.
This is why asking the error-corrector “why is this so important to you” is so disorienting to them. The honest answer is: “It’s not important to me. It’s not about me. The number is wrong. That’s the whole thing.” But that answer sounds evasive to the questioner, because the questioner’s framework assumes that effort implies importance and importance implies motive. The error-corrector is expending significant effort on something they claim isn’t important to them personally. That pattern doesn’t match. So the questioner assumes the stated explanation is a cover for the real explanation, and the motive interrogation intensifies.
The error-corrector is not being evasive. The error-corrector is describing an experience that the questioner’s framework genuinely cannot model: the experience of finding inaction in the presence of a known error more costly (in terms of cognitive discomfort) than the social cost of correcting it. The correction is not motivated by what the corrector gets from correcting. It is motivated by what the corrector experiences by not correcting. The motive is relief from the presence of uncorrected error, not pursuit of any external reward.
This is an alien motive in any social-positioning framework. It doesn’t map to ambition, to rivalry, to ideology, to loyalty, or to any of the standard categories that institutions use to interpret behavior. It is genuinely disinterested in the technical sense: the corrector has no interest in the outcome beyond accuracy. And genuine disinterest is the one thing the motive-trap framework cannot process, because the framework’s foundational assumption is that disinterest doesn’t exist.
The error-corrector personality is overrepresented in certain populations and certain professions, and the overrepresentation is not random. It is structural.
Engineering selects for error-correction because engineering errors kill people. A structural engineer who can “note and move on” from a load calculation error is a structural engineer who will eventually kill someone. The profession selects for the personality that cannot leave the error uncorrected, because the profession’s function depends on it. The same is true of aviation, medicine, accounting, and any other field where errors compound into catastrophe. These fields develop cultures of error-correction not because the people in them are obsessive but because the fields cannot function without obsessive error-correction.
Autism correlates with the error-correction personality because the cognitive profile (strong pattern detection, high sensitivity to inconsistency, reduced sensitivity to social cost) is part of the autistic phenotype. Not all autistic people are error-correctors. Not all error-correctors are autistic. But the correlation is strong enough that the institutional immune response to error-correctors has a significant overlap with institutional mistreatment of autistic employees, and this overlap is almost never discussed because the motive-trap framework interprets the autistic employee’s corrections as social deficiency rather than as functional contribution.
The early internet and open source software culture was disproportionately built by error-correctors, because the early internet was an environment where error-correction was the primary social currency. You posted something wrong on Usenet and someone corrected you and the correction was the contribution. The culture rewarded accuracy. The culture did not reward diplomatic framing, strategic positioning, or sensitivity to the social consequences of being right. That culture has been progressively replaced by social-positioning culture as the internet has grown, and the error-correctors who built the original infrastructure have been progressively marginalized by the institutional immune responses of the organizations that grew on top of it.
The interesting question is not why institutions attack error-correctors. That’s straightforward: institutions optimize for stability, error-correction threatens stability, immune systems attack threats. The interesting question is what happens to a society that systematically drives out the people who correct errors.
The answer is: the errors accumulate.
Not dramatically. Not catastrophically (at first). Quietly. One uncorrected number in a policy document. One unchallenged assumption in a strategic plan. One unquestioned metric in a performance review. Each individual error is small. Each individual immune response is locally rational (why fight about this number when the political cost of the fight exceeds the cost of the error?). The accumulation is invisible because each error is invisible and the people who would have made it visible have been excluded.
Over time, the institution’s model of reality drifts from reality. The drift is undetectable from inside because the people who would have detected it have been driven out or have learned to stay quiet. The metrics look fine because the metrics are the ones the institution chose to measure, and the institution chose to measure things it was succeeding at. The reports are positive because the people writing the reports know what happens to people who write negative reports. The strategy is on track because the strategy is evaluated against the goals the strategy was designed to achieve, and the goals were designed to be achievable.
And then the institution encounters a reality it didn’t model. A market shift it didn’t see. A technology change it didn’t anticipate. A competitor it didn’t take seriously. A crisis it wasn’t prepared for. The institution is surprised. Everyone is surprised. How did this happen? We had all the data. We had all the reports. We had all the metrics.
You had all the metrics except the ones the error-correctors would have given you. You had all the reports except the ones the error-correctors would have written. You had all the data except the data the error-correctors were trying to show you when you asked them why they were so focused on it.
The exclusion of error-correctors is an institutional choice to trade long-term accuracy for short-term stability. It is a rational choice in any individual instance (the political cost of the correction exceeds the cost of the error in this specific case). It is a catastrophic choice in aggregate (the accumulated errors eventually exceed the institution’s capacity to absorb surprise).
Every institutional failure that is described after the fact as “nobody could have predicted this” can be re-examined with a simple question: was there an error-corrector who tried to point this out, and what happened to them?
The answer, in an uncomfortable number of cases, is: yes, and they were asked why it was so important to them, and they were told they were not being constructive, and they were moved to the edge of the organization, and they eventually left, and the error they were pointing at is the error that just detonated.
The error-corrector does not need vindication. Vindication is a social reward and the error-corrector is not operating in the social reward framework. The error-corrector needs the error to be fixed. If the error is fixed, the corrector is satisfied regardless of whether they receive credit, acknowledgment, or apology. If the error is not fixed, the corrector is dissatisfied regardless of how much social validation they receive.
This is why the error-corrector is the natural enemy of every institutional immune system and the natural ally of every institution that actually wants to function. The same personality that is most threatening to institutional stability is most essential to institutional accuracy. The institution that figures out how to tolerate the error-corrector (not celebrate, not promote, not center, just tolerate: let them point at the error without triggering the immune response) is the institution that maintains contact with reality. The institution that drives them out is the institution that optimizes for comfort until comfort kills it.
The advice for error-correctors is not “learn to be more diplomatic” (though some diplomacy helps at the margin). The advice is: understand what you are, understand what you trigger, understand that the immune response is not about you, and keep correcting errors anyway. The errors matter more than the social cost. You know this. That’s why you’re still doing it after all the times it’s gone badly. The fact that it keeps going badly is not evidence that you’re doing it wrong. It’s evidence that the institutions are defending against accuracy, and accuracy is what you provide, and the defense is what the institution does, and neither of you is going to change.
The advice for institutions is simpler and harder: when someone points at an error and you feel the urge to ask why they care, stop. Look at the error instead. Check whether it’s wrong. If it’s wrong, fix it. Thank the person who pointed at it. Go back to work.
That sequence (look, check, fix, thank) is so simple that a child can do it. It is so threatening to institutional stability that most organizations cannot do it even once. The gap between the simplicity and the difficulty is the gap between what institutions say they value (accuracy, truth, excellence) and what they actually value (stability, comfort, the absence of conflict).
The error-corrector lives in that gap. It’s not a comfortable place. But someone has to live there, because the errors don’t fix themselves, and the institutions that let the errors accumulate eventually discover that reality doesn’t care about stability.
Reality cares about accuracy.
The error-corrector has been trying to tell you this.
The question was never “why is this so important to you.”
The question was always “is it wrong.”