The Warm Body Problem
You were hired to get blamed for things. Not to stop them.
The “human in the loop” AI job isn’t an oversight role. It’s a liability seat. The people sitting in it mostly don’t know that’s what they were hired to be. Here’s the architecture.
The prediction that AI will create a new class of liability-absorbing workers has been circulating long enough that it’s become a bit of a knowing joke in certain circles. The joke is that we’ll manufacture a layer of humans whose nominal function is oversight and whose actual function is to provide institutional cover when something goes wrong.
The joke is behind the timeline. Those jobs exist. People are sitting in them right now.
The tragedy isn’t the structure. The tragedy is that the occupants don’t know what they signed up for.
There’s a known professional role called the spear catcher. It’s not a formal title. It refers to a senior executive, usually brought in laterally, who absorbs institutional blame for a decision that was made above them, before them, or around them. The role is real, it has known economics, and the professionals who do it repeatedly understand the implicit contract: you take the hit, you get paid the premium, and the industry reads your eventual firing as ritual rather than judgment. You negotiate the exit before you accept the offer. You have lawyers. You have relationships that survive the firing because everyone above a certain altitude understands what just happened.
The AI oversight role is structurally identical to the spear catcher role. Same function: liability absorption. Different packaging: quality assurance, compliance, human-in-the-loop oversight, responsible AI review. Different workforce: contractors, offshore QA, recent graduates, career-changers from fields being automated who are grateful for adjacent employment.
The liability premium has been removed from the comp while the liability remains.
That’s the core injury. Not that the role exists. That it’s being sold at the wrong price to people who don’t know what they’re selling.
The EU AI Act mandates human oversight for high-risk AI systems. It does not define what makes oversight effective. What it requires is a named human, a review log, and a timestamp. The regulator gets a compliance framework. The company gets a documented narrative. The public gets a warm body.
Nobody in that arrangement benefits from the human actually stopping anything.
The interface design encodes this. In every real AI review system described or documented, approval is one click. Rejection requires documentation, escalation, and justification. The friction is not accidental. When you design a review interface where one path is frictionless and the other path generates a paper trail that requires supervisor sign-off, you have expressed a preference. You have built a machine for generating approvals with a documented human in the chain.
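To make that asymmetry concrete, here is a minimal sketch of what the preference looks like once it reaches the data model. It is not drawn from any real product; every name and field below is hypothetical. The shape is the one described above: approval is a single record, rejection is a dossier.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the friction asymmetry. No real system's API or
# schema is reproduced here; all names are illustrative.

@dataclass
class Approval:
    reviewer_id: str   # the named human the compliance framework asks for
    item_id: str
    # Nothing else. One click; the timestamp comes for free from the database.

@dataclass
class Rejection:
    reviewer_id: str
    item_id: str
    reason_code: str                          # chosen from an approved taxonomy
    written_justification: str                # free text, minimum length enforced
    escalation_ticket_id: str                 # opened with a second team
    supervisor_signoff: Optional[str] = None  # case stays open until this arrives
    follow_up_actions: list[str] = field(default_factory=list)

def review(item_id: str, reviewer_id: str, looks_fine: bool):
    """One path is frictionless; the other generates a paper trail
    with the reviewer's name attached to every field."""
    if looks_fine:
        return Approval(reviewer_id=reviewer_id, item_id=item_id)
    return Rejection(
        reviewer_id=reviewer_id,
        item_id=item_id,
        reason_code="",            # must be filled in before submission
        written_justification="",  # must be filled in before submission
        escalation_ticket_id="",   # must be opened before submission
    )
```

Nothing on the approval path needs a taxonomy, a ticket, or a supervisor. That absence is the design choice.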
The volume makes the rest moot anyway. Radiologists reviewing AI-flagged scans are being asked to provide genuine second reads on case loads that genuine second reads could never keep pace with. The signature is on the output. The review is not. Automated lending systems route decisions through human reviewers at volumes that make per-case analysis a physical impossibility. Fair lending law gets its named human. The borrower gets a checkbox.
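A purely hypothetical back-of-envelope shows the shape of the impossibility; the numbers below are invented for illustration, not taken from any real queue.

```python
# Invented numbers, chosen only to show the structural arithmetic.
decisions_per_shift = 2_000               # items routed to one reviewer per day
shift_seconds = 8 * 60 * 60               # one eight-hour shift
seconds_per_decision = shift_seconds / decisions_per_shift
print(f"{seconds_per_decision:.1f} seconds per decision")  # 14.4
# A genuine second read of a scan or a loan file does not happen in 14 seconds.
```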
This is not a workflow problem that better tooling will solve. The volume is structural. The system processes faster than humans can evaluate. That is the product. The human is not meant to slow it down. The human is meant to be there when something goes wrong.
What the professional spear catcher has that the AI oversight workforce doesn’t:
Leverage. The professional spear catcher is senior enough that their formal documentation of an impossible situation creates institutional risk. When a senior executive puts in writing that they cannot perform their nominal function due to volume or system design, that document is a liability. The institution has reason to fix the problem or negotiate the exit rather than simply let the documentation accumulate.
The junior contractor who formally documents that their review queue makes genuine oversight impossible is not creating institutional risk. They’re creating a performance improvement plan.
The professional spear catcher has industry relationships that survive a ritual firing. The read from peers is: she took the hit, she’s fine, we’ll work with her again. The AI content reviewer, the AI output auditor, the responsible AI compliance specialist — my read is that none of these roles has an established industry norm for what a high-profile failure means on a resume. There’s no ritual firing yet because there hasn’t been a sufficiently public casualty to establish the script. That’s coming.
The professional spear catcher has lawyers. The AI oversight workforce has, in many cases, no employment protections at all. Content moderation and AI review work is heavily contracted to workers in the Philippines, Kenya, and India who have essentially zero access to the legal remedies that make “document everything and keep copies outside company systems” actionable advice. For those workers, defection is impossible. The liability architecture works even more cleanly on them, which is presumably why the work is there.
The counterargument is that some human oversight is better than none. That mandating a named human, even imperfectly, creates a pressure point that can be litigated, regulated, and improved over time. That the alternative — pure institutional liability with no named individual — is worse for accountability.
This is a real argument. The response is:
The principle that someone is responsible only functions if the someone has actual capacity to exercise responsibility. If the volume makes genuine review impossible, if the interface design makes approval frictionless and rejection costly, if the employment relationship makes formal documentation of impossible conditions career-ending, then you haven’t established accountability. You’ve established a scapegoat pipeline.
The pipeline will produce a series of individual casualties. The underlying system will continue unchanged. That is also regulatory capture, just operating on individuals rather than agencies.
The institution is not stupid. The workforce selection is not an accident.
A senior person with leverage would ask the questions before signing. A senior person with leverage would negotiate the exit terms and get them in writing. A senior person with leverage would formally document when review volume makes genuine oversight impossible, because that documentation protects them and creates institutional risk for the employer. A junior contractor does none of these things. They don’t know to do them. They can’t afford the friction if they did.
The institution selected this workforce because of those absences. It is getting liability absorption at QA prices from people who don’t know to charge spear catcher prices. The class gradient runs in one direction: the people with the least ability to protect themselves are being placed in the highest-exposure positions, at the lowest prices, inside systems designed to make their nominal function impossible.
If you are currently employed in a role where your primary function is reviewing, approving, or signing off on AI outputs — at any volume that makes genuine per-item review implausible — you should understand what the role is before the first high-profile failure in your sector establishes it for you.
The time to negotiate the spear catcher’s contract is before you accept the spear catcher’s job.

