
What happens when the people building the AIs are replaced by robots?

The economic, social, and ethical fallout could be frightening.


There’s no doubt we are witnessing a quiet shift in labor: artificial intelligence is no longer confined to experimental labs or consumer chatbots. It is now eroding the foundation of human labor in ways that are less visible, but potentially more consequential, than the headlines about “AI assistants” or “superintelligence.”

Last week, Google abruptly terminated 200 AI contractors, many of them involved in annotation and evaluation work. Officially, the company described the cuts as part of a ramp-down; the workers themselves pointed to low pay and job insecurity. What matters is that the roles being cut are precisely those that ensure human oversight of AI systems: the raters, annotators, and evaluators who form the invisible scaffolding of “smart” or “intelligent” products.

In parallel, at an Axios event, Anthropic CEO Dario Amodei warned that AI is on track to displace many white-collar jobs within five years. Not decades. Not in some speculative future. Within the next cycle of corporate planning, the world of professional work, from law and finance to consulting and even management, may look very different.

From invisible work to invisible loss

For years, the human labor that powers AI has been hidden behind the curtain: underpaid annotators in developing countries, moderators exposed to traumatic content, contractors who quietly clean and structure data so models can be trained. These jobs were rarely acknowledged, let alone respected. Now they are being erased altogether, as companies shift from human-in-the-loop to automation-in-the-loop.

The question is not only about employment. It is about what disappears when we remove human judgment from the system. Annotators catch ambiguities, flag dangerous edge cases, and apply moral reasoning that models cannot replicate. Raters provide cultural and linguistic nuance. When those roles are automated away, the systems may still function — but blind spots deepen, errors multiply, and biases are amplified. Efficiency rises, but resilience declines.

White-collar work on the clock

Amodei’s warning points to a broader reality: AI is moving up the value chain. It is no longer confined to support tasks; it is encroaching on analysis, writing, design, and even decision-making. The professional classes that once considered themselves insulated from automation are now squarely in the crosshairs. If blue-collar workers were the first wave of technological displacement in the 20th century, white-collar workers may be the second in the 21st.

The rhetoric from tech leaders often frames this as an opportunity: liberation from drudgery, new roles created, productivity unleashed. But the record of previous technological shifts is sobering. Yes, new roles emerge, but not necessarily for the same people, in the same places, or at the same wages. The painful transition costs are borne not by shareholders but by workers.

Regulation in fragments

Governments are beginning to notice. Italy has just introduced an AI legislative package that targets harmful deepfakes, sets workplace standards, and strengthens child protections. It is among the first attempts to go beyond reactive guardrails and impose preemptive controls on how AI can be used. Whether this becomes a model for others remains uncertain.

Spain, by contrast, is pursuing a mixed model: on one hand, it has enacted laws requiring that all AI-generated content be labeled, backed by heavy fines, and created AESIA (the Spanish AI Supervisory Agency) to oversee compliance; on the other, it is heavily subsidizing AI development and innovation. The tension is real: measures meant to protect truth and transparency may impose burdens on small startups; enforcement capacity is far from guaranteed; and legislative clarity lags behind technological change. The Spanish case exemplifies a border zone: regulation and innovation both encouraged, but not always reconciled.

The irony is that regulation is moving fastest on visible harms that generate social alarm such as deepfakes, disinformation, and child safety, while the invisible erosion of labor goes largely unaddressed. It is easier to ban a fake video than to confront a business model that treats human judgment as a disposable cost.

Efficiency is not ethics

This moment forces a deeper question: just because AI can replace a human role, does it mean it should? Not every gain in efficiency is a gain in ethics. Removing moderators may cut costs, but at what price to safety? Automating evaluation may accelerate deployment, but at what risk of error? Displacing white-collar workers may improve margins, but the costs to social stability are clear. Are we all now behaving like Meta, “moving fast and breaking things,” focusing on profitability without paying attention to other potential consequences?

We should all be wary of a future in which AI not only mediates our information but also dictates our labor markets, silently restructuring what it means to be useful. Companies should not outsource that responsibility to regulators. They must recognize that the invisible revolution they are driving has significant human consequences, and those consequences will eventually come back to shape their own legitimacy.

The real invisible hand

The “invisible hand” in today’s AI economy is not Adam Smith’s market. It is the invisible labor that has powered machine learning, and the invisible losses that come when that labor is discarded. The layoffs at Google and the warnings from Anthropic are signals, not outliers. We are watching the early stages of a transformation that could redefine not just how we work, but what kinds of work society still values.

If companies want AI to be sustainable, they need to treat human judgment not as a temporary scaffold to be eliminated, but as a core component of systems that aspire to interact with the world. Without that, we risk building an economy where jobs are interchangeable, oversight is optional, and the human cost of efficiency is hidden until it is too late.

