Who controls the machines that control artificial intelligence?

The hidden arms race of AI supercomputers.

Picture a data center on the edge of a desert plateau. Inside, row after row of servers glow and buzz, moving air through vast cooling towers and consuming more electricity than the surrounding towns combined. This is not science fiction. It is the reality of the vast AI compute clusters, often described as “AI supercomputers” for their sheer scale, that train today’s most advanced models.

Strictly speaking, these are not supercomputers in the classical sense. Traditional supercomputers are highly specialized machines designed for scientific simulations such as climate modeling, nuclear physics, or astrophysics, tuned for parallelized code across millions of cores. What drives AI, by contrast, are massive clusters of GPUs or custom accelerators (Nvidia H100s, Google TPUs, etc.) connected through high-bandwidth interconnects, optimized for the matrix multiplications at the heart of deep learning. They are not solving equations for weather forecasts: they are churning through trillions of tokens to predict the next word.

Still, the nickname sticks, because their performance, energy demands, and costs are comparable to, or beyond, the world’s fastest scientific machines. And the implications are just as profound.

A recent study of 500 AI compute systems worldwide found that their performance is doubling every nine months, while both cost and power requirements double every year. At this pace, the frontier of artificial intelligence is not simply about better algorithms or smarter architectures. It is about who can afford, power, and cool these gigantic machines, and who cannot.

The exponential moat

When performance doubles every nine months but cost doubles every 12 months, you create an exponential moat: each leap forward pushes the next frontier further out of reach for all but a handful of players.
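The arithmetic behind the moat is easy to check. A minimal sketch (the three-year horizon is an illustrative choice, not a figure from the study):

```python
# Project how capability and cost diverge under the doubling times the
# article cites: performance doubles every 9 months, cost every 12.

def growth(months: float, doubling_months: float) -> float:
    """Multiplicative growth factor after `months` at a given doubling time."""
    return 2 ** (months / doubling_months)

horizon = 36  # three years, chosen for illustration
perf = growth(horizon, 9)    # performance multiplier: 2^4 = 16
cost = growth(horizon, 12)   # cost multiplier: 2^3 = 8

print(f"After {horizon} months: performance x{perf:.0f}, cost x{cost:.0f}")
```

Compute gets cheaper per unit (16x the performance for 8x the cost), yet the absolute price of a frontier cluster still doubles every year, and it is the absolute bill that decides who stays in the race.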

This is not the familiar story of “open-source vs. closed-source models”: it is more fundamental. If you cannot access the compute substrate (the hardware, electricity, cooling, and fabs required to train the next generation), you are not even in the race. Universities cannot keep up. Small startups cannot keep up. Even many governments cannot keep up.

The study shows a stark concentration of capability: the most powerful AI clusters are concentrated in a few corporations, effectively privatizing access to the cutting edge of machine intelligence. Once compute becomes the bottleneck, the invisible hand of the market does not produce diversity. It produces monopoly.

Centralization vs. democratization

The rhetoric around AI often emphasizes democratization: tools made available to everyone, small actors empowered, creativity unleashed. But in practice, the power to shape AI’s trajectory is shifting toward the owners of massive compute farms. They decide which models are feasible, which experiments get run, which approaches receive billions of tokens of training.

This is not just a matter of money. It is about infrastructure as governance. When only three or four firms control the largest AI clusters, they effectively control the boundaries of the possible. If your idea requires training a trillion-parameter model from scratch, and you are not inside one of those firms, your idea remains just that: an idea.

Geopolitics of compute

Governments are beginning to notice. At the 2025 Paris AI Action Summit, nations pledged billions to upgrade national AI infrastructure. France, Germany, and the U.K. are each moving to expand sovereign compute capacity. The United States has launched large-scale initiatives to accelerate domestic chip production, and China, as always, is playing its own game, pouring resources into massive wind and solar buildouts to guarantee not only chips, but the cheap electricity to feed them.

Europe, as usual, is caught in the middle. Its regulatory frameworks may be more advanced, but its ability to deploy AI at scale depends on whether it can secure energy and compute on competitive terms. Without that, “AI sovereignty” is rhetoric, not reality.

And yet, there is a darker irony here. Even as governments race to assert sovereignty, the real winners of the AI arms race may be corporations, not nations. Control over compute is concentrating so quickly in the private sector that we are edging closer to a scenario long depicted in science fiction: corporations wielding more power than states, not only in markets but in shaping the very trajectory of human knowledge. The balance of authority between governments and companies is shifting, and this time, it is not fiction.

Environmental reckoning

There is also a physical cost. Training one frontier model can require as much electricity as a small city uses in a year. Cooling towers demand enormous volumes of water, and while much of it is returned to the cycle, siting matters: in water-scarce regions, the strain can be significant. The carbon footprint is similarly uneven. A model trained on grids dominated by coal or gas produces orders of magnitude more emissions than one trained on grids powered by renewables.

In this sense, the “AI sustainability” debate is really an energy debate. Models are not green or dirty by themselves. They are as green or as dirty as the electrons that feed them.
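The “as green as the electrons” point can be made concrete with published lifecycle carbon intensities (IPCC AR5 medians, in grams of CO2-equivalent per kWh). The 10 GWh training-energy figure below is a round hypothetical for illustration, not a measurement of any real model:

```python
# Lifecycle carbon intensity of electricity, gCO2-eq per kWh
# (median values from IPCC AR5; the training-run energy is hypothetical).
INTENSITY = {"coal": 820, "gas": 490, "solar": 48, "wind": 11}

training_energy_gwh = 10                      # hypothetical frontier run
kwh = training_energy_gwh * 1_000_000         # GWh -> kWh

emissions_tonnes = {
    grid: kwh * g_per_kwh / 1_000_000         # grams -> tonnes
    for grid, g_per_kwh in INTENSITY.items()
}

for grid, tonnes in emissions_tonnes.items():
    print(f"{grid:>5}: {tonnes:,.0f} t CO2-eq")
```

The same training run emits roughly 75 times more CO2 on a coal grid than on a wind-dominated one, which is the sense in which a model is only as clean as its grid.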

What efficiency cannot buy

Efficiency alone will not solve this problem. Each generation of chips gets faster, each architecture more optimized, but the aggregate demand continues to rise faster than the gains. Every watt saved at the micro level is consumed by the macro expansion of ambition. If anything, efficiency makes the arms race worse, because it lowers the cost per experiment and encourages even more experiments.

The result is a treadmill: more compute, more power, more cost, more centralization.

What to demand

If we want to avoid a future in which AI’s destiny is set by the boardrooms of three companies and the ministries of two superpowers, we need to treat compute as a public concern. That means demanding:

  • Transparency about who owns and operates the largest clusters.
  • Auditability of usage: what models are being trained, for what purposes.
  • Shared infrastructure, funded publicly or through consortia, so that researchers and smaller firms can experiment without asking permission from trillion-dollar corporations.
  • Energy accountability, requiring operators to disclose not just aggregate consumption but sources, emissions, and water footprints in real time.

The debate should not stop at “which model is safest” or “which dataset is fair.” It should extend to who controls the machines that make the models possible in the first place.

The machines behind the machines

The next control point in AI isn’t software: it’s hardware. The massive compute clusters that train the models are now the real arbiters of progress. They decide what’s possible, what’s practical, and who gets to play.

If history teaches anything, it is that when power centralizes at this scale, accountability rarely follows. Without deliberate interventions, we risk an AI ecosystem where innovation is bottlenecked, oversight is optional, and the costs, from financial or environmental to human, are hidden until it is too late.

The arms race of AI “supercomputers” is already underway. The only question is whether society chooses to watch passively as the future of intelligence is privatized, or whether we recognize that the machines behind the machines deserve just as much scrutiny as the algorithms they enable.
