
What happens when the AIs turn on each other?

The arrival of competitive trading algorithms in financial markets started the era of competitive AI. That dynamic could soon leak into the real world, competing in other areas of business and society.


There is something missing from the grand discussion currently underway on AI. Something worrying and urgent. Beyond the spats over AGI and the skirmishes between AI doomers and e/acc fanboys, a pressing question needs to be asked.

What happens when we turn our AIs on each other?

This possibility—let’s call it competitive AI (or CAI for short)—is a strangely absent topic in all the social media bickering. But if you think it is fantasy, think again. At a recent hackathon, coders created LLM Colosseum, in which leading models were taught to play Street Fighter III and made to duke it out.

A colosseum is useful shorthand for an AI category that already exists but could be on the verge of a breakout. Battling AIs won’t remain trapped in screens, shooting fireballs at each other—at least not for long. Soon they will leak into the real world, competing in important areas of business and society.

What will this mean? And why aren’t we talking about it? Yes, it might offer new opportunities and create efficiencies—but that’s the upside. It could also get weird, unfair, and, in some cases, tragic.

THE DEFINITION OF CAI

To keep it simple, CAI is AI bots or agents that directly compete with each other or humans in business or society.

Competitive AIs already exist. They are out there, doing the mundane and the extraordinary, from buying goods to trading shares and even citing legal precedents. They may already be on the battlefield.

Beyond beat ’em ups, CAI is emerging most notably in supply chains, exemplified by startups like Pactum. Simply put, Pactum automates purchasing negotiations: its AI buys resources and goods from smaller sellers on behalf of massive companies like Walmart.

It is just one startup, an interesting one, but the direction it sets is technological competition. It is highly unlikely that Pactum will remain the only buying agent. Its existence demands a response: sellers will surely field negotiation AIs of their own.
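The dynamic is easy to sketch. Below is a toy alternating-offers negotiation, purely illustrative and in no way Pactum's actual method: a buyer agent and a seller agent each concede a fixed fraction of the remaining gap per round until their prices are close enough to strike a deal.

```python
# Toy sketch of automated negotiation between two pricing agents.
# Illustrative only -- real negotiation AIs reason over many terms
# (delivery, volume, payment windows), not a single price.

def negotiate(buyer_start, seller_start, concession=0.2,
              tolerance=1.0, max_rounds=50):
    """Alternating-offers haggle; returns (price, rounds) or None."""
    buyer, seller = buyer_start, seller_start
    for round_no in range(1, max_rounds + 1):
        if seller - buyer <= tolerance:          # offers close enough: deal
            return round((buyer + seller) / 2, 2), round_no
        buyer += concession * (seller - buyer)   # buyer raises its bid
        seller -= concession * (seller - buyer)  # seller lowers its ask
    return None                                  # no deal within the budget
```

The interesting failure mode appears when the concession parameters are mismatched or adversarially tuned: a patient agent facing an impatient one captures most of the surplus, which is exactly the asymmetry a one-sided rollout of buying AIs creates.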

A move to AI-powered, adversarial buying will no doubt be efficient but could create unexpected issues. Before the coming generative wave of CAI sets in, it is useful to look at the precedents.

FLASH CRASHES AND SHOPPING BOTS: THE PRECURSORS TO CAI

If we are going to look at how CAI may play out, it is worth considering existing examples of competitive AI.

The arrival of competitive trading algorithms in financial markets started the era of competitive artificial intelligence, laying some groundwork for understanding the complexities it poses. These algorithms, designed to carry out trades at speeds and in complex patterns that are impossible for human traders, have revolutionized financial trading irreversibly.

It was the rise of high-frequency trading that really started the aggressive shift toward algorithms for competitive advantage. These high-tech trading strategies deploy complex algorithms to sift market data, executing trades many times per second.
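The basic shape of such a strategy is simple to sketch. Here is a toy momentum signal, purely illustrative: real high-frequency systems run on colocated hardware and react in microseconds, and nothing below reflects any actual trading system.

```python
# Toy sketch of an algorithmic trading loop: watch a price stream,
# compute a short-term momentum signal, and emit buy/sell decisions.

from collections import deque

def momentum_signal(prices, window=3):
    """Return 'buy', 'sell', or 'hold' from the last `window` ticks."""
    if len(prices) < window:
        return "hold"
    recent = list(prices)[-window:]
    change = recent[-1] - recent[0]
    if change > 0:
        return "buy"      # price trending up: chase the move
    if change < 0:
        return "sell"     # price trending down: exit the position
    return "hold"

def run(tick_stream, window=3):
    """Feed a stream of prices through the signal, one tick at a time."""
    prices = deque(maxlen=window)
    decisions = []
    for tick in tick_stream:
        prices.append(tick)
        decisions.append(momentum_signal(prices, window))
    return decisions
```

Note the herding risk: when thousands of agents run near-identical signals, a small dip triggers synchronized selling, which deepens the dip and triggers more selling. That feedback loop is one mechanism behind the flash crashes discussed next.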

While trading algorithms have brought a kind of hyper-efficiency to markets, they have also been at the center of new anomalies, most notably flash crashes. Now a well-documented but still unsolved phenomenon, a flash crash is a rapid, uncontrolled fall in share prices over an extremely short time frame, often recovering almost as quickly. The canonical example is the May 6, 2010, flash crash, in which U.S. indices plunged roughly 9% within minutes before largely rebounding.

In other words, AI competition has already resulted in weird, unexpected events. Beyond these strange impacts, it has also created a serious barrier to entry. The companies that win in the markets are the most heavily technologized; those without AI smarts stand a slim chance.

From high-frequency trading, the evolution of competitive AI extended into the consumer sector, notably through shopping bots. These tools are designed to outperform human buyers and other bots in purchasing desirable items, such as the latest gaming GPUs or limited-edition fashion.

The rise of scalper bots has been distorting the markets for goods like PlayStations for years. These bots automate the process of finding and purchasing stock the second it comes online, often reselling at higher prices.
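The core loop of such a bot is trivially simple, which is part of the problem. A minimal sketch, in which `check_stock` and `place_order` are hypothetical stand-ins for calls against a retailer's site:

```python
# Minimal sketch of a scalper bot's core loop: poll a retailer for
# stock and fire a purchase the moment an item appears. Real bots
# layer on proxy rotation, CAPTCHA evasion, and prefilled checkout;
# check_stock/place_order here are hypothetical injected callables.

import time

def scalp(check_stock, place_order, item_id, attempts=100, delay=0.0):
    """Poll until `item_id` is in stock, then buy immediately."""
    for _ in range(attempts):
        if check_stock(item_id):           # is the item purchasable yet?
            return place_order(item_id)    # buy before a human can react
        time.sleep(delay)                  # polling interval (bots keep it tiny)
    return None                            # gave up: never came in stock
```

A human shopper refreshing a page competes against thousands of these loops running in parallel, which is why retail drops of in-demand goods sell out in seconds.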

This development has led to the creation of super scalpers—individuals or groups who leverage sophisticated AI tools to dominate the online resale market. The competitive nature of these AI bots means that as they evolve, the competition becomes increasingly fierce. They are constantly being updated to buy faster, avoid anti-bot measures, and exploit any possible method to outdo both other bots and humans.

The implications of this AI arms race for shares and shopping are far-reaching and instructive. As these tools become more advanced, they concentrate success in the hands of a few, well-resourced operators, at the expense of the ordinary consumer. This could lead to a scenario where in-demand goods become increasingly inaccessible to the average person, creating a divide between CAI haves and have-nots.

Concert tickets anyone?

THE COMING WAVE OF CAI

That’s the history; now back to the present day. Pactum and startups like it represent the coming wave of generative, multimodal CAI. Most are in a lonely evolutionary stage, yet to meet their dance partners. However, this grace period is running out. Without rapid regulation, CAI looks inevitable.

Of all the sectors set to be hit by CAI, law looks particularly vulnerable. Litigation is often adversarial by nature, and we are seeing rapid creation of foundation models and copilots for the sector. Harvey is the most developed legal copilot, backed by OpenAI, and is already being used by law firms.

The “augmented lawyer” phenomenon is CAI at one remove, and it will no doubt reach a chip-frying peak in the upcoming Elon Musk versus OpenAI case. But AI law agents could soon be competing directly over legal matters.

Harvey is hinting that it will be used in direct negotiations. This kind of automated law is already rather bizarre, but it becomes very Inception-like when you consider that the competing agents could all be powered by the same few foundation models. Who wins then? The same underlying model running sock-puppet auto-lawyers, and even judges, in the same dispute is an odd plot twist, one simply not being addressed in popular AI discourse.
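The "same model on both sides" oddity is easy to make concrete. In the sketch below, one shared model callable is wrapped as two adversarial agents that differ only in their instructions; `fake_model` is a hypothetical stand-in for a call to any foundation-model API.

```python
# Sketch of one foundation model powering both sides of a dispute:
# the agents differ only in the role prompt prepended to the facts.

def make_agent(model, instructions):
    """Wrap a shared model callable as a role-played agent."""
    def agent(case_facts):
        return model(instructions + "\n\nFacts: " + case_facts)
    return agent

def fake_model(prompt):
    # Stand-in for an LLM API call; echoes back the role it was given.
    return "Argument drafted under: " + prompt.splitlines()[0]

plaintiff = make_agent(fake_model, "You argue for the plaintiff.")
defense = make_agent(fake_model, "You argue for the defense.")
```

The point of the sketch: nothing distinguishes the two "lawyers" except a few lines of prompt, while every bias, blind spot, and failure mode of the shared model is present on both sides at once.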

The possibility of CAI becoming a weapon in reputation battles seems an inevitable development in a society addicted to culture wars. Startups like Signal AI will form the defense; the offense, for now, will be hacked together, or multistep agents like Devin will be repurposed for meaner tasks.

The thought of sophisticated agents left to wander social media and online archives with the sole purpose of personal takedowns is a sad one. Of course, we are already seeing reputational CAI play out on a larger scale. In elections and geopolitics, smart trolls and propaganda farms are pumping out information pollution, often in direct, automated response to one another.

This CAI “spamnami” will no doubt affect more ordinary activity. With the recent integration of Copilot and Adobe, CAI could soon enter marketing, with ad agents holding adversarial conversations across social media. Adobe now talks of the “content supply chain” as it builds the technology to pump out endless generative imagery and messaging. Combined with CAI, it won’t be long until that output is dynamic, reacting directly to competitors.

That’s right, spam will soon be having its own conversations.

War is, of course, the obvious and most depressing use case for CAI. Battlefield chatbots, such as those tested by the U.S. Army and by Palantir, change the nature of traditional warfare, suggesting a future where AI-driven militaries compete over mud and trenches, embodied in drones and cameras.

This shift toward autonomous defense raises long-pondered yet chilling questions about the role of AI in war. In recent days, there have been reports of an AI system called Lavender being used in the Israeli-Palestinian conflict.

How long until it meets an adversary?

THE “SMART CAPITAL” PROBLEM

The biggest threat CAI poses is the deep inequality it could foster between companies, nations, and individuals. The massive lift in capability that CAI represents for compute-rich, AI-powered entities will not be evenly distributed. We could be on the verge of creating a divided world.

In his optimistic essay “Moore’s Law for Everything,” Sam Altman references this shift-to-AI advantage. As artificial intelligence takes hold, he believes one of the dominant holders of wealth will be “companies, particularly ones that make use of AI.”

The essay outlines how an AI-driven world will diminish the antagonism over resources. However, it does not mention what happens in a world where CAIs are designed explicitly to fight for and gain those resources.

CAI is a coming force that will roll power further toward capital and away from labor. In fact, CAI could create AI-powered “smart” capital, a force that will beat old “dumb” capital, not to mention mortally trumping human labor.

Personally, I have no doubt that CAI agents will soon fight it out for the world’s stuff. The challenge for society will be to ensure that the very real benefits of AI do not come at the cost of fairness and accessibility for those with lesser or slower AI.

We urgently need to start the CAI conversation. The question is not when it is coming, but what to do about it.


ABOUT THE AUTHOR

Patrick Hussey is an enterprise tech consultant who has delivered strategy, seminars, and leadership training on the impacts of generative AI.
