The real AI threat is algorithms that ‘enrage to engage’
Feeds that amplify hysterical content are accelerating extremism and a grievance society that endangers us all.
Media personalities and online influencers who sow social division for a living blame the rise of assassination culture on Antifa and MAGA. Meanwhile, tech CEOs gin up fears of an AI apocalypse. But both narratives are smokescreens hiding a bigger problem: algorithms decide what we see, and in trying to win their approval, we’re changing how we behave.
Increasingly, that behavior is violent. The radicalization of young men on social networks isn’t new. But modern algorithms are accelerating it.
Before Facebook and Twitter (now X) switched from showing the latest posts from your friends at the top of your feed to surfacing crazy, outrageous posts from people you don’t know, Al Qaeda operatives quietly recruited isolated and disillusioned young men to the caliphate one by one. But the days of man-to-man proselytizing have long since given way to opaque algorithms that display whatever content gets the most likes, comments, and shares.
“Enrage to engage” is a business model. Algorithmic design amplifies the most hysterical content, normalizing extremist views to the point where outrage feels like civic participation. It’s a kind of shell game.
Here’s how it works (a simplified sketch of the ranking step follows the list):
- Politicians and CEOs spin apocalyptic narratives
- Online influencers chime in
- Algorithms spread the most outrageous content
- Public sentiment hardens
- Violence gains legitimacy
- Our democracy erodes
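To make the middle of that loop concrete, here is a minimal sketch, in Python, of the engagement-weighted ranking the list describes. The posts, weights, and scoring function are invented for illustration; no platform publishes its actual formula.

```python
# Toy model of engagement-weighted ranking (illustrative only).
# Each post carries raw engagement counts; the feed sorts purely by a
# weighted score, so the most reaction-provoking post rises to the top
# regardless of accuracy, tone, or when it was published.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> int:
    # Hypothetical weights: comments and shares imply deeper engagement
    # than a passive like, so they count for more.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Chronology and friendship play no role here: visibility is
    # decided by predicted reactions alone.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Photos from a friend's hike", likes=120, comments=5, shares=0),
    Post("Local zoning meeting recap", likes=40, comments=2, shares=1),
    Post("THEY are coming for YOUR job!!!", likes=55, comments=90, shares=60),
])
for post in feed:
    print(engagement_score(post), post.text)
```

In this toy model, the all-caps outrage post scores 625 against the hike photos’ 135, so it takes the top slot even though it drew fewer likes; provoking comments and shares is the whole game.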
The algorithms don’t just amplify; they also decide who sees what, creating parallel worlds that make it harder for us to understand members of the opposing tribe. For example, Facebook’s News Feed algorithm prioritizes posts that generate emotional reactions. YouTube’s recommendation system steers viewers toward similar content that keeps them watching. And TikTok’s For You page keeps users glued to the app with a recommendation system that remains a black box to outsiders.
You search for a yoga mat on your phone, and the ranking algorithms decide you’re a liberal. Your neighbor searches for trucks, and the system tags them as a conservative. Before long, your feed fills with mindfulness podcasts and climate headlines, while your neighbor’s features off-roading videos and political commentary about overregulation. Each of you thinks you’re just seeing “what’s out there,” but you’re actually looking at customized realities.
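The sorting side can be sketched the same way. The snippet below is again a toy, with invented interest tags and a made-up content catalog rather than any real platform’s taxonomy; it shows how quickly two users’ feeds stop overlapping once the system infers an interest profile and filters everything else out.

```python
# Toy model of interest-based personalization (illustrative only).
# The system tags content, infers a user's interests from past activity,
# and then shows only matching posts, producing "customized realities."

from collections import Counter

CATALOG = {
    "yoga mat review": {"wellness"},
    "mindfulness podcast": {"wellness"},
    "climate headline": {"wellness", "news"},
    "off-roading video": {"trucks"},
    "overregulation commentary": {"trucks", "news"},
}

def personalize(history: list[str], catalog: dict[str, set[str]]) -> list[str]:
    # Infer the user's top interests from what they have engaged with,
    # then keep only the posts whose tags overlap those interests.
    interests = Counter(tag for item in history for tag in catalog.get(item, set()))
    top_tags = {tag for tag, _ in interests.most_common(2)}
    return [post for post, tags in catalog.items() if tags & top_tags]

print(personalize(["yoga mat review"], CATALOG))
# -> ['yoga mat review', 'mindfulness podcast', 'climate headline']
print(personalize(["off-roading video"], CATALOG))
# -> ['off-roading video', 'overregulation commentary']
```

One search is enough to split the catalog: neither user ever sees the other’s feed, and each mistakes their filtered slice for “what’s out there.”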
So far, the killing of right-wing activist Charlie Kirk, along with the brutal killings of Minnesota lawmaker Melissa Hortman and her husband, Israeli embassy staffers Sarah Milgrim and Yaron Lischinsky, UnitedHealthcare CEO Brian Thompson, and Blackstone real-estate executive Wesley LePatner, has been attributed to a rising wave of political violence. More likely, these killings are the result of online radicalization accelerated by social media algorithms.
Given the snail’s pace of our judicial system and the labor-intensive process of reconstructing someone’s path to radicalization online, the smoking gun is elusive. In the 2018 Tree of Life synagogue shooting, it took five years to reach a conviction. In the meantime, more people consumed extremist content, giving rise to what the FBI now calls nihilistic violent extremism: violence driven less by ideology than by alienation, performative rage, and the quest for social status. By the time one case is resolved, new permission structures for violence have taken root, showing just how powerless our legal system is at policing social media platforms.
What drives these communities isn’t ideology so much as a search for belonging, status, and personal power. The need for validation is intertwined with whatever or whoever is commanding the most attention at any given moment. These days, the issue that has captured the most attention is an AI apocalypse. “As new grievances take shape around artificial intelligence and national fears of job loss, technology executives are increasingly exposed to threats of physical violence,” says Alex Goldenberg, director of intelligence at Narravance, which monitors social media in real time to detect threats for clients.
Are predictions of AI joblessness stoked by algorithmic fear-mongering a recipe for social unrest? While high-profile tech CEOs have long traveled with security details, new data suggests those threats have extended to all corporate sectors. A study of over 2,300 corporate security chiefs at global companies with combined revenues exceeding $25 trillion found that 44% of the companies are actively monitoring mainstream social media, the deep web (content not indexed by Google), and the dark web (where criminals and dissidents go for cover). Two-thirds of those companies are increasing their physical security budgets in response to rising online threats, according to the study by security company Allied Universal.
“Before December, fewer than half of CEOs had any kind of executive protection. Now boards are demanding it,” says Glen Kucera, president of Allied Universal. Executives can account for an estimated 30% of a company’s value, and shareholders want them protected. Companies are responding by hardening their perimeters, hiring armed escorts and social media threat analysts, and addressing vulnerabilities at executives’ homes. For CEOs, AI is both a windfall and a minefield: too lucrative to ignore, but too unsettling to discuss freely. “High-profile people making controversial announcements about AI are at higher risk,” says Kucera.
According to Michael Gips, managing director at multinational financial and risk advisory firm Kroll, these findings fit into a broader trend. “We’re living in a grievance culture now,” he says. “If there’s something to be grieved about, the risk is there.”
Even the people shaping this technology acknowledge its risks. Sam Altman, the CEO of OpenAI, has said he believes the worst case for AI is “lights out for all of us.” Elon Musk has made similar warnings, cautioning that there’s “some chance that [AI] goes wrong and destroys humanity.” OpenAI cofounder Ilya Sutskever reportedly talked about building a doomsday bunker for OpenAI engineers in the post-AGI world.
Narravance analysts say apocalyptic narratives around AI—especially those centered on job loss—promote online radicalization. After reading dystopian narratives about AI-driven unemployment, 17.5% of U.S. adults in a statistically significant sample said violence against Musk would be justified. Musk’s remark about universal job loss spread rapidly across social platforms, stripped of nuance, meme-ified, and reframed as a prophecy of societal collapse. In online communities where people are hungry for belonging and validation, Musk’s rhetoric becomes the basis of “permission structures” that rationalize violence.
Prior to his resignation from the Department of Government Efficiency (DOGE), negative sentiment toward Musk was higher still: in March 2025, nearly 32% of Americans said they believed his assassination would be justified, according to another Narravance study. Altman himself has written on his blog, “The development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.” The more tech leaders issue dire predictions, the more support for violence against them grows.
Alarmingly, Narravance also found that respondents said violence would be justified against Alex Karp, CEO of surveillance and defense AI company Palantir (15.4%), Meta CEO Mark Zuckerberg (14.5%), Amazon founder Jeff Bezos (13.8%), and OpenAI CEO Sam Altman (13.3%).
Fear of obsolescence
“As soon as Charlie Kirk was assassinated, a video went around the world. Ten-year-olds saw it within hours,” said Jonathan Haidt, author of The Anxious Generation, at the Fast Company Innovation Festival.
Haidt argues that since 2012 the share of adolescents who say their lives feel “useless” has more than doubled, and that boys in particular, left without traditional guidance and immersed in social media, gaming, and pornography, are struggling to find a path to adulthood.
“If you’re a boy, and your life feels useless, and you see no future, everything is about getting fame or money. You have to get rich quick or become famous, otherwise you’ll lose in the mating game,” says Haidt. “Boys around the world, historically, have gambled. Do something big. Get recognition,” he says.
A former senior social media executive who spoke on the condition of anonymity says negative narratives create desperation. “When you give people doom scenarios, they’re going to be willing to do outrageous things,” he says. It’s an unfortunate by-product of the social media business.
Social media meltdown
“Social media is a cancer,” Utah Governor Spencer Cox said on 60 Minutes a few weeks after Kirk’s murder. “It’s taking all of our worst impulses and putting them on steroids . . . driving us to division and hate. These algorithms have captured our very souls.” His dire warning underscores how platforms reward outrage, feed polarization, and erode the boundaries that once kept political disagreement from spilling into violence and chaos.
In another interview, on Meet the Press, Cox argued that social media companies have “hacked our brains,” getting people “addicted to outrage” in ways that fuel division and erode agency. He said he believes that social media has played a direct role in every assassination or attempt in the past five to six years. “The conflict entrepreneurs are taking advantage of us, and we are losing our agency, and we have to take that back,” he said.
When outrage gets amplified, all engagement looks like endorsement, and people mistake it for truth, even though it may be false or, worse yet, coordinated inauthentic activity spun up by the Chinese-controlled TikTok algorithm or Russian bot farms.
According to a report from safety research nonprofit FAR.AI, artificial intelligence is already more persuasive than humans, and frontier LLMs are guiding political manipulation, disinformation, and terrorism recruitment efforts; the risks are multiplying fast. Predictions of a dystopian, jobless AI future pale by comparison.
The real threat is the erosion of human judgment itself. The existential risk of AI—first raised in 1976 by computer scientist Joseph Weizenbaum in his prescient book Computer Power and Human Reason—is not joblessness or humanity suspended in Matrix-style bio-pods. The danger isn’t sentient machines. It’s algorithms engineered to keep us engaged, enraged, and endlessly divided. The apocalypse won’t come from code, but from our surrender to it.