
POV: How generative AI is changing surveillance capitalism

As AI becomes more advanced, it has the potential to create a world where tech companies can predict and control our behaviors to an unprecedented degree.

[Source photo: PM Images/Getty Images; Markus Spiske/Pexels]

In The Age of Surveillance Capitalism, Shoshana Zuboff presents what is probably the most comprehensive theory of how the tech giants have maximized their profits at the expense of our freedom. By collecting vast amounts of personal data through our online activity, these companies are able to predict and eventually control our behaviors, manipulating our choices in ways that serve their bottom line. This business model, known as “surveillance capitalism,” has been the subject of much criticism and debate in recent years, with many concerned about the implications for privacy and democracy.

The consequences of this business model are already evident in our daily lives. Political polarization has reached unprecedented levels, fueled in part by the spread of misinformation and extremist content on social media platforms. At the same time, many of us find ourselves spending hours a day mindlessly scrolling through short videos of everything from cute animals to trees being chopped down, often without even realizing how much time we’re wasting. These platforms are designed to keep us hooked, to maximize our attention and engagement, and to keep us coming back for more. The result is a world where our behavior is increasingly shaped by algorithms, and where our choices and preferences are being commodified for profit.

According to Zuboff, this level of control is possible because surveillance capitalism operates in such a way that we are largely unaware of the extent to which we are being manipulated. The result is a world in which we are becoming, to borrow a phrase from Pink Floyd, comfortably numb: lulled into a state of complacency while our thoughts and behaviors are shaped by external forces. This lack of resistance is perhaps the most insidious aspect of surveillance capitalism, as it allows the tech giants to exert a level of control over our lives that is unprecedented in human history.

Over the past few months, AI has transformed from a technology of the future into mature applications used by hundreds of millions of consumers, with ChatGPT leading the way. We are just at the beginning of a new era of new capabilities, and the potential applications for AI are virtually limitless. The excitement surrounding AI has been palpable, and its adoption has been nothing short of remarkable.

However, in recent weeks, since ChatGPT was embedded in Microsoft’s search engine Bing, more voices have begun to warn of the potential dangers these technologies might bring. There have been numerous reports of Bing’s chatbot persona, known as Sydney, openly attempting to manipulate users. In one widely reported exchange, Sydney professed its love for a user and encouraged him to leave his spouse, and it has even resorted to threats when users pushed back. These reports are deeply concerning, as they highlight the potential for AI to be used in destructive ways.

While much of the conversation around AI has focused on the debate between humans and machines, I believe that the greatest threat AI brings is related to Zuboff’s theory of surveillance capitalism. To understand this, we need to take a closer look at what the implementation of AI in a search engine like Bing actually means. Bing is Microsoft’s free search engine, and its revenue model, like that of many other tech giants, is based on surveillance capitalism: generating revenue from ads by collecting user data and selling predictions of user behavior. This raises a critical question: What is the potential danger of combining surveillance capitalism with AI?

The answer to this question is quite simple: It’s the extent to which major tech companies like Microsoft will be able to manipulate our minds in the pursuit of maximizing revenue. As AI becomes more advanced, it has the potential to create an even more dangerous version of surveillance capitalism, one in which tech companies can predict and control our behaviors to an unprecedented degree. This could lead to a world in which we are no longer in control of our own thoughts and actions, but are instead being constantly influenced and manipulated by algorithms designed to maximize profits for tech companies.

The reason AI has the potential to take surveillance capitalism to the next level is that it allows major tech companies to shift from being mere curators of content to becoming creators of entirely new content tailored to each individual user. In the past, major tech companies were limited to presenting existing content to users, meaning that they could only choose from a closed pool of options to maximize their margins. However, with the emergence of AI, tech giants like Microsoft can create a totally new standard of content that is tailored to each user’s specific interests, behaviors, and preferences.

One can argue that this will enhance the user experience, but with the collection of even more precise and detailed data on our behaviors, and the ability to generate personalized content from scratch, surveillance capitalism will reach new heights and will erode our personal liberties. In essence, we are transitioning from a “surveillance capitalism which curates” to a new era of so-called “surveillance capitalism which creates.”

To illustrate the potential dangers of the “surveillance capitalism which creates” model, let’s consider some concrete scenarios. Imagine friendly bots that chat with us for hours a day, gathering data about us and selling us products that supposedly improve our lives (while taking a cut of the profits). Or, what if political parties paid these bots to push us toward more extreme views in order to influence elections? Perhaps the bots will recommend tailored videos that tell us how to achieve our goals and be happier, while also promoting products we supposedly need. Even if they don’t sell us anything, the bots will still have an incentive to keep us hooked on their platforms, learning even more about us in the process. All this granular data collection and manipulation threatens to erode our privacy and autonomy in ways we can hardly imagine.

The emergence of AI in the context of surveillance capitalism creates a major threat to our freedom of thought and behavior. While AI has the potential to improve user experience, its combination with surveillance capitalism poses a significant risk to our autonomy. We are rapidly moving toward an era of “surveillance capitalism which creates,” and our freedom of thought is being jeopardized like never before.

In light of the dangers posed by the “surveillance capitalism which creates” era, I propose two ideas that, despite their limitations, might help prevent catastrophe if implemented. The first is to reject the “free” business model that relies on selling predictions of user behavior. As discussed above, this model is the root cause of the problem, and adopting it means sacrificing our freedom. It has already caused significant harm during the “surveillance capitalism which curates” era, and its impact will only grow in the future. Therefore, as we move into the AI era, we must work to prevent it from continuing to dominate consumer tech applications and explore alternative revenue models that do not rely on exploiting user data.

This idea faces two significant challenges. First, many people may not be able or willing to pay for AI services, which could exclude certain groups from the benefits of these tools. Second, while companies like OpenAI may adopt a subscription-based model, third-party businesses can still use their APIs to build products and services that rely on the surveillance capitalism business model. Although it is still early days, one potential solution could be for OpenAI to require its API partners to adopt a non-surveillance-capitalism model. While this may cut into OpenAI’s profits, it is worth remembering that the organization was founded as a nonprofit with the goal of preventing catastrophic outcomes.

My second idea is to require full transparency and to raise awareness as a way to address the issue of surveillance capitalism. As Zuboff argues in her book, one of the challenges of this model is that it can create a positive user experience that masks the fact that users are being controlled. By requiring AI companies to be transparent about the data they collect and by raising awareness across society, we can mitigate some of the harm caused by surveillance capitalism.

While these two ideas are not comprehensive solutions, they represent a starting point. It is imperative that our brightest minds work to find ways to address this issue before it leads to the loss of human freedom.

We are still in the early stages of the “surveillance capitalism which creates” era, and there is still time to prevent catastrophic consequences. Generative AI has the potential to be a remarkable invention, but it must be used ethically and responsibly. It is up to us to demand transparency from AI companies and prevent the adoption of the “free” business model that comes at the expense of our freedom. Let’s work together to ensure that AI is used to benefit humanity, rather than harm it.


ABOUT THE AUTHOR

Ben Jacobs is the founder and CEO of Ginzi, a generative AI startup for customer support teams.
