AI is entering its ‘Napster’ phase. This is who’s going to ‘own’ it next

The CEO of Ironclad says we’ve been hearing that “data is the new oil” for years, and this new era of privately trained AI is poised to prove it.

Much of what we read about AI these days leans toward either breathless excitement or existential fear: Will we reach the singularity and attain impossible levels of achievement? Or will AI be a destructive force, hollowing out our workforce and spinning out of control?

While no one can predict the long-term impact of such a new technology, I think AI’s next act will be shaped less by the breathless futurists and more by the slow-churning safeguards of our legal system and the thinkers at the edge of copyright law.

We have seen this before. Whenever big disruptive technologies emerge, they can rapidly gain users before the law has a chance to catch up.

Consider the rise of peer-to-peer (P2P) file sharing. In 1999, Napster took the world by storm, offering a revolutionary new technology that allowed anyone to contribute, search for, and download content. Overnight, millions of users were exchanging digital files—almost all of them copyrighted material.

I remember being in college when it came out. My friends and I were blown away by how much music we could access on demand. You no longer had to track down a physical copy of what you wanted to listen to. And you could access music at a scale previously unimaginable.

The problem was that the underlying legal model hadn’t caught up. Napster and its clones, LimeWire and Kazaa among them, could attract plenty of users but could not survive the inevitable barrage of lawsuits from content owners.

COMPELLING INNOVATION ISN’T ENOUGH

The lesson is clear: Compelling new innovation is not enough to create an enduring market presence. You also need a business model that allows both innovators and copyright holders to reap rewards within the context of the legal system.

Just as Napster provided a radical new solution to the challenge of accessing music and content, today’s public LLMs offer users compelling value, allowing for near-instantaneous synthesis and delivery of customized information. Instead of having to search for everything and piece it together ourselves, people can access personalized content that meets their precise needs.

The problem is that, just like the P2P companies, today’s generative AI services are dependent on the indiscriminate use of copyrighted content and data. The dirty secret of AI, and LLMs in particular, is that they are trained mostly on material owned by others.

We are already seeing substantial legal threats to the freewheeling use of copyrighted material by LLMs. Google, Microsoft, OpenAI, and others face a range of lawsuits alleging the training of models on copyrighted material, the scraping of user data, the infringement of copyrights on millions of images, and the collection of biometric data without consent. And we are starting to see the fallout from these lawsuits in the diminishing effectiveness of LLMs.

This is just the beginning. Because today’s leading LLMs are trained on essentially everything their makers can access, almost anyone can become a plaintiff, exposing these companies to extreme legal, political, and regulatory risks.

A WAY FORWARD WITH PRIVATE AND RESPONSIBLE AI

The ultimate impact of the P2P companies is that they helped give rise to the legal streaming model that is now the standard for content distribution. Access to music got a lot worse before it got better: it took years, but a legal and economic model eventually emerged that allowed services like Apple Music, Spotify, and others to thrive.

In a strikingly similar way, the commercial use of AI may now be entering its Napster phase: an exciting but legally unsustainable moment of innovation and technical progress. In fact, some think today’s public LLMs may have already peaked, and it might be years before they return to that level of capability. That will happen only once the thorny legal issues around generative AI are resolved by a new legal standard for identifying and managing source content and compensating copyright holders.

The next chapter of AI will be written by those who are building the legal framework that will enable LLMs to provide value responsibly. A lot will be in flux until that framework is built. One safe bet is that the companies that survive the new legal order will be the ones that already own large amounts of their own private data. Not just because they will be able to withstand current legal scrutiny but, perhaps more important, because they are the ones who won’t betray customers’ right to control their own data; they are the ones doing the right thing.

We’ve been hearing that “data is the new oil” for years, and this new era of privately trained AI is poised to prove it. In industry after industry, the new competitive frontier will be the application of AI to proprietary data, as companies take what generative AI does so well, allowing people to find personalized meaning in a large and complex base of data, and put it to work for their businesses and their customers without worrying about exposing their data to competitors or outside sources.

And the exciting thing is that the companies whose AI is private, that consider customers’ rights paramount, and that are likely to withstand whatever legal turmoil lies ahead are already here. They own large amounts of their own private data, and they are years ahead in creating powerful, specialized AI tools that are free from legal risk and entanglement.

As William Gibson once said, “The future is already here. It’s just not evenly distributed yet.”


ABOUT THE AUTHOR

Jason Boehmig is the CEO and cofounder of Ironclad.
