
How deepfakes are radically shifting the creator landscape

Adobe chief strategy officer Scott Belsky explains how developers are pioneering new ways to verify human-generated content.


Deepfake technology and the malicious use of AI are causing widespread anxiety, especially as we approach November’s U.S. election. Adobe chief strategy officer and EVP of design and emerging products Scott Belsky explains what deepfakes mean for the media and creator landscape, and how companies like Adobe are pioneering new ways to verify human-generated content for everyday consumers.

This is an abridged transcript of an interview from Rapid Response, hosted by a former editor-in-chief of Fast Company, Bob Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today’s top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.

In the last couple of years, with the rise of generative AI tools like DALL-E, Midjourney, and Adobe’s Firefly, the environment has become a little more complicated. Multiple videos of designers worried about being replaced by AI have gone viral in the last year.

Yeah, in some ways, though, history rhymes, right? Look back at the advent of the digital camera: remember that digital photographers weren’t even allowed admission to many American photography associations. Go even further back, and the people who painted family portraits were horribly offended by the idea that you could click a button and suddenly capture a family portrait using this new thing called film, right? Technology has always unleashed more creative potential.

So many of our customers report spending most of their time in our tools doing mundane, repetitive work. And they always ask us for features that help them become more productive and expressive on the creative side and do less of the annoying stuff. How can we not use this technology to do that? And yet at the same time, how should we train these models? And how should we make sure that people’s IP is protected? How do we make sure that contributors are compensated? We’ve tried to take the highest road possible on these fronts. And yet at the same time, there are folks who have yet to start to play with the technology and we’re trying to push them to experiment with it.

One of the most concerning AI uses is deepfakes—things that look real and sound real, but aren’t. The New York Times had a recent article, I don’t know if you’ve seen it yet, but about scammers deepfaking Elon Musk. And we’ve seen reports of school kids making fakes of classmates, putting the heads of fellow students onto naked bodies. These risks are real.

I like to say that we’re going from the era of “trust but verify” to an era of “verify then trust.” It’s a new world. One thing that Adobe has been trying to lead the pack on, through an open-source nonprofit consortium, is the Content Credentials movement. The idea is that good actors, people who want to be trusted, can add credentials to their content to show how it was made. You can say: this is the model that was used; these are the modifications I made. And you can look at the content and say, “Hey, do I feel that I’ve verified this media enough to trust it or not?” I think that’s going to become the default in the future.
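To make the idea concrete: a content credential binds a provenance record (what tool, what model, what edits) to the content itself, so any later tampering is detectable. The sketch below is a deliberately simplified illustration, not the real Content Credentials format; actual implementations follow the C2PA standard and use cryptographically signed manifests, and every name here (`make_credential`, `ExampleEditor`, `ExampleModel-1`) is hypothetical.

```python
import hashlib

def make_credential(content: bytes, tool: str, model: str, edits: list) -> dict:
    """Attach a toy provenance record to a piece of content.

    A real content credential is a signed C2PA manifest; here we just
    bind the record to the content via a SHA-256 hash.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "tool": tool,    # editing application that produced the content
        "model": model,  # generative model used, if any
        "edits": edits,  # list of modifications made along the way
    }

def verify_credential(content: bytes, credential: dict) -> bool:
    """Check that the content still matches the hash in its credential."""
    return hashlib.sha256(content).hexdigest() == credential["content_sha256"]

image = b"...pixel data..."
cred = make_credential(
    image,
    tool="ExampleEditor",
    model="ExampleModel-1",
    edits=["crop", "color balance"],
)
print(verify_credential(image, cred))         # content unchanged -> True
print(verify_credential(image + b"!", cred))  # content altered -> False
```

The hash only proves the content hasn’t changed since the credential was written; the signatures in the real standard are what let a viewer trust who wrote the credential in the first place.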

As a business person, it sounds quite tricky because on the one hand, you’re saying you have a responsibility—you’re trying to get Silicon Valley to look at the risks and address them. And on the other hand, you know that if you don’t make these tools available quickly, like, someone else will. And so, how restrained can you afford to be to stay on top of the business? I’m thinking of how OpenAI released ChatGPT, in part, because they had nothing to lose and folks like Google who delayed because of the risks, then had to play catch up. So you’re balancing on this. 

Well, we are. And I encourage our teams to anchor themselves on our customers. And here’s the thing: When you talk to the average customer, they are excited about superpowers that make them more productive and help them grow their career. Of course, there are some folks who say, “Hey, you know, burn the boats and let’s just do whatever is out there,” and “Who cares about how it was trained or whatever else.” And then there’s also another group on the other end of the spectrum that says, “Don’t do AI, Adobe.” Like, “Just don’t do it. Just ignore it. Why should you have to use this technology? We don’t want anything to change.” But the average is a large group in the middle that are very pragmatic and responsible about this technology, and those are the folks that we’re taking the pulse of constantly and anchoring ourselves to, and that also feeds the business.

And it’s the biggest customer base.  

100%.

In the political sphere, we recently had Donald Trump alleging that images of Kamala Harris rallies were AI-generated. Are we likely to see even more discussion and conversation about deepfake activity during the election? And is that a distraction or is that, like, significant?

Whether it is the internet, whether it is Bitcoin, these are all technologies that were very much used for illicit purposes first. This technology is no different. And we will certainly see early use cases that are concerning.

I think the most important thing, if you take a step back, is that we talk about the use cases that we do, even though it’s sensational at the moment to write articles about, “Oh my God, a deepfake!” It actually is good, because it’s popularizing the fact that you shouldn’t be trusting everything you see anymore. Or when you hear these stories about someone getting a call in their grandmother’s voice asking for money: it’s a very scary thing, but the answer is not to put a stop to the use of the technology. The answer is to make sure people know how it can be abused, and then develop precautions.

The producer of this show asked the design team at our company what I should ask about Adobe, and the issue they were most interested in was copyright: where copyright rules will evolve in this AI world.

The concerns and questions at the top of creatives’ minds are different now than they’ve ever been in my career.

I’ll proactively mention the debacle we had with the update to our Terms of Use, which had not changed in 11 years. We had made a modification around when content is scanned for child sexual exploitation imagery. But the change was paraphrased by our legal team in a pop-up that customers got when they had to accept the Terms of Use, which said “Adobe has updated its content moderation policy.” That caused some customers to be concerned: “Oh, Adobe is looking at my work.” We were really just doing what’s legally obligated. But then they went in and saw the limited license, the one that lets Adobe resize your content to work on different devices. These are the sort of normal things that anyone in technology knows are par for the course.

But customers were suddenly like, “Wait a second, limited license? Does that mean you’re training on our content?” Now, we’ve never trained on our customers’ content. We’ve always been very, very clear about how our models were trained for generative AI, and about our compensation program for those who do have content in Adobe Stock that’s trained on, et cetera.

But the lesson that I took away was that every company now needs to go back and revisit its policies around this stuff to make sure that they are taking into account the concerns of the modern creator.

We went through the entire Terms of Service, we annotated it. We said exactly what we do and don’t do. Usually companies don’t say what they won’t do in Terms of Service. Usually it’s just what they do, or what they’re getting customers to agree to. We were explicit in what we don’t do. And I’m actually really proud of it. I would say it’s probably the most creator-friendly Terms of Use on the internet right now, as it relates to tools companies. But it was a wake-up call. And so I think that to your point about copyright and copyright law, we have to be very proactive now. The legal department is a really important part of everyone’s strategy now to make sure you do the right thing.


ABOUT THE AUTHOR

Robert Safian is the editor and managing director of The Flux Group. From 2007 through 2017, Safian oversaw Fast Company’s print, digital and live-events content, as well as its brand management and business operations.
