
Can the Middle East fight unauthorized AI-generated content with trustworthy tech?

Fake AI content is fueling consent violations and creating a wave of misinformation, and the tools to stop it remain unclear.

[Source photo: Krishna Prasad/Fast Company Middle East]

Since its emergence a few years ago, generative AI has been at the center of controversy, from environmental concerns to deepfakes to the non-consensual use of data to train models. Among the most troubling issues are deepfakes and voice cloning, which have affected everyone from celebrities to government officials.

In May, a deepfake video of Qatari Emir Sheikh Tamim bin Hamad Al Thani went viral. It appeared to show him criticizing US President Donald Trump after his Middle East tour and claiming he regretted inviting him. Keyframes from the clip were later traced back to a CBS 60 Minutes interview featuring the Emir in the same setting.

Most recently, YouTube drew backlash for another form of non-consensual AI use after revealing it had deployed AI-powered tools to “unblur, denoise, and improve clarity” on some uploaded content. The decision was made without the creators’ knowledge or consent, and viewers were likewise unaware that the platform had altered the material.

In February, Microsoft disclosed that two US and four foreign developers had illegally accessed its generative AI services, reconfigured them to produce harmful content such as celebrity deepfakes, and resold the tools. According to a company blog post tied to its updated civil complaint, users created non-consensual intimate images and explicit material using modified versions of Azure OpenAI services. Microsoft also stated it deliberately excluded synthetic imagery and prompts from its filings to avoid further circulation of harmful content.

THE RISE OF FAKE CONTENT

Matin Jouzdani, Partner, Data Analytics & AI at KPMG Lower Gulf, says more and more content is being produced through AI, whether it’s commentary, images, or clips. “While fake or unauthorized content is nothing new, I’d say it’s gone to a new level. When browsing content, we increasingly ask, ‘Is that AI-generated?’ A concept that barely existed just a few years ago.”

Moussa Beidas, Partner and Ideation Lead at PwC Middle East, says the ease with which deepfakes can be created has become a major concern.

“A few years ago, a convincing deepfake required specialist skills and powerful hardware. Today, anyone with a phone can download an app and produce synthetic voices or images in minutes,” Beidas says. “That accessibility means the issue is far more visible, and it is touching not just public figures but ordinary people and businesses as well.”

Though regulatory frameworks are evolving, they still struggle to catch up to the speed of technical advances in the field. “The Middle East region faces the challenge of balancing technological innovation with ethical standards, mirroring a global issue where we see fraud attempts leveraging deepfakes increasing by a whopping 2137% across three years,” says Eliza Lozan, Partner, Privacy Governance & Compliance Leader at Deloitte Middle East.

Fabricated videos often lure users into clicking on malicious links that scam them out of money or install malware for broader system control, adds Lozan.

These challenges demand two key responses: organizations must adopt trustworthy AI frameworks, and individuals must be trained to detect deepfakes—an area where public awareness remains limited.

“To protect the wider public interest, Digital Ethics and the Fair Use of AI have been introduced and are now gaining serious traction among decision-makers in corporate and regulatory spaces,” Lozan says.

DEFINING CONSENT

Drawing on established regulatory frameworks, Lozan explains that “consent” generally means obtaining explicit permission from individuals before collecting their data, along with clearly stating the purpose of that collection—such as recording user commands to train cloud-based virtual assistants.

“The concept of proper ‘consent’ management can only be achieved on the back of a strong privacy culture within an organization and is contingent on privacy being baked into the system management lifecycle, as well as upskilling talent on the ethical use of AI,” she adds.

Before seeking consent, Lozan notes, individuals must be fully informed about why their data is being collected, who it will be shared with, how long it will be stored, any potential biases in the AI model, and the risks associated with its use.

Matt Cooke, cybersecurity strategist for EMEA at Proofpoint, echoes this: “We are all individuals, and own our appearance, personality, and voice. If someone will use those attributes to train AI to reproduce our likeness, we should always be asked for consent.”

There’s a gap between technology and regulation: the pace of technological advancement has seemingly outstripped lawmakers’ ability to keep up.

While many ethically minded companies have implemented opt-in measures, Cooke says that “cybercriminals don’t operate with those levels of ethics and so we have to assume that our likeness will be used by criminals, perhaps with the intention of exploiting the trust of those within our relationship network.”

Beidas simplifies the concept further, noting that consent boils down to three essentials: people need to know what is happening, have a genuine choice, and be able to change their mind.

“If someone’s face, voice, or data is being used, the process should be clear and straightforward. That means plain language rather than technical jargon, and an easy way for individuals to opt out if they no longer feel comfortable,” he says.

TECHNOLOGY SAFEGUARDS

Still, the idea of establishing clear consent guidelines often seems far-fetched. Some leeway is given due to the technology’s relative newness, but it is difficult to imagine systems capable of effectively moderating the sheer volume of content produced daily through generative AI, a reality echoed by industry leaders.

In May, speaking at an event promoting his new book, former UK deputy prime minister and ex-Meta executive Nick Clegg said that a push for artist consent would “basically kill” the AI industry overnight. He acknowledged that while the creative community should have the right to opt out of having their work used to train AI models, it is not feasible to obtain consent beforehand.

Michael Mosaad, Partner, Enterprise Security at Deloitte Middle East, highlights some practices being adopted for generative AI models. 

“Although not a mandatory requirement, some Gen AI models now add watermarks to their generated text as best practice,” he explains.

“This means that, to prevent misuse, organizations are embedding recognizable signals into AI-generated content to make it traceable and protected without compromising its quality.”

Mosaad adds that organizations also voluntarily leverage AI to fight AI, using tools to prevent the misuse of generated content by limiting copying and inserting metadata into text. 
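To make that idea concrete, here is a minimal sketch, in Python with hypothetical names, of one way a provenance signal can be hidden in generated text: a short tag encoded as zero-width Unicode characters that a checker can later recover. It only illustrates the concept Mosaad describes; production watermarks from model vendors rely on far more robust, typically statistical, schemes.

```python
# Toy text watermark: hide a short provenance tag in AI-generated text using
# zero-width Unicode characters. Illustrative only; not any vendor's scheme.

ZW0 = "\u200b"  # zero-width space -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def embed_watermark(text: str, tag: str) -> str:
    """Insert the tag, encoded as invisible characters, after the first word."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    first_space = text.find(" ")
    if first_space == -1:
        return text + payload
    return text[:first_space] + payload + text[first_space:]

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, if any, from the invisible characters."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="ignore")

if __name__ == "__main__":
    marked = embed_watermark("This paragraph was produced by a model.", "genai-v1")
    print(marked == "This paragraph was produced by a model.")  # False: signal is invisible but present
    print(extract_watermark(marked))  # genai-v1
```

Hidden characters like these survive copy-and-paste but are easy to strip, which is why they are only one layer among the safeguards described below.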

Expanding on the range of tools being developed, Beidas says, “Some systems now attach content credentials, which act like a digital receipt showing when and where something was created. Others use invisible watermarks hidden in pixels or audio waves, detectable even after edits.”  

“Platforms are also introducing their own labels for AI-generated material. None of these are perfect on their own, but layered together, they help people better judge what they see.”
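The “digital receipt” Beidas mentions can likewise be sketched in simplified form. The example below, in Python with hypothetical names and a shared demo key in place of real certificates, issues a small signed manifest recording when and by which tool a piece of media was created, binds it to the file by its hash, and verifies it later. It illustrates the concept only; the actual Content Credentials (C2PA) standard embeds certificate-signed manifests inside the media file itself.

```python
# Toy "content credential": a signed manifest bound to a file by its hash.
# Conceptual sketch only; not the real C2PA / Content Credentials format.
import hashlib, hmac, json
from datetime import datetime, timezone

SIGNING_KEY = b"demo-secret"  # hypothetical key; real systems use asymmetric certificates

def issue_credential(media_bytes: bytes, tool: str) -> dict:
    claim = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "tool": tool,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    body = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return claim

def verify_credential(media_bytes: bytes, credential: dict) -> bool:
    claim = {k: v for k, v in credential.items() if k != "signature"}
    body = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    manifest_ok = hmac.compare_digest(expected, credential["signature"])
    media_ok = claim["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return manifest_ok and media_ok

if __name__ == "__main__":
    image = b"...synthetic image bytes..."
    cred = issue_credential(image, tool="example-image-generator")
    print(verify_credential(image, cred))         # True
    print(verify_credential(image + b"x", cred))  # False: any edit breaks the receipt
```

Because the receipt is tied to the exact bytes of the file, any edit breaks verification; that is why real provenance systems re-sign edited versions and pair these manifests with the invisible watermarks described above.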

GOVERNMENT AND PLATFORM REGULATIONS

Like technology safeguards, government and platform regulations are still up in the air. Yet their responsibility remains significant, as individuals look to them to address online consent violations.

While platform policies are evolving, the challenge is speed. “Synthetic content can spread across different apps in seconds, while review processes often take much longer,” says Beidas. “The real opportunity lies in collaboration—governments, platforms, and the private sector working together on common standards such as watermarking and provenance, as well as faster response mechanisms. That is how we begin to close the gap between creation and enforcement.”

However, change is underway in countries such as Qatar, Saudi Arabia, and the UAE, which are adopting AI regulations or guidelines, following the example of the European Union’s AI Act.

Since they are still in their early stages, Lozan says, “a gap persists in practically supporting organizations to understand and implement effective frameworks for identifying and managing risks when developing and deploying technologies like AI.”

According to Jouzdani, since the GCC already has a strong legal foundation protecting citizens from slander and discrimination, the same principles could be applied in AI-related cases. 

“Regulators and lawmakers could take this a step further by ensuring that consent remains relevant not only to the initial use of content but also to subsequent uses, particularly on platforms beyond immediate jurisdiction,” he says, adding that online enforcement needs to be strengthened, especially when users remain anonymous or hidden.
