
The Middle East conflict shows the AI era of misinformation has arrived

Addressing misinformation requires public awareness, collaboration, strong cybersecurity, and staff training

[Source photo: Krishna Prasad/Fast Company Middle East]

During periods of geopolitical tension, misinformation and inaccurate reports often increase. In such uncertain circumstances, the rapid circulation of unverified information can contribute to the spread of misleading narratives, especially when verification processes struggle to keep pace with the speed of digital platforms.

This issue has been further amplified by the development of technologies such as generative AI, which have increased both the volume and complexity of content circulating online. The generative AI market is projected to grow by 560% between 2025 and 2031, reaching $442 billion. Concurrently, 46% of fraud experts report encountering synthetic identity fraud, 37% have observed incidents involving voice deepfakes, and 29% have reported cases involving video deepfakes.

THE RISE OF AI AND FAKE NEWS

According to Maher Yamout, Lead Security Researcher at Kaspersky, periods of geopolitical conflict are typically accompanied by a surge in cyber threats across multiple fronts.

“During geopolitical conflicts, cyber threats tend to increase across several fronts,” Yamout explains. “State-sponsored cyber operations — including advanced persistent threat (APT) groups and hacktivists — often become more active, targeting government networks, critical infrastructure, and high-value industrial or financial systems to conduct cyber espionage or cause disruption.”

He adds that instability also creates opportunities for cybercriminals seeking to exploit heightened public attention and anxiety. “Opportunistic attackers and scammers frequently take advantage of such situations by launching conflict-themed phishing campaigns and scams designed to deceive users into revealing sensitive information such as login credentials, financial details, or other personal data,” Yamout says.

Alongside these threats, attempts to shape public narratives online are also becoming more sophisticated. “At the same time, we may see efforts to spread AI-generated misinformation, fake news, and fabricated videos aimed at influencing public perception and deepening divisions,” he notes.

Yamout also highlights the role artificial intelligence plays in amplifying fake news, noting that cybercriminals increasingly leverage the technology to scale and refine fraudulent activities.

“AI tools enable attackers to quickly generate highly convincing phishing content across multiple formats, including fraudulent emails, SMS and messaging app scams, voice-based scams using AI-based voice cloning, fake customer-support calls, and deceptive social media messages.”

According to Yamout, the accessibility of generative AI tools is accelerating both the scale and sophistication of these threats. “Generative AI is dramatically increasing the scale and sophistication of scams and fake news because these tools are now accessible to a wide audience, including cybercriminals,” he says. “AI enables attackers to quickly craft grammatically perfect phishing emails, create convincing fake websites, and generate deepfake audio or video to impersonate trusted individuals.”

He notes that the widespread availability of such tools lowers the barrier for attackers. “With these tools becoming easier to use and more widely available, even less technically skilled attackers can launch highly effective campaigns,” Yamout explains, adding that this significantly amplifies the reach, personalization, and potential financial and reputational impact of malicious activities.

He also points to the growing prevalence of fake news articles, clickbait headlines, and realistic AI-generated images, audio, and videos that can manipulate public opinion and circulate misleading narratives, often designed to drive traffic to fraudulent websites.

“Generative AI also allows scammers to personalize attacks at scale, making them more believable and harder to detect. As a result, misinformation campaigns and fraud schemes can spread faster and reach wider audiences,” he adds, emphasizing the increasing need for stronger digital literacy, robust cybersecurity practices, and advanced security solutions capable of detecting and blocking AI-assisted threats.

NAVIGATING MISINFORMATION

Yamout believes that both individuals and organizations have a responsibility to adopt a cautious, structured approach when assessing the credibility of online information, particularly during periods of geopolitical tension, when misinformation campaigns tend to intensify.

“Verifying content before sharing it can significantly reduce the spread of false or manipulated information,” he says, outlining several steps users can take when evaluating information online. These include checking the source and author, cross-verifying claims with multiple reputable outlets, looking for signs of manipulation, verifying the authenticity of images and videos, using reliable security solutions, and promoting stronger digital literacy.
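The source-checking step above can be partly automated. The sketch below is illustrative only, not a description of any Kaspersky tool: it checks a link's domain against a hypothetical allowlist of reputable outlets and applies a naive similarity heuristic to catch lookalike (typosquatted) domains. The domain list and the 0.85 threshold are assumptions chosen for the example.

```python
# Illustrative sketch: basic source-credibility check for a news link.
# The trusted-domain list and similarity threshold are assumptions,
# not part of any vendor's guidance.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"reuters.com", "apnews.com", "bbc.com"}  # hypothetical allowlist

def check_source(url: str) -> str:
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTED_DOMAINS:
        return "known source"
    # Flag domains that closely resemble a trusted outlet (e.g. "reutters.com").
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() > 0.85:
            return f"suspicious lookalike of {trusted}"
    return "unknown source - verify independently"

print(check_source("https://www.reuters.com/world/"))  # known source
print(check_source("https://reutters.com/world/"))     # flagged as lookalike
```

A heuristic like this is only a first filter; cross-verifying the claim itself with multiple reputable outlets, as Yamout recommends, remains the decisive step.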

For governments, Yamout emphasizes that collaboration with the private sector is essential to ensure accountability and the dissemination of accurate information. This can be achieved by improving citizens' digital literacy, strengthening cybersecurity solutions, working closely with technology platforms, and launching public awareness campaigns.

Yamout also highlights the critical role of media organizations and social media platforms in limiting the spread of misinformation, particularly during periods of geopolitical tension.

“By collaborating with trusted cybersecurity partners and leveraging expertise in content, technology, and threat intelligence, governments and organizations can better detect and mitigate malicious campaigns,” he says.

He highlights the importance of threat intelligence sharing, noting that coordinated efforts can help track emerging scam tactics, identify malicious domains, and detect AI-generated content used in fraudulent campaigns.

“Journalists and platform teams should be equipped with the skills needed to recognize and respond to scams and misinformation,” he adds, pointing to the importance of staff training, rapid reporting mechanisms for suspicious activity, and automated monitoring systems to help detect and limit the spread of malicious content.
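To make the idea of automated monitoring concrete, here is a toy sketch of a rule-based filter that flags messages combining urgency cues, credential requests, and embedded links for human review. Production systems rely on machine-learning classifiers and threat intelligence feeds; the keyword patterns and two-hit threshold below are illustrative assumptions only.

```python
# Toy sketch of automated monitoring: a rule-based filter that flags
# messages for human review when they combine at least two phishing
# indicators. Pattern lists and threshold are illustrative assumptions.
import re

URGENCY = re.compile(r"\b(urgent|immediately|act now|final notice)\b", re.I)
CREDENTIALS = re.compile(r"\b(password|login|verify your account|bank details)\b", re.I)
LINK = re.compile(r"https?://\S+")

def flag_message(text: str) -> bool:
    """Return True if the message matches at least two indicator patterns."""
    hits = sum(bool(p.search(text)) for p in (URGENCY, CREDENTIALS, LINK))
    return hits >= 2

print(flag_message("Urgent: verify your account at http://example.com/login"))  # True
print(flag_message("Meeting moved to 3pm tomorrow."))                           # False
```

Simple rules like these produce false positives, which is why Yamout pairs automated detection with staff training and rapid reporting mechanisms rather than relying on automation alone.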

