This is how you can tackle AI-fueled war misinformation
Experts say vigilance is the best defense against misinformation: pause and carefully consider content before believing or sharing it.
As the fallout of the US and Israel’s war with Iran reverberates across the GCC states, another issue is unfolding online: a barrage of misinformation.
“AI has made it much easier to produce convincing false content. Which means misinformation and disinformation can spread very fast, particularly during a crisis,” says Javvad Malik, lead CISO advisor at KnowBe4.
Authorities in countries such as the UAE are taking the issue head-on, imposing penalties including imprisonment and fines of around $54,000 for those who spread rumors and false information online.
Despite these warnings, rumors continue to spread on social media. Viral content often includes dramatic clips that are manufactured, repurposed, or completely fake.
Fact-checkers have already flagged several misleading posts gaining traction online amid the ongoing crisis. Fabricated content has run rampant on several platforms: videos depicting popular tourist destinations in a negative light, AI-generated visuals of attacks, and decade-old conflict footage recirculated as if it were current.
Even satellite imagery has been pulled into the misinformation cycle. In many cases, the misinformation spreads far faster than the corrections.
DEEPFAKES AS A TOOL OF INFORMATION WARFARE
Adding to the growing problem, researchers studying AI and media manipulation say this moment reflects a broader shift in how information warfare is waged online. Talal Shaikh, associate professor of AI and robotics at Heriot-Watt University Dubai, explains that deepfakes and generative AI have become powerful tools for shaping narratives during geopolitical crises.
“Deepfakes and generative AI have become powerful tools for information warfare, particularly across the Middle East, where conflicts already carry intense emotional weight,” Shaikh says. “A single fabricated video can inflame tensions, erode trust in legitimate reporting, and shape international opinion within hours.”
He adds that what’s especially worrying is how fast this content spreads online. “We are no longer dealing with crude propaganda. AI-generated content now looks increasingly convincing, making it harder for ordinary citizens and even journalists to distinguish real footage from manufactured narratives.”
WHY DETECTION IS GETTING HARDER
We’ve always known that things aren’t always what they seem, and that not everything we see is true. But today’s technology makes fake information remarkably convincing. Malik points out that the fast pace of innovation is making it harder to detect fakes using technical methods alone.
“While many efforts are being made to analyze images, audio, and videos through technical means, or by looking for ‘tells’, the rate at which the technology is accelerating makes it very difficult,” he says.
Shaikh notes that although certain visual cues can still appear in manipulated media, they are becoming less reliable as AI systems improve.
“Watch for unnatural facial movements, particularly around the eyes and mouth, where lip-sync with speech can break down,” he adds. “Lighting may appear inconsistent, with shadows falling in conflicting directions. Hands and fingers sometimes appear distorted, and background details like text or architecture may warp.”
But visual inspection alone is no longer enough. Verifying the source and context of footage, he says, is often far more reliable.
HOW TO SPOT RED FLAGS IN WHAT YOU SEE
Experts agree that paying attention is the best defense against misinformation. “A human-centric approach is most beneficial, where people are mindful of the content they are exposed to,” Malik says.
“People are more vulnerable to deception when they are in a heightened emotional state, such as being frightened, angry, or shocked. So people should be wary of content that provokes an immediate emotional reaction and be skeptical.”
Shaikh suggests a simple checklist called the “STOP” approach to use before sharing any conflict-related content.
Source: Check who originally posted it and whether credible outlets have corroborated it.
Timeline: Search for the same footage predating the claimed event, as old clips are frequently recycled.
Origin: Use reverse image search to trace where the content first appeared.
Plausibility: Ask whether the scene makes logistical and contextual sense.
“In a region where misinformation can have real-world consequences, taking 30 seconds to verify before sharing is not just good practice but a civic responsibility,” he adds.
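The “Origin” step above relies on reverse image search, which at its core compares compact visual fingerprints of images. As an illustrative sketch only (not how Google Lens or TinEye actually work internally), a simple “average hash” can show why a lightly recompressed copy of a photo still matches its original, while a genuinely different image does not:

```python
import numpy as np

def average_hash(image, hash_size=8):
    """Downscale a grayscale image to hash_size x hash_size,
    threshold each pixel at the mean, and return the bit pattern
    as the image's fingerprint."""
    h, w = image.shape
    # crude nearest-neighbour downsampling, to avoid external deps
    rows = np.arange(hash_size) * h // hash_size
    cols = np.arange(hash_size) * w // hash_size
    small = image[rows][:, cols].astype(float)
    return (small > small.mean()).flatten()

def hamming_distance(hash_a, hash_b):
    """Count differing bits; small distances suggest the same image."""
    return int(np.count_nonzero(hash_a != hash_b))

# a synthetic "original" photo: a horizontal brightness gradient
original = np.tile(np.arange(64, dtype=np.uint8) * 4, (64, 1))
# the same photo after mild recompression (simulated as a uniform shift)
recompressed = np.clip(original.astype(int) + 2, 0, 255).astype(np.uint8)
# an unrelated image: a vertical gradient instead
unrelated = original.T.copy()

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
```

Here `d_same` stays near zero while `d_diff` is large, which is the basic property reverse image search exploits to trace where a photo first appeared even after it has been resized or re-encoded.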
TOOLS THAT CAN HELP VERIFY CONTENT
Several free tools can help people check suspicious media. Shaikh mentions visual search tools like Google Lens and TinEye, as well as verification platforms like the InVID-WeVerify browser plugin, which breaks videos into keyframes for easier analysis. Extensions like Hive AI Detector can also flag potentially AI-generated images.
Malik stresses that the most effective safeguard remains behavioral awareness.
“In terms of technicalities, people can look for inconsistencies in movement, lighting, lip sync, or audio, and check whether the same footage is being reported by credible sources,” he says. “Free tools such as reverse image search, keyframe extraction, and basic forensic platforms can help, but the most important protection is still pausing before sharing and asking who benefits if this turns out not to be true.”
In a digital environment where content travels instantly, that pause may be one of the few remaining buffers between misinformation and mass amplification.