Is the gender gap holding back the Middle East’s AI innovation?

As AI reshapes industries in the Middle East and beyond, the question is no longer whether it will transform work, but who it will leave behind. In a region where women are concentrated in administrative, clerical, and service roles, the risk of gendered displacement is significant, and rising.
Recent data from UNESCO suggests that up to 40% of women’s jobs in the Arab world could be affected by AI if strategic reskilling isn’t urgently implemented. But even more worrying is the invisible threat: AI systems, if left unchecked, may reinforce existing gender biases, not just in who gets replaced, but in whose ideas are heard—and whose problems get solved.
AUTOMATION TARGETS FEMALE-DOMINATED SECTORS
Many women in the region are employed in roles already targeted by automation. Sidra Khan, Director and AI Automation Specialist at Connect AI Solutions, says, “AI and automation are expected to heavily impact administrative and clerical work, retail, and customer service.” These industries are predominantly staffed by women due to “historical gender norms, caregiving responsibilities, and educational pathways,” she adds.
Jessica Constantinidis, Innovation Officer – EMEA at ServiceNow, echoes this, adding that “many women hold positions in administration, education, healthcare, and service sectors—fields that are already undergoing significant transformation due to automation, chatbots, and AI tools.”
This structural risk is compounded by limited digital participation and barriers to reskilling. “Women are overrepresented in clerical, administrative, and service roles—areas most likely to be automated,” says Khan. She highlights additional obstacles such as occupational segregation, part-time or informal employment, and restricted access to upskilling programs in data and AI.
BIAS IS BUILT INTO THE TECH AND THE PROCESS
Beyond which jobs are most at risk, researchers are increasingly focusing on how AI technologies are developed, and who they serve. Michaël Bikard, Associate Professor of Strategy at INSEAD, warns, “AI has been trained on data that reflects the present—and the present is not neutral. To the extent we don’t try to correct it, AI will simply reinforce existing societal biases.”
His concern is backed by data. In a large-scale study analyzing over 10 million scientific papers and their citations in patents, Bikard and his co-authors found that scientific ideas authored by women are consistently cited less often in technological inventions than those authored by men. This gap persists even in a controlled sample of “paper twins”—scientific discoveries published simultaneously by male and female researchers on the same topic—revealing that the disparity is not about the quality or timing of the work.
The researchers investigated whether the gap stemmed from supply-side factors (such as women’s professional standing or access to resources) or demand-side mechanisms (such as bias in how ideas are evaluated). The findings strongly aligned with the latter: inventors paid more attention to and rated scientific abstracts more positively when they believed they were written by men.
“We found no clear evidence that this gap was due to differences in how women work or network,” Bikard explains. “Rather, it reflects a deep-seated bias in who gets listened to—and whose ideas are considered valuable.”
REPRESENTATION WITHOUT RECOGNITION ISN’T ENOUGH
The implications go far beyond citation statistics. As Bikard points out, “There’s a clear risk that problems that affect only women are less likely to be solved because women are underrepresented in AI innovation. Even when women are in the room, we’re less likely to listen to their ideas.”
This systemic undervaluation is compounded by structural inertia. “We all take shortcuts in how we process information,” Bikard says. “If your brain is used to thinking the kind of person who comes up with a good idea looks like a man, you’ll subconsciously give more weight to that idea—even before you’re consciously aware of it.”
In effect, the competition among ideas is not entirely merit-based. Gendered status beliefs determine which ideas move forward, especially in male-dominated fields.
THE REGIONAL RISK OF REINFORCING GLOBAL PATTERNS
This bias has implications for the region’s digital future. While the Middle East has made strides in improving women’s participation in STEM education, women continue to be underrepresented in leadership roles within AI and tech development.
As AI becomes embedded in everything from public services to industrial systems, the lack of diversity among innovators may translate into skewed priorities and missed opportunities.
“We need to be honest that representation alone won’t fix this,” says Bikard. “You could have gender balance among AI professionals and still end up with systems that ignore or undervalue women’s contributions—unless we consciously intervene at every step of development and evaluation.”
RETHINKING WHO BUILDS AND BENEFITS FROM AI
Anush Prabhu, Founder and Managing Director at Mantaga, sees the danger less in automation itself than in how people and organizations respond to it. “The real vulnerability lies not in any one job title or sector but in the failure to adapt,” he says. “Even human-centric fields like elder care and nursing are already seeing robotics and AI handle routine support functions.”
Prabhu emphasizes the opportunity to shift from job dependency to creative problem-solving. “We view AI not as a replacement, but as an equalizer—an everyday tool that empowers individuals to build, earn, and create value.”
Bikard similarly notes that AI is a general-purpose technology with the potential to be applied to virtually anything. “The question is, out of all the problems we could solve, which ones will we prioritize? That depends on who’s innovating. And if most AI solutions come from male-dominated industries or interests—like gaming or autonomous vehicles—sectors heavily populated by women may be left behind.”
SEIZING A NARROWING WINDOW FOR CHANGE
As AI rapidly transforms the future of work, society stands at an inflection point. “We’re building a new world—and this doesn’t happen often,” Bikard says. “The window is open now to ensure it’s a fairer world, but that window won’t stay open forever.”
He advocates for structural changes at three levels: ensuring AI systems themselves are unbiased; helping users, funders, and evaluators recognize their subconscious biases; and broadening participation in innovation. “An FDA-style regulatory body for AI might help ensure we don’t release biased systems into the wild without knowing the risks,” he adds.
EQUITY AS INNOVATION STRATEGY
Constantinidis says the danger isn’t that AI will take everyone’s job—the danger is that “women’s economic agency could be eroded because we didn’t build inclusive systems from the start.”
Or as Bikard puts it: “Innovators create the world in which our kids will live. If the future is built primarily by men, it risks being a world imagined for men.”
Ensuring equitable AI adoption in MENA, and globally, requires more than reskilling alone. It demands a cultural and institutional reckoning with who gets to define the problems AI will solve, and whose ideas we decide are worth building on.