Can AI be the great equalizer in the Middle East? And what are the risks?
Experts discuss job loss, bias, reskilling and equitable access

It was the best of times; it was the worst of times. It was the age of AI joining us at the table, reshaping how industries operate, from education and healthcare to agriculture. It was the epoch of investors; it was the epoch of doubters. It was the spring of transforming productivity and savings with AI-driven automation; it was the winter of human labor getting cut extensively and dividing the best from the rest.
It was when Jensen Huang envisioned a future where workers “are all going to be CEOs of AI agents,” and when Sam Altman warned that “very subtle societal misalignments” could make AI systems wreak havoc. Satya Nadella called for a long view of AI’s development—if AI is allowed to flourish, more profound use cases are on the horizon. “If you can fundamentally accelerate science,” he said, “that could mean new cures for diseases and new ways to help transition away from fossil fuels.” It was when Big Tech took turns playing good cop and bad cop.
This may be a tale of two cities, but it’s hardly a Dickensian story.
Of late, wariness has grown alongside excitement about AI. Among the top hazards in the World Economic Forum’s Global Risks Report, “Adverse outcomes” from AI rank alongside extreme weather events and armed conflict.
Joseph Stiglitz, the Nobel laureate and former chief economist at the World Bank, aptly summed up how AI will affect our lives: “Artificial intelligence and robotisation have the potential to increase the productivity of the economy and, in principle, that could make everybody better off. But only if they are well managed.”
Over the past few years, AI tools have become more accessible, unlocking opportunities in education, healthcare, and the economy. But while the promise of AI is undeniable, the divide between those who thrive and those left behind in the workforce is growing.
“The question isn’t whether we should address these risks — it’s how urgently we’ll act before this polarization deepens,” says Joe Devassy, Director, Strategic Alliances, KPMG Lower Gulf.
Still, there are countless pitfalls to avoid along the way.
JOBS WILL CHANGE
While it is unclear how these shifts will play out, there are concerns about job losses, as AI has already replaced many workers. In recent years, some tech firms, including file storage service Dropbox and language-learning app Duolingo, have cited AI as a reason for layoffs.
“That is a real tension,” says Prof. Elizabeth Churchill, Department Head of Human-Computer Interaction at Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI). “The research suggests some jobs will change and new roles will emerge.”
Changes will play out differently for different occupations. According to the World Economic Forum Future of Jobs Report 2025, about 41% of employers intend to downsize their workforce as AI automates certain tasks. The report spotlights the job market transformation, driven by technological advances and economic and demographic shifts.
“From the report, it seems clear that the jobs set to be created are equivalent to 14% of today’s employment, while 92 million roles will be displaced by these same trends,” says Prof. Churchill. Although that sounds harsh, it also means, she says, there will be a net employment increase of 78 million jobs: the report projects some 170 million new roles created against the 92 million displaced. “We all need to consider what those are and how we can be part of that shift and the transformation.”
WILL AI WORSEN EXISTING INEQUALITIES?
More insidious forces are also at play. Beyond deep concerns about data privacy and the exploitation of customers, there are fears that AI could perpetuate or worsen existing inequalities in employment and other areas if systems are trained on biased data or built on discriminatory algorithms.
“They won’t just reflect existing inequalities — they’ll amplify them,” says Devassy. “This is especially critical in areas like employment, hiring, and diversity initiatives, where unchecked AI risks reinforcing systemic biases.”
There has been much discussion of the potential biases in AI algorithms, especially when screening applicants for jobs. However, Peter Zemsky, Professor of Strategy at INSEAD, points to an even bigger source of inequality. “Just like digital technologies in the last decade, AI can make the top talent in the economy even more productive and able to capture even more income and wealth.”
“Those with the most human and social capital will be best positioned to work hard to secure the highly paid roles,” says Zemsky, but he worries that those in the “middle will see fewer opportunities, and have less incentive to invest in themselves, leading to further amplification of inequalities.”
When it comes to representativeness in AI systems, Prof. Churchill says one approach is to develop open-source models and large language models (LLMs) that better reflect underrepresented languages, which is what MBZUAI has been doing. The university has built advanced open-source models such as Jais (an Arabic LLM) and Nanda (a Hindi LLM) and uses ethnographic approaches to understand how AI systems are developed.
“This addresses the issue of bias and the importance of cultural nuances, looking at how AI system builders and data scientists conceptualize and operationalize diversity and bias, and how they inspect the systems they are building for bias.”
“Our faculty and researchers also contribute to vital datasets for training in other languages, such as Emirati (Arabic) and Indonesian. Non-English speakers must have access to such technology,” she says, adding that women account for 31% of students at the university.
Beyond its impact on work, AI can be expensive to develop and implement, and many worry this will exacerbate existing socio-economic inequalities and create new ones based on technological access.
However, socio-economic inequalities exist, whether they are AI-driven or not. The speed of change is an issue with AI-enabled technologies, platforms, and services. “It is hard to predict how the disparities will play out,” says Prof. Churchill.
Digital inclusivity is increasingly important as the pace of technical development grows, led by innovations in AI.
“At first glance, developing and deploying AI can be costly, which risks deepening the digital divide,” says Devassy. “But here’s the good news: AI is rapidly democratizing. New players like DeepSeek and others are driving down costs, offering AI tools to a broader audience, and lowering the barriers for small businesses and individuals.”
As access to the technology becomes ubiquitous, Mouna Essa-Egh, VP Middle East & Africa, IT Division, Schneider Electric, says the industry is already seeing development and operational costs fall.
She adds that some businesses have already reported successfully running AI workloads and development on commodity hardware and previous generations of GPUs.
“The combination of the scale of the hyperscale operators, and then the ability for smaller groups to leverage lower cost hardware is likely to produce a wider range of affordable and accessible services that will mean almost anyone who wants it can access AI, or AI supported services.”
CONFRONT POTENTIAL PITFALLS
Experts say that while AI holds promise, it is essential to confront head-on the potential pitfalls that may lead to polarization.
“When it comes to polarization, personalization of AI tools must be approached cautiously,” says Prof. Churchill.
If we continue to treat personalization—whether in language models, recommendation engines, or information delivery—as an unquestioned good, we risk deepening divides. For example, she says, many AI tools are heavily optimized for English and other major languages, often leaving low-resource language communities underserved.
“This imbalance can unintentionally reinforce cultural silos and marginalize diverse perspectives,” she says.
While AI systems and tools are becoming more accessible, Prof. Churchill emphasizes that the challenge extends beyond availability: it is also whether individuals are willing to use them outside their comfort zones.
“There are real opportunities to broaden inclusivity, to design AI that serves diverse communities, and to build systems that encourage dialogue rather than division. These paths need to be continually explored,” she adds.
CLOSING THE GAP
No doubt, demand for big data specialists, machine learning specialists, and other roles that use AI tooling is already rising rapidly. But one thing is clear: as AI becomes ubiquitous, the gap between opportunity and despair in the workforce will widen, especially for those without pathways to reskill.
“The future of work is already here — the question is whether we’ll equip people to meet it or watch as the opportunity gap grows wider,” says Devassy.
Organizations must take deliberate, hands-on steps to close this gap. Devassy recommends launching pilot programs that allow employees to experiment with AI tools in low-risk environments. This should be paired with targeted training sessions on AI’s capabilities and implications, and identifying internal champions to generate grassroots enthusiasm.
Leadership should treat AI adoption as a business imperative by creating sandbox environments for testing AI solutions. This will enable teams to evaluate ROI and assess workflow improvements. Additionally, ensuring ethical and equitable use of AI and having internal compliance teams that guide responsible integration and protect against bias and misuse is crucial.
WHAT CAN GOVERNMENTS DO?
Experts agree that governments must also lead the way with reskilling initiatives and AI education to shape the future workforce.
The UAE, which aims to be an AI hub, is setting the pace with initiatives like making AI a formal part of the public school curriculum, One Million Arab Coders to future-proof digital skills at scale, and appointing chief AI officers across government departments to drive adoption.
“These bold, future-facing moves show why government leadership is crucial for ensuring no community gets left behind in the AI age,” says Devassy.
The country is also tackling AI bias head-on. The UAE’s AI Charter Principle No. 3 explicitly calls for eliminating algorithmic bias and ensuring fairness in AI decision-making. Devassy says, “It is a proactive stance showing how governments and private sectors can partner to make AI a force for inclusion, not division.”
Public-private partnerships are also vital in education to help students experience real-world challenges. For this, Prof. Churchill says, MBZUAI has formed numerous strategic partnerships with organizations across a range of sectors and has trained more than 1,000 local senior leaders, equipping them with AI skills and knowledge to inspire and execute AI-led transformations.
Zemsky says governments have a critical role in steering AI’s positive and negative impacts, and must encourage investment in AI applications that foster inclusivity. “What is critical is to invest in AI applications to improve human learning, to make people able to reskill and upskill faster. Done well, these applications can foster inclusion and provide hope.”
While governments need to lead on skills, Essa-Egh says they can only do so effectively when supported by academia and industry. “This triple helix approach has been proven in many spheres to be the most effective way to ensure a developmental pipeline of people with the skills to support economies and societies, far beyond technology. This approach has succeeded in pharmaceuticals, finance, healthcare, and more.”
“Indeed, it has also been developed into a quadruple helix to include community, as cross and upskilling become features to allow people to either return or join the workforce, or be more inclusive for differently abled people within it.”
ENSURING EQUITABLE ACCESS
While widespread adoption is accelerating, multiplying opportunities and risks, the real challenge, Devassy says, is ensuring equitable access and that societies are equipped to handle both the benefits and the disruptions.
Essa-Egh highlights that AI is just another technology, like cloud computing or 5G: transformative, but something that must be learned, explored and evaluated. “We work to ensure that people can get that exposure, whether that be in schools and communities, or through higher education and professional work, to ensure that there is access for anyone who desires it. We ensure that the broadest range of people can enjoy the benefits of these transformative technologies.”
Prof. Churchill explains how smartphones spread rapidly across low-income countries, creating new opportunities for disruptive business ideas and models. Smartphones also advanced financial inclusion, especially among women, and expanded access to education and healthcare.
Might AI do the same if made accessible, affordable, and responsible? Prof. Churchill says, “I don’t believe the choice is between AI and inequality; it’s whether we are willing to shape the AI future to serve all.”