The Middle East’s next AI challenge? Governance and alignment
Fast Company Middle East’s Agentic AI subcommittee, in partnership with Roland Berger, convened regional leaders to examine the governance, ethics, and alignment challenges shaping the rise of autonomous AI
As artificial intelligence moves from a supportive tool to an autonomous decision-maker, Agentic AI is reshaping how organizations strategize, operate, and govern. The shift is accelerating in the Middle East, where digital transformation is unfolding at a rapid pace, and the economic, social, and ethical implications are substantial.
To understand what this evolution requires, the Fast Company Middle East Impact Council, in partnership with Roland Berger, convened an Agentic AI subcommittee comprising senior leaders from government, technology, and strategy.
Their discussions began with the technology itself. TechTarget describes agentic systems as those that “orchestrate multi-step tasks and make decisions with minimal human intervention,” a capability that delivers efficiency gains while introducing new forms of risk.
“The roundtable focused on how autonomous and agentic AI will shift from pilots to trusted decision-makers in KSA by 2026, driving efficiency and innovation—provided governance keeps pace with ambition,” said Nizar Hneini, Managing Director, Head of Digital and Services, Roland Berger Middle East.
This governance challenge became a central theme for the subcommittee.
According to a Roland Berger report, as systemic risks such as climate change and biodiversity loss grow, agentic AI, built on multi-agent systems, is emerging as a transformative solution. Unlike traditional AI, it delivers holistic insights across climate, politics, supply chains, and markets, helping organizations navigate interdependent challenges and cascading effects. Industry investment is rising in step: tech giants are projected to spend $60–80 billion on AI by 2025, and the green tech market is expected to triple to $73.9 billion by 2030.
Academic research adds another layer, highlighting the risk of “goal drift,” in which autonomous systems pursue unintended objectives as they optimize for performance.
Security and data quality emerged as equally urgent priorities. Council experts stressed that agentic systems are only as dependable as the information they process. Weak or inconsistent data can lead to flawed decisions that scale rapidly once executed by autonomous systems. The risk grows further as these agents interact with external APIs, creating a wider attack surface and increasing exposure to potential vulnerabilities.
To navigate these challenges, the subcommittee brought together leaders from institutions such as NEOM, Misk Foundation, Zakat, Tax and Customs Authority, Cenomi, Salam, and Roland Berger. Their collective insights are helping shape a regional blueprint for responsible AI autonomy. Their work reinforces a clear message: as AI gains agency, organizations must advance just as quickly to guide, govern, and align these systems with human values and long-term societal priorities.