Are businesses in the Middle East doing enough to build trustworthy AI?

Experts say a well-planned strategy is crucial when implementing AI technology.


Nothing in the business world today is more overhyped and less understood than artificial intelligence (AI). While nearly every business claims to be an AI company to some degree, each should also be learning about the responsible use of AI and who should have a hand in building it. Companies must think deeply about AI’s potential for misuse and its negative impacts.

This includes establishing standards for data integrity and testing, among other measures, so that every functional area can safely stand behind any AI it releases.

Like the rest of the world, countries across the Middle East are investing heavily in AI, betting on the technology to diversify their economies away from oil.

A recent BCG survey found that 93% of Middle East C-suite executives plan to increase investments in AI and GenAI in 2024. The survey also revealed that over 60% of these executives anticipate productivity gains of more than 10% from AI and generative AI by 2024. This contrasts sharply with global sentiment, where 66% express dissatisfaction or ambivalence.

This optimism translates into action, with 54% of Middle Eastern executives reporting that their AI/GenAI efforts have progressed beyond experimentation to encompass comprehensive, scaled initiatives.

As businesses embed AI into more parts of their operations, and as AI systems grow more complex, regional enterprises must actively ensure transparency and interpretability to secure customer trust and confidence.

“Explainable AI techniques will be instrumental, particularly in banking applications, in ensuring that decisions made by AI models are understandable and justifiable,” says Ahmed Abdelaal, Mashreq’s Group CEO.

He adds that implementing AI ethics and responsible AI practices to address concerns around fairness and accountability is increasingly recognized as paramount.

According to the 2024 Edelman Trust Barometer: Insights for Tech supplemental report, AI is at a critical crossroads. Globally, 30% of respondents embrace the innovation, while 35% reject it. 

In such a scenario, Dr. Abdulla Al Shimmari, National Experts Program Fellow and Founder and Chief Executive Officer of HCMS.ai, says, “It is crucial to provide compelling reasons for more individuals to transition from rejection. Among those who struggle to adopt AI, the top reason cited is privacy concerns, revealing a growing need to address trust in AI.”

TRANSPARENCY AND ACCOUNTABILITY

Experts say AI must be built and deployed in ways that promote transparency, accountability, and auditability, accelerating the adoption of trustworthy and responsible AI.

Businesses must learn to operationalize AI principles, implement tools to detect risks and biases, and create a cohesive process to address potential harms.

A well-planned strategy is crucial when implementing AI because the technology’s potential applications are vast; the key is to focus on those that deliver short- and long-term benefits for the company.

According to Alex Zhavoronkov, founder and CEO of Insilico Medicine, there are several best practices for adopting trustworthy AI. “First, develop AI to be responsible, trustworthy, and interpretable. Second, test, validate, and publish the testing results in highly peer-reviewed journals and at top AI conferences. Third, test the AI system using a community of expert users who can assess the output.”

“We must also encourage openness and knowledge-sharing to spur independent and wider research and development, which in turn feeds into AI design and development and consequently leads to safer and cheaper products offering accountability and agency to consumers,” adds Abdelaal. “By working with governments and industry bodies, we can work towards the development of clearer standards that mitigate the risks and vulnerabilities of AI.”

Additionally, raising awareness about AI’s benefits and risks through public forums and educational campaigns is essential to building public trust, says Dr. Al Shimmari. Developing certification programs for AI technologies and practitioners can set benchmarks for quality and ethical standards, reassuring users about the reliability and integrity of AI systems.

Enterprises in the region are making significant strides in securing trust and confidence as they invest heavily in AI, says Hassan Alnoon, CEO of Multiverse Innovation Consultancy and CTO of BOTIM. “A vast majority of UAE enterprises prioritize transparency and open communication about their AI initiatives.”

In the region, Dr. Al Shimmari says the UAE remains dedicated to promoting an inclusive AI environment. The launch of the Building Responsible Artificial Intelligence forum exemplifies its drive to build trust in AI while uniting technology leaders from across the globe to discuss collaboration and the future of responsible artificial intelligence.

STRENGTHENING AI LEADERSHIP

At a time when the region is trying to earn trust in its AI capabilities, there is a consensus on creating responsible AI standards and strengthening regional AI leadership.

“We need to invest in a robust workforce skilled in AI, in education and training to ensure we remain resilient and in a strong position to drive AI innovation and adoption,” says Abdelaal. “We must also invest in partnerships with world-leading technology providers to ensure we continue to provide a best-in-class customer experience.”

Alnoon also agrees that collaboration will be key to success. “By forging such partnerships with leading AI companies, research institutions, and governments globally, we can accelerate knowledge sharing, technology transfer, and access to best practices.”

“An example of this is the strategic partnership between G42 and OpenAI. This partnership aims to develop cutting-edge AI technologies and solutions and strengthen the UAE’s position as an AI leader in the region,” he adds.

All agree that a comprehensive approach that focuses on building local capacity, fostering innovation, and promoting responsible AI development is crucial to strengthening AI leadership in the region. “This involves investing heavily in AI research and development, establishing world-class research centers and labs, and attracting top talent from around the world,” adds Alnoon.

Although establishing a global leadership position in AI is challenging, Zhavoronkov says, “The leadership of the key countries should consider looking one or several steps ahead to play the long game in AI and focus on AI for drug discovery, material science, petrochemistry, gas chemistry, and agriculture. These areas require more sophisticated AI systems than large language models trained on a lot of data.”

“Also, in drug discovery, there may be a chance to discover new drugs that will help patients worldwide even when many AI bubbles burst. We are betting on these areas in the region,” he adds.

On July 2, Fast Company Middle East’s Impact Council Artificial Intelligence subcommittee will meet to discuss the significant trends in the field. The subcommittee’s partner is Boston Consulting Group, which has renewed its collaboration this year to deliberate on various AI topics and evaluate the technology from a more sober perspective, with a focus on ethical and beneficial AI implementation.

“Reflecting on the insights and progress from previous editions, we are committed to a deeper exploration of AI’s role in addressing complex regional and global challenges as we help guide the conversation towards impact, innovation, and sustainable growth for our Middle East players,” says Dr. Akram Awad, Partner at BCG, about the AI subcommittee.
