This is how companies can build AI governance that earns trust and drives results

Practical guidance for companies navigating the evolving world of AI

In just a few years, artificial intelligence has evolved from a specialized tool into a technology that is shaping national infrastructure, commerce, law, and public policy. Its ability to operate at unprecedented scale and speed offers organizations opportunities to improve efficiency, streamline processes, and make faster, data-driven decisions.

At the same time, AI presents a paradox. While its capabilities are growing rapidly, the regulatory and ethical frameworks around it remain fragmented, incomplete, and still evolving. For companies in general, and for those in regulated industries in particular, this creates a pressing question: how can organizations build with AI when the rules are still being written?

Roula Khaled, General Counsel at Khazna Data Centers, argues that the solution lies in governance that is principled, responsible, ethical, and scalable, embedded into AI from the development stage. Such governance ensures that AI tools align with human-centered principles, societal values, and fundamental rights. It also safeguards against bias, privacy breaches, and misuse of data while remaining compliant with evolving legal and regulatory frameworks, such as the OECD AI Principles or the EU AI Act. By meeting these criteria, AI systems can anticipate and manage risk safely while earning legitimacy and trust.

At the same time, governance must recognize the efficiency gains AI can deliver and support organizations in harnessing its potential without compromise.

Artificial intelligence is delivering clear efficiency benefits across industries. In legal departments, for example, a 2024 study found that AI can match or exceed the accuracy of human reviewers while significantly reducing turnaround times. A 2025 survey found that nearly 38 percent of in-house legal teams already use AI tools, with another 50 percent exploring adoption. Similar trends are emerging in other professions.

Khaled captures the practical impulse behind this shift when she says, “People are using these tools, often whether they’re compliant or not, because they can get so much more work done with them.” That reality creates pressure on organizations to move quickly while remaining mindful of legal and ethical obligations.

THE COMPLIANCE CHALLENGE

Remaining compliant with AI regulations is a global concern, but the Middle East presents distinct dynamics. Khaled notes that regulatory frameworks in the region are still limited but evolving. “Rather than having one law that is rigid, we can adopt specific guidelines, especially for regulated sectors such as healthcare, financial services, education, and public enforcement,” she explains. Such guidelines could impose stronger controls and stricter audits on AI tools while staying adaptable to regional values and avoiding a drag on workforce productivity.

There are several pathways to ensure compliance. Khaled points to international frameworks such as the Council of Europe’s Framework Convention on Artificial Intelligence, which has been endorsed by over 50 countries and offers inspiration for transparency, accountability, and recourse. Other models integrate ethics, risk foresight, technical governance, and strategic readiness, while some link AI risk to ESG accountability, treating responsible AI as part of a company’s social license. Public-private collaboration is also critical, as many hyperscalers have already developed their own AI codes of conduct.

BUILDING VALUES-BASED GOVERNANCE

According to Khaled, effective AI governance starts with a clear set of values. “Whichever way you choose, the approach should boil down to building a core set of values, whether that’s transparency, fairness, accountability, or anything else,” she says. From these values, organizations can develop “plug-in modules” to accommodate regional differences, such as data sovereignty or disclosure regimes, as well as exemptions for business-critical AI applications.
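To make the “plug-in module” idea concrete, here is a minimal sketch in Python of how a fixed core value set might combine with regional overlays and business-critical exemptions. The framework, names, and rules below are illustrative assumptions, not Khazna’s actual implementation.

```python
from dataclasses import dataclass, field

# Core values apply everywhere; plug-in modules layer on regional rules.
# Every name here is hypothetical, chosen only to illustrate the structure.

CORE_VALUES = {"transparency", "fairness", "accountability"}

@dataclass
class RegionalModule:
    """A 'plug-in' overlay for one jurisdiction's requirements."""
    region: str
    data_sovereignty: bool      # must data stay in-region?
    disclosure_required: bool   # must AI use be disclosed to users?
    exempt_use_cases: set = field(default_factory=set)  # business-critical carve-outs

def approve_tool(tool: str, use_case: str, module: RegionalModule,
                 satisfies: set) -> bool:
    """Approve an AI tool only if it meets every core value, unless the
    use case is an explicitly documented business-critical exemption."""
    if use_case in module.exempt_use_cases:
        return True  # exemptions would still be logged for audit
    return CORE_VALUES <= satisfies  # core values are a hard floor

# Example: a hypothetical regional module with sovereignty and disclosure rules
gcc = RegionalModule("GCC", data_sovereignty=True, disclosure_required=True,
                     exempt_use_cases={"incident-response"})
print(approve_tool("contract-review-llm", "legal-review", gcc,
                   satisfies={"transparency", "fairness", "accountability"}))
```

The design point the sketch tries to capture is that the core check never changes from one market to the next; only the regional module does.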

AI governance must remain dynamic. Khaled notes that tools evolve, regulations change, and new risks emerge. “An AI tool needs to be continuously monitored and updated, from development to post-deployment,” she says. Research likewise highlights the importance of ongoing validation, human-in-the-loop safeguards, and audit trails in managing risk effectively.
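As one illustration of what those safeguards can mean in practice, the sketch below pairs an append-only audit trail with a simple human-in-the-loop gate for low-confidence outputs. The threshold, field names, and routing logic are assumptions for illustration only.

```python
import json
import time

# A minimal sketch of two safeguards named above: an append-only audit
# trail and a human-in-the-loop gate. All values are hypothetical.

AUDIT_LOG = "ai_audit_trail.jsonl"
REVIEW_THRESHOLD = 0.85  # below this confidence, a human must sign off

def record(event: dict) -> None:
    """Append one timestamped entry to the audit trail."""
    event["ts"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def release_output(tool: str, output: str, confidence: float) -> str:
    """Release an AI output directly, or route it to human review."""
    needs_review = confidence < REVIEW_THRESHOLD
    record({"tool": tool, "confidence": confidence,
            "routed_to_human": needs_review})
    if needs_review:
        return f"[PENDING HUMAN REVIEW] {output}"
    return output

print(release_output("clause-extractor", "Termination clause found", 0.72))
```

Because every decision is logged whether or not it is escalated, the trail supports exactly the kind of post-deployment monitoring and audit Khaled describes.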

The AI revolution calls for a new approach to governance. Khaled emphasizes that trust must be earned through robust, adaptable frameworks that are embedded at every level of the organization. Legal teams should partner with engineers, operators, and managers to establish feedback loops that inform AI design, rather than merely responding to failures. “The values-based frameworks that this approach yields will, I’m certain, create real competitive differentiation,” she adds.

Legal clarity, ethical foresight, and operational discipline, she concludes, “should not be constraints to innovation, but they can be part of the foundation.”

ABOUT THE AUTHOR

FastCo Works is Fast Company's branded content studio. Advertisers commission us to consult on projects, as well as to create content and video on their behalf.
