
What are the ethical implications of Gen AI?

Gen AI's potential lies in striking a balance between innovation and ethical responsibility

[Source photo: Anvita Gupta/Fast Company Middle East]

Forty years ago, the internet fundamentally changed how the world functioned. Successive breakthroughs were built on it: e-commerce, smartphones, blockchain, 3D printing, augmented reality, quantum computing, and, of course, artificial intelligence.

By the end of 2022, generative AI had been released to the public, a technology that blurred the line between human-generated and machine-generated output. But like any groundbreaking innovation, it brought its share of ethical concerns.

A REVOLUTIONARY CHANGE IN TECHNOLOGY

Gen AI blurs the line between human- and machine-generated content through large language models. Its growing “context awareness” lets it deliver contextually relevant responses, fostering human-like interaction.

Beyond text, Gen AI’s multimodal capabilities create images, videos, and audio comparable to human-made works, reshaping our perception of AI’s creative potential.

Amidst this shift, a new survey conducted by BCG and the MIT Sloan Management Review reminds us of the elusive balance required to navigate the ethical implications of AI, especially Gen AI. It emphasizes the urgent need to implement regulatory guidelines to steer innovation while safeguarding human rights.

For the third consecutive year, BCG conducted a global survey to gain insight into how companies handle the delicate balance between advancing AI technology and upholding ethical considerations. The latest survey, conducted amid ChatGPT’s surge in popularity, reveals an improvement in the average maturity of responsible AI (RAI) practices from 2022 to 2023. One development is encouraging: the percentage of companies deemed leaders in responsible AI has nearly doubled, from 16% to 29%.

However, these enhancements are not enough, given the rapid advancement of AI technology. Both private and public sectors need to integrate responsible AI practices alongside their AI development and deployment efforts.

Companies whose CEOs take an active role in responsible AI are far better prepared for AI regulation, with 79% reporting readiness, compared with 22% when CEOs are less involved. Engaged CEOs sustain investment and strategic clarity, leveraging their influence much as they do with cybersecurity and ESG initiatives.

THIRD-PARTY AI TOOLS

Organizations increasingly rely on third-party AI tools, including generative AI systems such as GPT-4, DALL-E 2, and Midjourney. However, these tools carry significant risks: they account for 55% of AI failures, ethical or technical lapses that can expose companies to regulatory, reputational, or other harm.

Vigilance helps here. The survey finds that the more a company scrutinizes its third-party tools, through measures such as vendor certification and audits, the more AI failures it discovers. Unfortunately, companies are doing too little preventive oversight: two-thirds (68%) perform three or fewer checks on third-party AI solutions, leaving them with a higher failure rate.

While companies can depend on their existing third-party risk management protocols to assess AI vendors, embracing AI-specific strategies, such as audits and red teaming, is now necessary. These approaches must be flexible due to AI’s constant and rapid evolution.

RESPONSIBLE AI NEEDS TIME

From the beginning, generative AI has democratized access, putting powerful tools in the hands of all employees, not just big tech companies. That accessibility exposes companies to the challenge of shadow AI: unauthorized usage and development hidden from the company’s management and governance.

Responsible AI needs to be built into the fabric of the organization. As with cyberattacks, human error often underlies AI failures, so improving responsible AI awareness and safeguards requires changes in both operations and culture. Such changes take at least three years to implement, which means companies must act swiftly.

Failing to integrate responsible AI practices into an organization’s operations, especially with the advent of Gen AI, poses numerous risks. Traditional concerns such as introduced bias can lead to litigation and erode trust, and non-compliance with AI regulations can result in legal liability. Gen AI brings new challenges of its own: copyright infringement stemming from limited control over inputs, leaks of sensitive information through data used for fine-tuning or prompt engineering, and misleading outputs that raise corporate risk. These risks are aggravated by the use of third-party Gen AI software, and shadow AI, used by employees without policy adherence, compounds them further.

The BCG report stresses that responsible AI programs need to strengthen their ability to monitor and mitigate the risks associated with third-party AI use, and that these programs must adapt to AI’s ever-changing technical landscape. This stage calls for serious investment in a strong RAI framework.

CEOs also play a pivotal role in fostering responsible AI practices within their organizations, especially as generative AI’s rapid evolution raises the stakes for ethical technology use.

Recognizing responsible AI as a critical strategic capability, CEOs must shape the RAI agenda while acknowledging the time required for RAI programs to mature. To guide their organizations toward responsible technology use, CEOs should focus on a few key actions: demonstrating visible support for RAI implementation, collaborating on a value-aligned RAI strategy, appointing a senior RAI leader, and proactively addressing RAI concerns. In essence, CEOs are central to championing responsible AI practices and embedding ethical technology use across all aspects of operations.

WHAT SHOULD WE DO?

Many countries, as well as local jurisdictions, are contemplating AI regulations. Heavily regulated industries, such as healthcare, financial services, and the public sector, report superior RAI maturity, marked by stronger risk management and fewer AI failures. But treating AI purely as a compliance problem misses the mark.

The AI landscape is shifting rapidly, elevating generative AI’s adoption and its risks. While shadow AI catches companies off guard, regulators are heightening their scrutiny. Strengthening responsible AI is crucial, and responsible AI leadership hinges on responsible practices, not just AI expertise.

ABOUT THE AUTHOR

FastCo Works is Fast Company’s branded content studio. Advertisers commission us to consult on projects, as well as to create content and video on their behalf.
