
Is AI bad for diversity?

These are the pros and cons of using algorithms to improve workplace fairness.


Few topics have attracted as much attention and controversy as artificial intelligence (AI). In the last year alone, we have witnessed heated reactions and fervent criticism of AI applications for self-driving cars, face and voice recognition, credit scoring, and recruitment.

However, one thing is clear: AI is here to stay. As Hilke Schellmann notes in an excellent review on the subject, the rise of AI seems pretty resistant to all the negative public reaction, and to the occasional horror stories of algorithms gone rogue or breaking bad. So we had better find a way to work with AI, rather than against it.

This does not mean giving up on trying to sanitize or mitigate the pitfalls of AI. Quite the contrary. Leaders and developers must ensure that strict regulations are in place, and that appropriate ethical guidelines are followed so that AI is adopted only for the sake of progress—by which we mean better prospects for all, especially groups and individuals who have been historically disadvantaged.

This raises the question of how AI may impact diversity, equity, inclusion, and belonging (DEIB) interventions, which represent the chief organizational effort to improve fairness at work, so that disadvantaged individuals (mostly people from underrepresented groups) have equal access to opportunities and are not subjected to bias or discrimination. Here, we observe an interesting paradox: the opportunity that AI could bring, versus the clear risks most people fear.

On the risk side, we see some real reasons for concern.

USING UNBIASED DATA TO OVERCOME HUMAN BIAS

Humans are biased by design. Our propensity to think fast and fill in informational gaps by generalizing and jumping to conclusions explains the ubiquity of bias in every area of social life.

Because no amount of unconscious bias training can make humans unbiased, it makes sense to rely on data and tech to make fairer decisions. However, AI can only be unbiased if it learns from unbiased data, which is notoriously hard to come by.

For example, for AI to effectively identify the right candidates for a job, algorithms must be fed past data on successful candidates. In most instances, these profiles will consist not of objectively high-performing individuals, but of individuals who were merely designated as “high-performing” by their managers.

While in principle the formula is similar to humans training AI to label a tree or a traffic light, there is far more subjectivity and bias involved in designating a human as “high-performing.” This bias then transfers to the AI. Until we have massive sets of clean, uncorrupted data, we cannot rule out that we are merely transferring structural and systemic biases from the analogue world to the AI world.
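To make the mechanism concrete, here is a toy sketch with entirely synthetic data: manager labels that partly reward group membership produce a model that reproduces the skew, even though the group attribute itself is never shown to it. The “proxy” feature (say, hours visibly spent in the office) is invented for illustration.

```python
# Toy illustration of bias transfer: synthetic data only; the proxy feature
# is an invented stand-in for anything that correlates with group membership.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                     # group membership; never given to the model
skill = rng.normal(0.0, 1.0, n)                   # true, group-independent ability
proxy = skill + group + rng.normal(0.0, 0.5, n)   # correlates with group membership

# Managers label "high performers" partly on skill, partly on group membership.
biased_label = (skill + 0.8 * group + rng.normal(0.0, 0.5, n) > 0.8).astype(int)

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, biased_label)
pred = model.predict(X)

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: predicted high-performer rate = {rate:.1%}")
```

The group column never enters the model, yet the predicted rates diverge by group, because the proxy smuggles the group signal back in.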

A LACK OF DIVERSE TALENT BEHIND THE DESIGN OF AI

It is rather obvious that your chances of improving diversity increase when you have a diverse team tasked with it. And yet, this is a real challenge in the space of AI.

Consider that the World Economic Forum reports that about 78% of global professionals with AI skills are male. In order to build an unbiased AI solution, the tech sector needs a wider range of perspectives and diversity of thought, particularly to gain awareness of all the potential forces contributing to the (often unwarranted) success of the elite.

This catch-22 is not to be overlooked. A fish doesn’t know what water is, and those who enjoy swimming in the sea of privilege are far more likely to remain unaware of the noise and bias polluting AI’s algorithms. Besides, we cannot expect the turkey to vote for Christmas, or those who benefit from the status quo to challenge it.

And yet, we also see significant opportunities for AI to boost diversity and inclusion.

USE IT AS AN INCLUSION DIAGNOSTIC

Although diversity is hard, inclusion is even harder. Furthermore, diversity without inclusion does not work; it can even backfire. Even when organizations manage to attract and recruit diverse candidates, it is not easy to embed them in existing cultures and ensure that they are not just tolerated, but celebrated for being different. And yet, this is how one strengthens a culture. Hiring for culture fit rather than culture add will, by definition, inhibit diversity.

If we use AI to model or measure inclusion (mapping people’s networks, spotting hidden biases in language use, and so on), we can help organizations diagnose problems. For instance, natural language processing could detect whether diverse employees are addressed with more negative or aggressive words; email metadata could reveal whether diverse employees are excluded from central social networks in the organization; and response-latency measures could reveal whether diverse employees have their emails ignored for longer.
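As a minimal sketch of the first diagnostic, assuming a hypothetical in-house dataset of messages tagged with the recipient’s group, one could compare negative-word rates across groups. The lexicon and data below are purely illustrative:

```python
# Minimal language-use diagnostic: compare the rate of negative words in
# messages sent to different employee groups. Lexicon and data are illustrative.
from collections import defaultdict

NEGATIVE_WORDS = {"aggressive", "difficult", "abrasive", "bossy", "hostile"}

messages = [
    {"recipient_group": "A", "text": "Great analysis, thanks for the quick turnaround"},
    {"recipient_group": "B", "text": "This is too aggressive and difficult to follow"},
    # ...in practice, thousands of messages pulled from internal channels
]

def negative_word_rate(msgs):
    """Share of all tokens that appear in the negative-word lexicon."""
    hits = total = 0
    for m in msgs:
        tokens = m["text"].lower().split()
        total += len(tokens)
        hits += sum(1 for t in tokens if t in NEGATIVE_WORDS)
    return hits / total if total else 0.0

by_group = defaultdict(list)
for m in messages:
    by_group[m["recipient_group"]].append(m)

for group, msgs in sorted(by_group.items()):
    print(f"group {group}: negative-word rate = {negative_word_rate(msgs):.1%}")
```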

Even microaggressions, which are hard to detect intuitively, could be identified this way: algorithms could be trained to spot offensive euphemisms and passive-aggressive mannerisms.
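A toy sketch of what such training might look like, using scikit-learn with invented placeholder examples; a real system would need a large, carefully audited training set and human review of every flag before any action is taken:

```python
# Toy classifier for flagging potential microaggressions. The labeled examples
# are invented placeholders, not a real training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "You're so articulate for someone from your background",
    "Where are you really from?",
    "Thanks for the thorough report, very helpful",
    "Let's schedule the review for Thursday",
]
labels = [1, 1, 0, 0]  # 1 = potential microaggression, 0 = neutral

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["You're surprisingly well spoken"]))  # e.g. [1]
```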

IMPROVE HOW WE MEASURE PERFORMANCE

This is arguably the biggest opportunity to use AI to boost meritocracy. Since most so-called performance metrics (such as annual performance ratings, job interview ratings, and even 360-degree feedback ratings) are contaminated with politics, nepotism, and privilege (all common and pervasive biases), algorithms could be trained to quantify the value workers actually add to their teams, units, and organizations.

This would simply require identifying patterns connecting what employees (and leaders) do with salient organizational outcomes, which would narrow the current gap between an employee’s career success and their actual performance or productivity. Since algorithms can be trained to focus on what matters (sales, revenues, profits, productivity, innovation, engagement, and turnover) while ignoring what shouldn’t matter (gender, race, age, attractiveness, and so on), there is a great opportunity to make performance management more data-driven through the use of AI.
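As a hedged sketch of that idea (the feature names and figures below are hypothetical), one could regress a concrete business outcome on work behaviors while deliberately withholding protected attributes. Audits would still be needed, since excluded attributes can leak back in through proxy variables:

```python
# Hedged sketch: regress a concrete outcome on work behaviors, deliberately
# withholding protected attributes. All feature names and numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: deals_closed, client_meetings, peer_reviews_given
X_behavior = np.array([
    [12, 30, 5],
    [8, 22, 9],
    [15, 35, 2],
    [10, 28, 7],
])
y_revenue = np.array([120_000, 95_000, 140_000, 110_000])

# Protected attributes (gender, race, age, etc.) are never joined to X_behavior.
model = LinearRegression().fit(X_behavior, y_revenue)

features = ["deals_closed", "client_meetings", "peer_reviews_given"]
print(dict(zip(features, model.coef_.round(1))))
```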

AI is not bad for diversity, provided diversity is part of the design itself. This entails strict ethical criteria, like testing for adverse impact and bias, and ensuring that AI improves outcomes compared with how selection is done today. In other words, AI does not need to be perfect in order to be useful. It only needs to be better than what we have today.
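For a flavor of what testing for adverse impact can look like in practice, here is a minimal sketch of the classic “four-fifths rule” check, which flags selection-rate ratios below 0.8; the applicant and selection counts are invented:

```python
# Minimal "four-fifths rule" check for adverse impact: the ratio of the lower
# group's selection rate to the higher group's should not fall below 0.8.
# Applicant and selection counts below are invented for illustration.
def adverse_impact_ratio(selected_a, applied_a, selected_b, applied_b):
    rate_a = selected_a / applied_a
    rate_b = selected_b / applied_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = adverse_impact_ratio(selected_a=30, applied_a=100,   # group A: 30% selected
                             selected_b=18, applied_b=100)   # group B: 18% selected
verdict = "below 0.80: investigate" if ratio < 0.8 else "passes the 4/5 rule"
print(f"adverse impact ratio = {ratio:.2f} ({verdict})")
```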


ABOUT THE AUTHORS

Katarina Berg is the chief human resources officer at Spotify and she’s also head of the company’s Global Workplace Services and Strategy Operations teams. Tomas Chamorro-Premuzic, PhD, is the chief talent scientist at ManpowerGroup and a professor of Business Psychology at Columbia University and University College London. He is the author of Why Do So Many Incompetent Men Become Leaders? (And How to Fix It).
