AI can hurt worker morale when managers don’t have these specific skills
AI can improve productivity, but it can also alienate workers. Managers with strong social skills are needed to mediate between employees and technology.
From Microsoft to Barclays Bank, corporations are increasingly using AI to track employee productivity, automate performance evaluations, and recommend job improvements.
A survey released before the pandemic found that 90% of global business leaders were planning to implement or expand AI in their companies. This trend has only accelerated as bosses want to keep an eye on remote workers by monitoring their phone and computer usage.
Companies are banking on AI to improve the productivity of their employees by analyzing large amounts of data with greater precision than humans can. This increases the speed, accuracy, and helpfulness of performance evaluations, offering workers high-quality, detailed feedback at a speed and scale that human managers cannot match.
AI isn’t perfect, however. The use of flawed, biased data complicates the assumption that machines are impartial. Employees fear technology infringing on their privacy and undermining their autonomy. At the end of the day, people are social creatures and, in most scenarios, prefer human interactions over AI.
Over-dependence on AI to monitor employees and provide feedback can crowd out valuable human input and context, such as employee contributions that are not easily quantified. It can leave workers feeling alienated and erode their trust in employers. If managers simply turn over performance feedback duties to machines, employee well-being and job performance may suffer, undoing any potential gains from AI.
For AI to reach its full potential, it must be in the hands of caring managers with strong social skills. Machines can take on physical and, increasingly, cognitive tasks, but human social skills will remain essential for mediating between people and technology.
Last year, my colleagues and I published a study on AI feedback in the workplace at a large financial services company. We found that employees who received AI-generated feedback achieved 13% higher job performance than those who received feedback from human managers.
However, this effect was dampened when employees knew the feedback was coming from a machine: their performance was 5.4% lower than that of workers who were told the feedback came from human managers.
Our research shows that AI can beat human managers in generating high-quality structured feedback, but human managers handily trump AI when it comes to gaining employee trust and buy-in. Surveys found that the employees in our study who were told they were receiving AI feedback were less trusting of the quality of information and more concerned about losing their jobs to machines. This aligns with existing research showing that workers tend to have lower trust in the quality and fairness of AI-generated feedback.
Being conscious of AI’s limitations doesn’t imply that we should discard what it tells us. It simply means using AI-generated information as a resource, rather than a dictate. For instance, in some hospitals, doctors are not allowed to look at AI recommendations before they formulate their own judgments, to avoid being swayed. Managers in the public and private sectors similarly need to draw on AI as one tool in their toolbox.
A good manager—one with tact, empathy, trust, and a nuanced understanding of what AI systems are telling them—can make all the difference in alleviating the tensions caused by technology. Employees respond far better to transformational managers, who build relationships with them and care about their development, than to transactional ones who prioritize maximizing productivity through carrots and sticks.
Through our ongoing research, my colleagues and I have found that employees achieve higher job performance and are more accepting of AI-generated feedback when it comes from managers with transformational leadership styles, as opposed to transactional styles. This suggests that managers can’t just be messengers—they must be mediators between people and machines.
Managers need to understand the information being generated and discuss it with their teams. This must be a collective process to make improvements, where AI is seen as a helper rather than a monitor. If executives don’t accept this and instead opt to place blunt messengers and commanders in managerial roles, the potential gains from investment in AI can be erased due to lack of employee buy-in.
Here are three important steps that leaders and managers can take to reap the benefits of AI without hurting their workforces and bottom lines:
- AI-generated feedback should be discreet and incorporated naturally into workflows. Employees will not thrive if they feel constantly surveilled and evaluated by machines tracking every second of productivity.
- Managers should draw on computer-generated feedback as one source of information and incorporate it into their overall feedback to employees. The feedback should ultimately be seen as coming from them, not the AI.
- Organizations need to recruit, retain, and reward transformational managers with strong social skills who can soften the blow of AI feedback and earn employee trust and buy-in.
A rapid shift to AI in the workplace can backfire if it’s not implemented thoughtfully. It can break down relationships between managers and employees, harm the well-being of workers, and lead to an exodus of talent. It can also cause economic dislocation and tears in our societal fabric, as previous waves of mechanization and automation have.
Social skills will be the most valuable competency in the AI-driven workplaces of the future. For employees to thrive, managers and transformational leaders will need to mediate between technology and workers.
With people in the driver’s seat, we can have the best of both worlds: AI replacing the most burdensome tasks in the office and supporting workers so they can devote themselves to creativity and meaningful interactions.