Workers are hungry for guidance on using AI, new research shows
Here’s how business leaders can help their employees embrace experimentation.
In recent months, there’s been an explosion of interest in ChatGPT and other generative AI technologies like DALL-E 2 and Google’s Bard. You may have tried out some of these tools yourself or heard of colleagues or competitors using them.
Whether you’re a business leader, manager, or employee, your list of AI concerns and curiosities is probably endless—and you’re not alone. Everyone seems to be scrambling to figure out what ChatGPT means for the future of their work and how to use it effectively and ethically. As generative AI begins to disrupt almost every job and industry, it’s critical to get out ahead of the uncertainty and chart a path forward.
I’ve spent years studying technology and communication in the workplace. Over the past few months, I’ve been speaking to and surveying employees and business leaders across the country about how they’re using ChatGPT and how their organizations are developing AI norms and guidelines. Fortunately, there are concrete steps, rooted in trust and transparency, that you can take to build policies for your organization and harness AI’s benefits while avoiding its pitfalls.
Many professionals are already using ChatGPT at work for tasks such as drafting emails, doing research, analyzing data, generating code, creating memos, and writing social media posts. Recent surveys my colleagues and I conducted reveal that employees are wildly enthusiastic about its benefits. In fact, 82% of early adopters (who have used ChatGPT five or more times) believe it will make them more productive; 85% say it will help generate ideas; and over 70% report that it can improve work quality and help them communicate more effectively.
While employees are experimenting with ways to bring AI into their work, many are unsure how they should be using ChatGPT, especially if their organization doesn’t have AI policies or norms in place. We found that workers are hungry for guidance: the majority of early adopters said clear guidance would make them more comfortable using ChatGPT, and 66% said it would improve efficiency. But we also found that employees are unsure how to broach the topic with their managers.
Managers may have similar reservations, but it’s critical they initiate these discussions. In workplaces that have AI policies in place, the benefits are obvious. Among those surveyed, 100% of early adopters said AI policies boosted efficiency and offered legal protections, 73% said they improved trust, and 67% said policies made them more comfortable using the technology.
Starting an open dialogue about ChatGPT is essential. Around half of employees said they’d be more likely to use AI after having a discussion with their manager. However, our research found that these conversations were only effective if they occurred in a workplace with psychological safety where employees feel comfortable speaking up with their ideas and concerns. If not, the discussions were often counterproductive, making employees either less likely to use AI or more likely to hide using it.
My research clearly shows that employees are craving guidance on how to use AI. How can your organization help them navigate this uncertain new terrain? I believe we must prioritize fostering a culture of trust and inclusivity and build on that foundation to create collaborative AI policies.
Workers are discovering countless unexpected ways to use AI, and the technology continues to advance rapidly. This requires flexible policies that are responsive to change. Employees need the opportunity to experiment with AI and determine how it impacts productivity and work relationships before they can develop informed perspectives. Blanket policies created without transparency and collaboration will hinder this necessary exploration.
The path I recommend is rooted in my research on social contracts: an approach that brings all stakeholders together to experiment collaboratively and develop organizational policies and norms transparently.
Organizations should think of themselves as laboratories where employees can design and engage in structured experiments with new AI technology. Gather employees with varying expertise and roles to have recurring, cross-organizational conversations about their AI use and best practices for integrating it into their work. By involving employees from many levels and divisions, you can identify the most effective ways to capture the value of tools like ChatGPT across your organization.
In between these conversations, employees should be encouraged to explore new uses for ChatGPT. But instead of a free-for-all with each person running their own experiments, aim for a structured and facilitated approach. If everyone on a team is trying out AI in similar ways, it provides common experiences and language to talk about the technology. This can foster bonding, trust, and safety, which are essential for dealing with disruptive, sometimes scary, change.
I’ve spoken with a number of business leaders who are creating AI policy laboratories within their existing organizational structures. A senior manager at a financial services company told me every team spends the first 15 minutes of their weekly meeting discussing how they used ChatGPT over the past week and what they learned. They then share what they will try to do with it in the upcoming week so they can experiment in coordinated ways.
An executive at a well-known Fortune 100 company told me that her organization regularly brings together employees from many functions—marketers, financial analysts, and developers—to share AI use cases and engage in a bottom-up approach to developing best practices. While navigating disruption can often be stressful, she described this approach as invigorating and collaborative.
One manager at a healthcare organization described to me how her company highlighted various uses of generative AI at a series of all-hands town hall meetings. At these meetings, senior leaders gave employees the green light to experiment thoughtfully, and managers from HR, sales, and R&D demonstrated how AI tools could be used in different areas of the business.
At my own institution, the USC Marshall School of Business, we have launched a summer laboratory initiative on generative AI. Each week, a group of professors will explore a new way to use AI in their work. Then, we will meet to discuss four key issues: the teaching benefits of the use case; other potential applications for it; concerns with the use case, including ethics; and the broader implications for business education. The group will document these experiences and share summaries with the entire community.
Not all organizations have the learning-oriented culture necessary to create an AI laboratory. In more hierarchical and rigid organizations, I recommend engaging in pockets of experimentation. You can convene a steering committee of individuals to conduct small, guided experiments in select departments, then report back on their findings and collaborate on recommendations for organization-wide AI policy.
The challenges of implementing any new technology are as much about people as the technology itself. Whatever your process ultimately looks like, it’s critical that it be built on trust, collaboration, thoughtful experimentation, and common experiences. With these key elements in place, you can ensure AI policies are robust, responsive to employee needs and company goals, and nimble enough to evolve.
Generative AI isn’t going anywhere. Your employees—and your competitors—will continue experimenting with ChatGPT. It’s up to you whether you confront this new opportunity directly and thoughtfully, or risk falling behind in the new world AI is opening up.