
Does AI have a role in defining a company’s purpose? Here’s what business leaders say

A company’s purpose or mission statement is a deeply unique and genuine part of its DNA. Can AI discover a company’s “soul”?


Disclaimer: This article was completely written by humans.

How much of the content you consume on a daily basis was written by a person versus a generative AI tool? Would you be more or less willing to accept perspectives offered via AI than those coming from a human? Now, think about a company’s purpose, values, or mission statement, and how each is brought to life by an organization and its employees. Should AI be writing purpose statements, shaping values, or dictating how employees make decisions in line with their company’s mission statement? If your head is spinning, you’re not alone.

The emergence of generative AI tools like ChatGPT, Gemini, Perplexity, Claude, and so many others has transformed the way we work, think, and act. (Let’s get one thing clear before continuing: AI has been around for decades, but it’s the “generative” capabilities that have accelerated the technology of late.)

One question that’s been weighing heavily on our team at Carol Cone ON PURPOSE over the past year is whether generative AI should play a role in defining a company’s purpose. The question is less “will we go out of business?” than “is AI a hindrance or a supercharger for corporate purpose initiatives?” Can a large language model uncover the nuanced insights needed to reflect an organization’s heart and soul?

We believe purpose is a deeply unique and genuine part of a company’s DNA. With that said, we understand the tremendous capability of large language models to distill data and sift through information to surface insights or make connections in a fraction of the time it would take a human. So, we asked recent guests of our Purpose 360 podcast for their take on the question: What is the role of purpose in an AI world?

Purpose is deeply human. AI is not

Our experts agreed on this: “Purpose is more important in an AI-driven world than a non-AI-driven world,” said author and professor Christopher Marquis. “Having humans in the loop is essential because we want to make sure that any technical system is working in ways that are for the benefit of society and humanity and the environment, and not reinforcing some of the challenging issues that exist.”

Daniel Aronson, founder and CEO of Valutus, put it succinctly: “The future of purpose is to become a leverage point for the difference between humans and AI.” We agree. As the generative AI buzz peaked earlier this year, we had an inkling that it would always support, but never replace, the role of humans in defining and developing an organization’s purpose.

For example, some of the most important pieces of information our team has ever surfaced about a company’s eventual purpose have come to light during stakeholder interviews—usually with entry- and lower-level employees within an organization, not the folks writing corporate manifestos. While AI might be useful in creating interview transcripts and summarizing hundreds of conversations with employees, we don’t think it will ever be able to capture what it’s like to spend every day of a career on a factory floor.

“In an AI driven world, I think the role of purpose is to keep us human,” said Jenny Lawson, CEO of Keep America Beautiful. “It is the thing the computer doesn’t have yet.” Put another way, “AI isn’t going to replace the heart,” said Buffy Swinehart, Senior Manager, Corporate Social Responsibility at Aflac.

Unchecked, AI can be deeply biased

Purpose leaders will also be challenged to ensure AI doesn’t perpetuate or introduce inequities when used in their organizations. “AI brings with it a lot of societal risks,” said Martin Whittaker, CEO of JUST Capital. “It can undermine trust in systems and trust in companies, but it can bring about huge benefits as well. And so how that shakes out, how the wealth that gets created from the deployment of AI over the next five years is going to have a major impact.”

When it comes to inequality and bias, social impact innovators should feel empowered to leverage AI against those inequities. For example, Microsoft is putting AI tools to work for disabled people through its AI for Accessibility program, while The Trevor Project launched an AI tool to train crisis counselors before they interact with real people in crisis. But while AI can be a tool to make sure details aren’t missed, as leaders we need to ensure the data sets we provide are inclusive and complete, reflecting not just C-suite opinions, but the full spectrum and diversity of a company’s people and knowledge.

These perspectives underline the role that we, as purpose-driven leaders, will play in the adoption of generative AI tools in our organizations. Will we ultimately become the keepers of a corporation’s “soul”? Kevin Martinez, VP of Corporate Citizenship at ESPN, believes the answer is a resounding yes: “Don’t let anyone tell you you shouldn’t have a voice here. It is absolutely instrumental because the issues of AI that exist right now—our privacy, transparency, truth, being able to identify what is real and not real—are all elements that corporate responsibility should be part of.”

Like any new technology, generative AI presents both opportunities and risks. Here are a few that our experts highlighted for social impact professionals to keep in mind as they experiment with and integrate AI tools in their work.

The opportunities for AI and corporate purpose:

  • Finding & scaling solutions: “From a sustainability perspective, there’s a lot that AI can do to help us to analyze data and focus our efforts,” said Justina Nixon-Saintil, Vice President and Chief Impact Officer at IBM. “We can use AI to solve some of those biggest issues.”
  • Increasing efficiency: “Some people on our team are using AI to help shape the strategies we’re developing,” said Neill Duffy, Chief Executive and Founder at 17 Sport. “It just makes us more efficient. It comes down to execution and it comes down to authenticity.”
  • Supporting employees: “AI can better understand what you need and help connect you to the right jobs, help connect you to the right learning pathways,” said Nixon-Saintil. “I believe it will actually help raise the skill set and the opportunity for many of the communities we work with.”
  • “Living” purpose: “Reaching people with compelling purpose messages is difficult. Purpose can evolve from static communications to engage people in immersive experiences. Imagine AI-powered chatbots having meaningful conversations about a company’s purpose,” said Chris Noble, CEO of Matchfire.

The watch-outs:

  • Sacrificing values: “The idea of purpose is ‘What do you stand for?’ That comes down to values,” said Whittaker. “I like to think those don’t change much. I think that’s something enduring and is a source of great hope for the future as well.”
  • Perpetuating biases: “Just as in any technology that accelerates quickly, you have to be very careful that you do not leave underserved communities behind,” said Nixon-Saintil. “We have to make sure we are reducing biases and that we make the technology available for all.”
  • …and inequities: “I’m concerned by who is making AI,” said Ziad Ahmed, Head of Next Gen at United Talent Agency. “Are they thinking about use cases that empower marginalized communities and that protect those that are most vulnerable? That’s a real crisis because that means AI only serves to perpetuate norms that leave many feeling excluded and even more endemically misunderstood.”

Perhaps the biggest threat generative AI poses to practitioners in this field of work is an eventual overreliance on the tools that use it, and subsequent loss of creativity, critical thinking, and—eventually—trust. In his book Mastering AI, Jeremy Kahn writes that one of the main dangers AI poses is “rewiring our brains, degrading our ability to think critically, write cogently, and even relate to one another, unless we all take decisive action to prevent this.”

As our experts illustrated, it will be important for social impact leaders and teams to be curious about AI, yet mindful of the need to balance such tools with empathy and human intuition. “Empathy is something that no AI will ever have,” Kahn says. “It’s the key to holding onto human preeminence in a world of ever more capable machines.”

That’s why—even before the emergence of generative AI—we defined a corporation’s purpose as “a reason for being beyond profits, grounded in humanity.” It’s the last part that we risk losing when we place more trust in algorithms than human intuition. Both can work in harmony, and we believe the true breakthroughs of the coming months or years will be at the intersections of technology and human ingenuity—as long as we keep personal and company values at the forefront.

Generative AI has changed the way we work. It is up to us how we leverage it for humanity.
