Why OpenAI needs Microsoft on its board

The recent saga shows that OpenAI can’t stop AI development—it can only guide it.

The OpenAI management saga, which began two weeks ago when the organization’s board unexpectedly fired CEO Sam Altman, and which has since featured a series of Succession-worthy twists and turns, seems to be nearing an end. It’s still not entirely clear why Altman was fired in the first place. (Many accounts suggested the board was concerned that Altman was being reckless in his race to commercialize OpenAI’s artificial-intelligence technology, but board members and the company’s interim CEO have said that was not the problem.) But what is clear is that he has emerged on top: He was restored as CEO after 95% of the company’s employees threatened to quit, and in a memo to employees last week wrote that he is “so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world.”

Three of the four board members who voted to oust Altman, meanwhile, are gone, with the new board currently consisting of just three people—chair Bret Taylor, Larry Summers, and holdover Adam D’Angelo. And those three are, according to a memo from Taylor, planning to remake the governance structures at OpenAI, including expanding the board and, most strikingly, giving Microsoft—OpenAI’s most important partner and owner of a 49% stake in the organization’s profit-making subsidiary—a seat as a “non-voting observer.”

The inclusion of Microsoft on the board, even as a non-voting observer, is a dramatic change for OpenAI. It was founded in 2015 as a nonprofit devoted to the creation of artificial general intelligence for the benefit of “humanity as a whole.” And though it started that profit-making subsidiary in 2019 to accelerate the work of developing AI, OpenAI’s mission statement was explicit about the fact that its fiduciary duty is not to investors, but rather “to humanity.” In other words, its mission was to develop AGI, but only if it could be done safely. That’s why the subsidiary was put under the complete control of the nonprofit, which was run by a board of independent directors and included no outside investors or partners who might put commercial interests ahead of the organization’s mission.

Given that, it’s easy to see the decision to add Microsoft to the board as a sign that, ultimately, the money men have won, and that, in the development of AI, commercial imperatives will now trump any safety concerns. But there’s another way of looking at the move, namely that it’s remedying a flaw in the way OpenAI was set up. Not having Microsoft or other investors represented on the board may have seemed like a logical way to insulate the organization from commercial pressure. But over time, it actually made it harder for the organization to fulfill its mission.

Why? The simple answer is that, at this point, OpenAI can’t stop AI development—it can only guide it. If, for instance, it decided to shut down the profit-making subsidiary because it was being reckless in its approach, the problem would not go away. Instead, what nearly happened when Altman was fired would happen in earnest: the staff, and the technology with them, would simply migrate elsewhere, either to startups or to Microsoft, where the only fiduciary duty managers have is to the bottom line, and no mission statement requires anyone to take safety into account.

In other words, shutting the company down would not only fail to make the future development of AI safer; by throwing that development entirely into the commercial realm, it might well make it more dangerous. And since commercial development is inevitable, what OpenAI needs is not a board that’s indifferent to it, but one that strikes a balance between the speed and scope of development and the demands of safety. Paradoxically, a board that includes investors and corporate partners is more likely to strike that balance.

That’s not just because the nonprofit would have a better understanding of what partners like Microsoft want. It’s also because Microsoft would get a better sense of the nonprofit’s safety concerns. That doesn’t mean it would always take those concerns seriously, but at least it would be aware of them.

Of course, there’s no guarantee that OpenAI’s new board will find a way to balance development and safety. (In fact, it may be that, in the absence of government regulation, that balance is impossible to strike.) But what it should be able to do, at least, is avoid the problem OpenAI just faced, namely the risk of blowing the organization up and having Microsoft and others ready to swoop in and pick up the pieces. That would be a much worse outcome, from the perspective of OpenAI’s mission, than keeping the organization intact, even if it means Microsoft exerts more influence. As Michael Corleone put it, keep your friends close, but your enemies—or, in this case, your frenemies—closer.

ABOUT THE AUTHOR

James Surowiecki is the author of The Wisdom of Crowds. He has written business columns for The New Yorker and Slate, and has written for a wide range of other publications.
