OpenAI’s conflicting objectives of non-profit and for-profit led to the removal and reinstatement of the CEO.

Unlike Google, Facebook, and other internet behemoths, the company behind ChatGPT was not founded to be a business. Its founders hoped that establishing it as a nonprofit would keep it independent of commercial pressures.

That arrangement, however, proved difficult to sustain.

Even though OpenAI eventually added a for-profit business arm, the board of directors of the nonprofit OpenAI Inc. retains control of the company. It was this unusual organizational structure that allowed four OpenAI board members, the company's chief scientist, two outside tech entrepreneurs, and an academic, to remove CEO Sam Altman on Friday.

The sudden departure of one of the most sought-after AI specialists in the world sparked a staff uprising that threatened the organization’s existence as a whole and highlighted the unique structure that distinguishes OpenAI from other tech companies.

Major tech companies almost never have this kind of setup.

Facebook's parent company Meta, along with Google and others, is structured in fundamentally the opposite way: a special class of voting shares unavailable to the general public grants the founders full authority over the company and its board of directors. The inspiration for that arrangement traces to Berkshire Hathaway, which adopted two classes of stock to protect the company and its executives from short-term, profit-seeking investors.

OpenAI's declared goal is to safely build artificial intelligence that is "generally smarter than humans." That mission, and its potential tension with the company's growing commercial success, has been a hot topic of discussion.

"With this board structure, it became clear that they were only thinking idealistically—that is, that we are all in agreement and seek the same goal. And we're going to stay in alignment, so it won't become an issue," said Sarah Kreps, director of Cornell University's Tech Policy Institute.

"I think that's where these issues erupted," she said, referring to the acceleration of AI technology over the last year as additional funding poured in.

The board declined to provide precise explanations for Altman’s termination. Microsoft Corp., which has invested billions in OpenAI, promptly hired Altman on Monday. Along with at least three other people, Microsoft also hired Greg Brockman, the president of OpenAI, who quit in protest after Altman was sacked.

Furthermore, Microsoft has extended employment offers to all of OpenAI's roughly 770 workers. If enough of them accept Microsoft's offer or leave for competitors actively recruiting them, OpenAI could effectively cease to exist, while Microsoft would retain an exclusive license to use most of its current technology.

In a vague statement announcing Altman's dismissal, OpenAI said a review had found he was "not consistently candid in his communications" with the board, which had lost confidence in his ability to lead the company.

The statement neither specified what Altman's purported lack of candor concerned nor provided any examples. The company said only that his actions had made it harder for the board to carry out its duties.

According to Kreps, the board, which "seems to be associated with a safer, more cautious approach" to AI, did itself a disservice by dismissing Altman. By alienating the majority of the employees, it acted in a way that could leave "no company left to implement a pro-safety philosophy."

Following a tumultuous weekend in which one interim CEO was replaced by another, Ilya Sutskever, an OpenAI board member and a driving force behind the shakeup, expressed regret for his role in the CEO's removal.

"I never meant to cause harm to OpenAI," he wrote Monday on X, formerly known as Twitter. "I love everything we've built together and I will do everything I can to reunite the company."

OpenAI had six board members until Friday. The board now consists of Sutskever, OpenAI's chief scientist and co-founder; Adam D'Angelo, the CEO of Quora; tech entrepreneur Tasha McCauley; and Helen Toner of Georgetown University's Center for Security and Emerging Technology.

The number of members on the board was higher earlier this year.

Those who left the board this year include Neuralink executive Shivon Zilis; former Republican U.S. Representative Will Hurd of Texas, who made a brief run for the 2024 presidential nomination; LinkedIn co-founder and investor Reid Hoffman, who co-founded another AI company last year; and Brockman, who departed following Altman's dismissal.

Tesla CEO Elon Musk and Altman served as co-chairs when OpenAI was formed.

If not for a crucial falling out between Altman and Musk in 2018, the board might not have found itself straddling the conflicts between its nonprofit framework and the company’s for-profit arm.

Musk abruptly left OpenAI, reportedly over a potential conflict of interest with Tesla, the electric carmaker that has helped build his personal fortune, currently estimated at over $240 billion.

Earlier this year, Musk said on Twitter that he worried Microsoft was steering OpenAI astray in the pursuit of ever-increasing revenue. Musk recently founded his own AI company, xAI, to take on competitors such as Microsoft, Google, and OpenAI.

OpenAI's board members have not responded to requests for comment. Among the more prominent of the remaining four is D'Angelo, a former Facebook employee who co-founded Quora in 2009 and still serves as its CEO.

In 2018, D’Angelo tweeted, “I continue to think that work towards general AI (with safety in mind) is both important and underappreciated, and I’m happy to contribute.” This was his first tweet after joining the OpenAI board.

As recently as November 6, he publicly weighed in on the prospect of AI surpassing humans, openly questioning the findings of a Google research paper that presented evidence that current AI systems cannot generalize beyond their training data—a result suggesting they are less capable than some scientists had previously believed.

A few months earlier, D'Angelo said on social media that artificial general intelligence would "most likely occur during our lifetimes" and would be "the most significant development in world history."
