Sam Altman is being reinstated as the CEO of OpenAI after unexpectedly being removed from the position by the board last Friday.
OpenAI is responsible for the creation of ChatGPT and recently announced a new product, which it calls GPTs, that lets companies build custom versions of its AI chatbot without any coding.
However, the company has been in turmoil after the ousting of the CEO and his reinstatement at breakneck speed. Let’s have a look at what actually happened.
Why was Sam Altman fired?
The board of OpenAI released a statement on Friday removing Altman as CEO and naming Mira Murati, the company’s chief technology officer, as interim CEO.
The board of directors that made this decision consisted of OpenAI chief scientist Ilya Sutskever and independent directors Adam D’Angelo, CEO of Quora; Tasha McCauley, CEO of GeoSim Systems; and Helen Toner of the Georgetown Center for Security and Emerging Technology.
The board said this decision came after a “deliberative review process” which concluded that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”
This is an unusual move from the board, and a drastic one for a growing company that had just released a product promising to significantly change the AI market.
It is not yet clear what Altman was not “candid” about with the board.
It was also announced that Greg Brockman, president of OpenAI, would be stepping down from the board but retaining his position in the company. Following the statement going public, Brockman announced on X that he had quit.
The board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions
to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward.”
How did Altman get reinstated?
Microsoft, which has an extended partnership with OpenAI, announced on Monday that it would be bringing both Altman and Brockman under its wing in a new advanced AI research team.
News of Friday’s ousting was not taken well by those within OpenAI. According to Reuters, many of the company’s employees were blindsided, only learning of the change in leadership through OpenAI’s public-facing blog.
In response to the news, 730 of the 770 staff at OpenAI
signed a letter saying they would resign from their current positions and join Altman and Brockman at Microsoft.
With the majority of the talent in the company threatening to leave, the board and shareholders were likely put under immense pressure. The company itself could have collapsed.
Even board member Sutskever signed the letter, and posted on X expressing his regret over the decision to oust Altman: “I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.”
Then on Wednesday OpenAI announced on X that it had reached an “agreement in principle” for Altman to return as CEO. This is accompanied by a shakeup of the board, with only D’Angelo returning. Bret Taylor, former co-CEO of Salesforce, joined as chair, and Larry Summers, former US secretary of the treasury, as a member.
Is OpenAI in trouble?
The dust has not settled on this situation and there still seems to be a fair amount of panic and discord surrounding the events of the last week.
For some, this has only served to highlight the instability of ventures like OpenAI.
David Pakman, head of venture investment at CoinFund, has argued: “This episode shows us, once again, the weakness of the centralised Web 2.0 model. Reportedly more than 10,000 companies were building on top of OpenAI and the company may have evaporated in 48 hours because of human (not technical) fallibility. Decentralised approaches are better to build long-term stable developer platforms that can’t evaporate over an argument.”
There is speculation across the internet over whether the board or Altman was trying to protect the ethical AI goals set out in OpenAI’s charter; however, nothing has been confirmed.
What is certain is that such chaos and instability is not a good look for a company that has already stirred up considerable fear of AI since the release of ChatGPT. AI is a tricky topic, one for which many are calling for greater oversight, and there will be demand for more transparency over what actually happened here.