Sam Altman was reinstated as OpenAI’s chief executive on Tuesday, capping five days of chaos at the artificial intelligence company. OpenAI’s board of directors, which ousted Altman last week, was reshuffled as part of an agreement between the company and the chief executive that paved the way for his return. The OpenAI debacle threw a spotlight on the ongoing debate over how quickly and how carefully AI can be deployed. It could also mark a shift in how OpenAI functions: investors like Microsoft might have more sway, concerns about the potential risks that AI poses could take a backseat, and the focus could turn to rapid commercialisation of the technology.

What happened?

On November 17, OpenAI’s board dismissed Altman over a video call, accusing him of being “not consistently candid in his communications with the board”. Greg Brockman, the company’s president and co-founder, was stripped of his board seat and then resigned in solidarity with Altman. About three days later, Microsoft announced that it had hired both Altman and Brockman to lead a “new advanced AI research team”. The news sparked an uproar among OpenAI’s employees. On Monday, nearly all 800 of them signed an open letter threatening to quit and follow Altman to Microsoft unless the board members who dismissed the chief executive reinstated him and then resigned.

The following day, OpenAI announced Altman’s return. The company said board members Helen Toner (an AI safety researcher at Georgetown University), Ilya Sutskever (an AI researcher and company co-founder), and Tasha McCauley (a technology entrepreneur) had been removed. The only holdover was Adam D’Angelo (head of the Q&A forum Quora). Bret Taylor (a former co-CEO of the software firm Salesforce) and Larry Summers (an economist who served as Bill Clinton’s treasury secretary) would join the board. OpenAI also indicated that it might expand the board to nine members; Microsoft was expected to get a seat, and Altman might get his back.

Why did it happen?

Beyond its official statement, OpenAI didn’t give many details on why it booted out Altman. Media reports, however, suggested that the dismissal was the culmination of a growing rift between Altman and other members of the board. OpenAI broke into the mainstream after the launch of the chatbot ChatGPT in November last year. The chatbot’s success made OpenAI one of the foremost tech companies in the world, and Altman became the face of the generative AI revolution. But the success brought a new set of challenges. Some board members, including Sutskever, were increasingly worried about the potential dangers that the company’s technology posed to society. They also felt that Altman wasn’t focusing enough on these risks and was more concerned with building OpenAI’s business.

These anxieties were exacerbated when the company recently made a breakthrough that helped its AI models get better at solving problems without additional data. The advance so alarmed some OpenAI researchers that they wrote to the company’s board before Altman’s dismissal, warning that it could threaten humanity, Reuters reported. Besides this, just weeks before his ouster, Altman had an argument with Toner over her recent research paper, which compared the approaches to safety taken by OpenAI and its rival Anthropic. “Mr Altman complained that the research paper seemed to criticise OpenAI’s efforts to keep its AI technologies safe while praising the approach taken by Anthropic,” according to a report by The New York Times.
Toner defended her paper, saying it “analysed the challenges that the public faces when trying to understand the intentions of the countries and companies developing AI. But Mr Altman disagreed,” the report added. Following the confrontation, Altman tried to push Toner out, but in the end he was the one to leave, as Sutskever, McCauley, D’Angelo, and Toner joined forces to sack him. Another source of bad blood among the board members was their failure to agree on candidates to fill the board’s vacant seats.

What happens now?

The shake-up of OpenAI’s board will likely result in efforts to maximise the commercial potential of AI. Although the views of the new board members, Taylor and Summers, on the technology aren’t publicly known, they seem to be more understanding of Altman’s “empire-building ambitions” than their predecessors, according to The Economist. The company is also expected to double down on the development of GPT-5, its most powerful model yet, whose progress has slowed in recent months.

A faster evolution of OpenAI’s technology will further heat up competition in the generative AI market. Google, Amazon, Meta, and start-ups like Anthropic have already introduced their own AI models and plan to launch more such products. Experts, however, worry that this could lead to the release of products whose behaviour, uses, and misuses aren’t fully understood, which could be detrimental to society. The safety concerns could compel politicians to take stringent steps to regulate AI tools. For instance, last month, US President Joe Biden signed an executive order that requires leading AI developers to share safety test results and other information with the government.