In late 2023, the sudden firing of Sam Altman sent shockwaves through the AI industry. Now, a new report from The New Yorker reveals the tensions that had been simmering below the surface at OpenAI for months. At the heart of the board’s decision was a growing concern that the company was prioritizing product deployment and revenue over safety and alignment.
At a December 2022 board meeting, Altman reportedly told directors that GPT-4's new features had passed safety checks. Board member Helen Toner found otherwise: some controversial features, such as fine-tuning the model for personal use, had not actually been approved.
The concern deepened when Tasha McCauley, leaving the meeting, was discreetly informed of a safety breach in India: Microsoft had released an early version of ChatGPT there without completing the required safety review. "It just was kind of completely ignored," said Jacob Hilton, a former OpenAI researcher. The revelation suggested that even hours of briefings had not adequately disclosed potential risks to the board.
Inside OpenAI, some researchers sensed a shift in priorities. Carroll Wainwright described it as a “continual slide toward emphasising products over safety.” Former executive Jan Leike warned that the organization was “going off the rails on its mission,” placing product and revenue above alignment and safety.
The episode holds a broader lesson for the AI industry: moving fast matters, but safety and ethics cannot be sidelined. The turmoil inside OpenAI is a warning to every company developing AI that balancing innovation with responsibility is harder than it looks, and that confidence in leadership ultimately depends on how well that balance is maintained.