OpenAI Replaces Dissolved ‘Superalignment Team’ With Safety and Security Committee Led by Sam Altman
Altman has been criticized for his supposed mishandling of AI safety.

Just a few weeks ago, OpenAI scrapped its Superalignment team, a task force dedicated to controlling AI systems smarter than human intelligence, after less than a year of operations. In a statement to Bloomberg, the company said that it would be “integrating the group more deeply across its research efforts to help the company achieve its safety goals.”
The apparent replacement for the Superalignment team is OpenAI’s Safety and Security Committee, helmed by CEO Sam Altman and directors Bret Taylor (Chair), Adam D’Angelo and Nicole Seligman.
Announced on OpenAI’s website, the Committee’s first task is to “evaluate and further develop OpenAI’s processes and safeguards” – spanning election integrity, abuse monitoring and security measures – over the next three months. After this period, the Committee will share its recommendations with the company’s board. The board will review them and then publish an update on the recommendations it has chosen to adopt.
The move to put Altman in charge of the Committee is a controversial one, considering the CEO was temporarily ousted for his alleged mishandling of AI safety measures, among other concerns.
This month alone, OpenAI co-founder Ilya Sutskever shared that he would be leaving the company after a decade. Jan Leike, a lead safety executive who co-headed Superalignment, also resigned and announced he would be moving to rival Anthropic.
In a series of tweets, Leike painted a bleak picture of OpenAI’s grasp on AI safety. Ahead of his resignation, he recalled that he had been “disagreeing with OpenAI leadership about the company’s core priorities for quite some time until we finally reached a breaking point.”
The executive said that while he believed OpenAI should be focusing on measures like security, safety, societal impact and monitoring, his team was “sailing against the wind.”
“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote. “But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI.”
— Jan Leike (@janleike) May 17, 2024
Although Sam Altman said he would prepare a longer post in response to Leike’s accusations, he has yet to do so more than ten days later.