Tuesday, May 21, 2024

OpenAI dissolves safety team - Martin Crowley, AI Tool Report

Last week, OpenAI co-founder Ilya Sutskever and key AI researcher Jan Leike both quit OpenAI’s “Superalignment” team, which was focused on implementing long-term AI safety protocols. Leike openly expressed his frustration with the company’s priorities, claiming that “safety culture and processes have taken a backseat to shiny products.” Following their departures, OpenAI dissolved the AI safety team for good. In response to Leike’s criticism, OpenAI CEO Sam Altman and co-founder Greg Brockman publicly acknowledged that OpenAI has “a lot more to do” and needs “to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities.”