OpenAI Establishes Preparedness Team Amid Evolving AI Landscape

By Lisa Wong

OpenAI has taken a significant step in addressing global risks associated with artificial intelligence. To further this work, in 2023 the organization created a specialized preparedness unit. The team's mandate is to study and model catastrophic risks, highlighting how cybersecurity threats and nuclear dangers could emerge from the rapid pace of AI innovation.

The recent update to OpenAI’s Preparedness Framework reflects the organization’s commitment to safety and risk management. The framework outlines the company’s approach to identifying and mitigating risks associated with frontier capabilities that could lead to severe harm. Notably, the preparedness team will examine risks such as phishing attacks and potential nuclear threats, underscoring the importance of proactive measures in safeguarding against such dangers.

OpenAI has also signaled that its safety standards may evolve: if a competing AI lab releases a comparable but higher-risk model without sufficient safety protections in place, OpenAI may adjust its own requirements in response to the developing competitive landscape. That flexibility reflects the organization's effort to uphold its commitment to safety even as new and unexpected challenges continue to arise.

Former Head of Preparedness Aleksander Madry has been reassigned; just a few months after his team was formed, he will now focus on AI reasoning. Madry's transition is another signal of a strategic pivot within OpenAI as the organization looks to strengthen its expertise and proprietary models in advanced AI reasoning.

Sam Altman, OpenAI's CEO, has likewise acknowledged growing worries over generative AI chatbots, particularly how these technologies may impact mental health. He stressed the need for future capability development that improves defensive cyber posture without being open to abuse by bad actors.

“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” – Sam Altman

Altman referenced the challenges posed by AI advancements, noting that they are "starting to present some real challenges." OpenAI appears to be aware of the consequences of its technology; the team is not resting on its laurels and is continually working to mitigate these concerns through its preparedness efforts.

A major industry event is coming to San Francisco October 13-15, and it should provide useful clues about OpenAI's overall approach and direction. As the organization navigates the complexities of AI technology, its preparedness team will play a pivotal role in ensuring safety and resilience against potential threats.