OpenAI Establishes Team to Examine Potential Catastrophic AI Risks

OpenAI, a leading player in the artificial intelligence (AI) landscape, has announced the formation of a team dedicated to assessing and studying potential catastrophic risks associated with AI. The initiative, known as ‘Preparedness,’ will focus on tracking, evaluating, and mitigating these risks.


Introducing the Preparedness Team

The Preparedness team will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, who recently joined OpenAI. The team’s primary tasks include tracking and forecasting hazards posed by future AI systems. These dangers range from AI systems’ capacity to deceive and manipulate humans, as in phishing attacks, to their ability to generate malicious code.

Assessing AI Threats

Interestingly, the Preparedness team has been tasked with investigating a variety of risk categories, some of which may appear speculative. For instance, OpenAI has expressed concern over “chemical, biological, radiological, and nuclear” threats as they relate to AI models. OpenAI CEO Sam Altman has been vocal about his apprehensions regarding AI, often suggesting that unchecked AI development could lead to human extinction.

That OpenAI is prepared to devote resources to studying scenarios that seem pulled straight from dystopian science fiction may come as a surprise, even to those familiar with Altman’s views. However, OpenAI has also expressed its willingness to explore more grounded, “less obvious” areas of AI risk.

Engaging the Community for AI Safety

Alongside the launch of the Preparedness team, OpenAI is inviting ideas for risk studies from the community. The top ten submissions will be rewarded with a $25,000 prize and a potential position on the Preparedness team. OpenAI has posed a thought-provoking question, asking entrants to consider the most unique yet plausible catastrophic misuse of an AI model, given unrestricted access to OpenAI’s Whisper (transcription), Voice (text-to-speech), GPT-4V, and DALL·E 3 models.

Formulating a Risk-Informed Development Policy

Another critical task for the Preparedness team is formulating a “risk-informed development policy.” This policy will outline OpenAI’s approach to building AI model evaluations and monitoring tools, its risk-mitigation strategies, and its governance structure for oversight throughout the model development process. It will supplement OpenAI’s existing work on AI safety, covering both the pre- and post-deployment phases.

OpenAI strongly believes in the potential of AI models to benefit humanity, but it also acknowledges the increasingly severe risks they pose. As such, it is vital to ensure the necessary understanding and infrastructure are in place for the safety of highly advanced AI systems.

The Significance of Preparedness

The announcement of the Preparedness team coincides with a major U.K. government summit on AI safety. It also follows OpenAI’s earlier announcement that it would form a team to study and manage emergent “superintelligent” AI. Sam Altman and Ilya Sutskever, OpenAI’s chief scientist and a co-founder, believe that AI with greater-than-human intelligence could arrive within the next decade. Such AI would not necessarily be benevolent, underscoring the need for research into ways to control and limit it.

Establishing the Preparedness team is a significant step forward in the ongoing efforts to ensure AI’s safe and responsible development and deployment. It acknowledges the potential dangers that AI can pose and demonstrates a commitment to mitigating these risks before they become a reality.