OpenAI invites MIT researcher to head its ‘Preparedness’ risk assessment unit
MIT’s Aleksander Madry and his team will monitor and prevent AI-borne threats
OpenAI, the Microsoft-backed US artificial intelligence (AI) organisation, announced a new team tasked with scrutinising and evaluating AI models to safeguard against potential “catastrophic risks.”
The team, named “Preparedness,” will be headed by Aleksander Madry, who serves as the director of MIT’s Center for Deployable Machine Learning. (Madry updated his LinkedIn profile in May to reflect his role as “head of Preparedness” at OpenAI.) Preparedness will focus on monitoring, predicting, and safeguarding against potential threats posed by future AI systems, including their capacity to manipulate and deceive humans (as in phishing attacks) and their ability to generate malicious code.
Preparedness is tasked with studying a range of risk categories, some of which may appear more speculative than others. For instance, in a blog post, OpenAI highlights “chemical, biological, radiological, and nuclear” threats as particularly significant areas of concern in relation to AI models.
OpenAI CEO Sam Altman is well known for his concerns about AI, at times voicing fears, whether shaped by public perception or genuine conviction, that the technology could lead to human extinction. Yet the prospect of OpenAI actually devoting resources to studying scenarios straight out of dystopian science fiction novels is a more substantial step than I had expected, to be frank.
The company is also open to examining “less apparent” and more practical areas of AI risk. In parallel with the launch of the Preparedness team, OpenAI is soliciting risk-related research ideas from the community, with a $25,000 prize and the opportunity to join the Preparedness team on offer for the top ten submissions.