
Top AI Executives and Experts Warn of AI's Risk of Extinction, Urge Global Priority

Date: May 31, 2023

In a significant development, a group of prominent artificial intelligence (AI) executives and experts, including OpenAI CEO Sam Altman, the CEOs of DeepMind and Anthropic, and executives from Microsoft and Google, has come together to warn of the "risk of extinction from AI." In a letter published by the nonprofit Center for AI Safety (CAIS), they called on policymakers to treat AI risks on par with those posed by pandemics and nuclear war.

The letter, co-signed by more than 350 individuals, emphasized that mitigating the risks of AI's potential impact on humanity should be a priority. The signatories, who included Geoffrey Hinton and Yoshua Bengio, renowned AI researchers and recipients of the 2018 Turing Award, as well as professors from institutions such as Harvard and Tsinghua University, urged policymakers to acknowledge AI's societal-scale risks.

However, the absence of Meta, the company where AI pioneer Yann LeCun works, from the list of signatories drew attention. CAIS director Dan Hendrycks said Meta employees had been invited to sign the letter; Meta did not immediately comment on the matter.

The publication of the letter coincided with the U.S.-EU Trade and Technology Council meeting in Sweden, where AI regulation was expected to be on the agenda. It follows an earlier warning in April from Elon Musk and a group of AI experts and industry executives, who highlighted potential risks posed by AI.

Recent advancements in AI have produced tools with applications in fields ranging from medical diagnostics to legal brief writing, but they have also raised concerns about privacy violations, misinformation campaigns, and the emergence of "smart machines" capable of autonomous decision-making.

The warning from AI experts comes just two months after the nonprofit Future of Life Institute (FLI) released a similar open letter, signed by Musk and numerous others, calling for a pause in the development of advanced AI systems due to risks posed to humanity.

Max Tegmark, president of FLI and a signatory of both letters, said the recent letter on AI's extinction risks has helped mainstream the conversation about AI's potential dangers. He expressed hope that this would facilitate a constructive and open dialogue on the subject.

Hinton, widely regarded as an AI pioneer, previously told Reuters that AI could pose a "more urgent" threat to humanity than climate change, underscoring the gravity of the issue.

The letter's publication also follows OpenAI CEO Sam Altman's criticism of draft EU AI rules as over-regulation, a stance he walked back after facing political backlash. Altman's role as a prominent figure in the AI industry has been cemented by the worldwide success of OpenAI's ChatGPT chatbot.

Altman is scheduled to meet European Commission President Ursula von der Leyen and EU industry chief Thierry Breton in the coming weeks to discuss AI-related matters.

The collective concerns expressed by top AI executives and experts about the risk of AI-driven extinction underscore the need for comprehensive, thoughtful approaches to AI regulation, ones that ensure the technology's benefits are harnessed responsibly for the betterment of society.

Disclaimer: The above article is based on available information at the time of writing and may be subject to further updates and developments.
