Sam Altman signs call for AI regulation
Sam Altman, CEO of OpenAI, the developer of ChatGPT, has signed a statement calling for comprehensive regulation of artificial intelligence (AI). He is joined by 376 other signatories, including employees of major companies in the AI industry as well as numerous scientists and professors. Among them is Demis Hassabis, head of Google DeepMind.
The statement was published by the American non-governmental organization Center for AI Safety, whose mission is to reduce societal-scale risks from AI.
The Center for AI Safety's statement reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The statement is accordingly understood as a warning.
Dangers of AI
Since OpenAI released the AI language model ChatGPT last November, competition in the AI industry has been fierce.
At the same time, debate over the negative consequences of AI is intensifying. Among other concerns, advances in AI systems could make many jobs currently done by people obsolete.
As Sam Altman told the U.S. Congress, "If this technology goes wrong, it can go quite wrong."
About the call
Back in March 2023, Elon Musk, CEO of Tesla and co-founder of OpenAI, signed an open letter from the Future of Life Institute. That letter called for a six-month pause in AI research.
This time, no research pause is being demanded; according to Dan Hendrycks, director of the Center for AI Safety, that was deliberate. Disagreements over the specific dangers or possible solutions are meant to stay in the background. Instead, the hope is that experts in the AI industry will voice their concerns publicly rather than only in private. The statement is also worded more generally than the earlier letter, Hendrycks said.
AI startups view the statement critically, fearing that regulation could limit their ability to develop their technology further.