
Musk, Scientists Call for Halt to AI Race Sparked by ChatGPT

Are tech companies moving too fast with the rollout of powerful artificial intelligence (AI) technology? That’s the question a group of prominent computer scientists and tech industry notables, including Elon Musk and Apple co-founder Steve Wozniak, is raising in a petition calling for a 6-month pause to consider the risks. The petition, published Wednesday, is a response to San Francisco startup OpenAI’s recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT.

The letter warns that human-competitive AI systems pose “profound risks to society and humanity” – from flooding the internet with disinformation and automating away jobs to more catastrophic future risks. It also says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” The petition calls on all AI labs to pause the training of systems more powerful than GPT-4 for the next 6 months, and governments to institute a moratorium if this cannot be done quickly.

The petition was organized by the nonprofit Future of Life Institute and signed by AI pioneers such as Yoshua Bengio, Stuart Russell, and Gary Marcus. Other signatories include Elon Musk, Andrew Yang, and Rachel Bronson of the Bulletin of the Atomic Scientists. Surprisingly, Emad Mostaque, CEO of Stability AI, maker of the AI image generator Stable Diffusion, also signed.

OpenAI, Microsoft, and Google have yet to respond to requests for comment. Meanwhile, James Grimmelmann of Cornell University says the letter is “vague and doesn’t take the regulatory problems seriously” and that it is “deeply hypocritical” for Elon Musk to sign given Tesla’s fight against accountability for its self-driving cars.

Gary Marcus, a professor emeritus at NYU, explains that while the letter raises the specter of nefarious AI far more intelligent than what actually exists, he’s more worried about “mediocre AI” that’s widely deployed.

The debate continues over whether tech companies are moving too fast with the rollout of powerful AI technology and the risks it poses to society. AI pioneers and tech industry notables are calling for a 6-month pause to consider these risks, but the letter has its skeptics. Governments are also stepping in to regulate high-risk AI tools, with the UK recently publishing its own approach.

Key Points:
• A group of prominent computer scientists and tech industry notables, including Elon Musk and Steve Wozniak, are calling for a 6-month pause to consider the risks of powerful AI technology.
• The letter warns that human-competitive AI systems pose “profound risks to society and humanity” – from flooding the internet with disinformation and automating away jobs to more catastrophic future risks.
• The petition was organized by the nonprofit Future of Life Institute and signed by AI pioneers such as Yoshua Bengio and Gary Marcus.
• James Grimmelmann of Cornell University says the letter is “vague and doesn’t take the regulatory problems seriously” and that it is “deeply hypocritical” for Elon Musk to sign given Tesla’s fight against accountability for its self-driving cars.
• Gary Marcus, a professor emeritus at NYU, explains that while the letter raises the specter of nefarious AI, he’s more worried about “mediocre AI” that’s widely deployed.
• Governments are also stepping in to regulate high-risk AI tools, with the UK having recently released its approach.
