AI could outsmart humans and poses a risk of extinction on par with pandemics and nuclear war, experts warn.

On Tuesday, top executives from Microsoft and Google joined scientists in issuing a fresh warning about the dangers that artificial intelligence poses to humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.

The declaration was signed by hundreds of influential figures, including Geoffrey Hinton, the computer scientist widely known as a godfather of artificial intelligence, and Sam Altman, CEO of ChatGPT maker OpenAI.

Concerns that artificial intelligence systems could outsmart people and run out of control have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT. Countries around the world are rushing to draw up regulations for the emerging technology, with the European Union setting the pace through its AI Act, which is expected to be passed later this year.

According to Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety, which spearheaded the initiative, the latest warning was purposefully brief — just one sentence — to cover a broad coalition of scientists who might not agree on the most likely risks or the best solutions to prevent them.

People across top universities and a wide range of professions are concerned about this and see it as a global priority, according to Hendrycks. Many had been keeping those worries to themselves, he said, and had to be persuaded to speak openly about the topic.

Elon Musk was among the more than 1,000 researchers and technologists who signed a longer letter earlier this year calling for a six-month pause on AI development, warning that it posed “profound risks to society and humanity.”

That letter was a response to OpenAI’s release of its GPT-4 model, but executives at OpenAI, Microsoft, and rival Google declined to sign it and rejected the call for a voluntary industry pause.

In contrast, the most recent declaration was supported by Microsoft’s chief technology and science officers, as well as Demis Hassabis, CEO of Google’s DeepMind AI research lab, and two Google executives in charge of the company’s AI policy initiatives. The declaration doesn’t offer any concrete solutions, but some signatories, including Altman, have proposed an international regulator modeled on the U.N. nuclear agency.

Some critics have argued that the apocalyptic warnings about existential risk coming from the makers of AI products serve to exaggerate the capabilities of those products and distract from calls for more immediate regulation of the real-world problems they already cause.

There is no reason, according to Hendrycks, why society cannot manage the “urgent, ongoing harms” of products that generate new text or images while also beginning to address the “potential catastrophes around the corner.”

He compared it to nuclear scientists in the 1930s urging caution even though “we haven’t quite developed the bomb yet.”

No one is suggesting that GPT-4 or ChatGPT today poses these sorts of dangers, Hendrycks added. “Instead of trying to deal with disasters after they happen, we’re trying to address these risks before they happen.”

Specialists in climate change, pandemics, and nuclear science also signed the statement. Among them is the author Bill McKibben, who sounded the alarm on climate change in his 1989 book “The End of Nature” and has since warned about artificial intelligence and related technologies.

He wrote in an email to The Daily Beast on Tuesday, “Given our failure to heed the early warnings about climate change 35 years ago, it feels to me as if it would be smart to actually think this one through before it’s all done.”

One scientist who helped organize the statement said he used to be ridiculed for his concerns about the existential risk posed by AI, even though machine learning research has advanced over the past decade at a pace that has surprised many.

Scientists are reluctant to speak up because they don’t want to be perceived as advocating for AI “consciousness or AI doing something magical,” according to David Krueger, an assistant professor of computer science at the University of Cambridge. However, he argued that AI systems don’t need to be self-aware or have their own objectives in order to be dangerous to humans.

“I’m not devoted to a specific kind of risk. I believe there are many different ways for things to go wrong,” said Krueger. The risk that has historically generated the most controversy, he said, is extinction, specifically from AI systems that spiral out of control.
