‘EVERYONE on EARTH WILL DIE,’ Top AI Researcher Warns

Humanity is unprepared to survive an encounter with a much smarter artificial intelligence, Eliezer Yudkowsky says

Shutting down the development of advanced artificial intelligence systems around the globe and harshly punishing those violating the moratorium is the only way to save humanity from extinction, a high-profile AI researcher has warned.

Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute (MIRI), wrote an opinion piece for TIME magazine on Wednesday explaining why he didn’t sign a petition calling on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” the multimodal large language model released by OpenAI earlier this month.

Yudkowsky argued that the letter, signed by the likes of Elon Musk and Apple co-founder Steve Wozniak, was “asking for too little to solve” the problem posed by the rapid and uncontrolled development of AI.

“The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” Yudkowsky wrote.


Surviving an encounter with a computer system that “does not care for us nor for sentient life in general” would require “precision and preparation and new scientific insights” that humanity lacks at the moment and is unlikely to obtain in the foreseeable future, he argued.

“A sufficiently intelligent AI won’t stay confined to computers for long,” Yudkowsky warned. Because it is already possible to email DNA strings to laboratories that produce proteins on demand, he explained, an AI would likely be able “to build artificial life forms or bootstrap straight to postbiological molecular manufacturing” and get out into the world.

According to the researcher, an indefinite and worldwide moratorium on new major AI training runs has to be introduced immediately. “There can be no exceptions, including for governments or militaries,” he stressed.

International deals should be signed to place a ceiling on how much computing power anyone may use in training such systems, Yudkowsky insisted.

“If intelligence says that a country outside the agreement is building a GPU (graphics processing unit) cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike,” he wrote.

The threat from artificial intelligence is so great that it should be made “explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange,” he added.

By Rt.com

Republished by The 21st Century

The views expressed in this article are solely those of the author and do not necessarily reflect the opinions of 21cir.com