Geoffrey Hinton says ‘bad actors’ could harness artificial intelligence for ‘bad things’
Turing Award-winning scientist Geoffrey Hinton is credited as a foundational figure in the development of artificial intelligence (AI). But amid a de facto arms race in Silicon Valley, with Google and Microsoft competing to perfect the technology, he has warned of the risks his life’s work may pose to humanity.
Hinton resigned last month from Google, where he had spent much of the past decade developing artificial-intelligence programs.
His research underpins generative AI software such as ChatGPT and Google Bard, as tech-sector giants dip their toes into a new scientific frontier, one they expect to form the basis of their companies’ futures.
Hinton’s motivation for leaving Google, he told the New York Times in a lengthy interview published on Monday, was to be able to speak freely about a technology he now views as a danger to mankind.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told the US newspaper.
Public-facing chatbots such as ChatGPT have provided a glimpse into Hinton’s concern.
While some view them as little more than internet novelties, others have warned of their potential to spread online misinformation and of their impact on employment.
The latest version of ChatGPT, released in March by San Francisco’s OpenAI, prompted the publication of an open letter signed by more than 1,000 tech-sector leaders – including Elon Musk – to highlight the “profound risks to society and humanity” that the technology poses.
And while Hinton didn’t add his signature to the letter, his stance on the potential misuse of AI is clear: “It’s hard to see how you can prevent the bad actors from using it for bad things.”
Hinton maintains that Google has acted “very responsibly” in its stewardship of artificial intelligence, but he says the technology’s proprietors may eventually lose control of it.
This could lead to a scenario, he says, in which false information, photos and videos are indistinguishable from real ones, leaving people not knowing “what is true anymore.”
“The idea that this stuff could actually get smarter than people – a few people believed that,” Hinton told the NYT. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Published by Rt.com
Republished by The 21st Century