Geoffrey Hinton, Godfather of AI, Resigns from Google to Speak Out Against AI Dangers
Geoffrey Hinton, widely regarded as the godfather of AI, recently resigned from Google to voice his concerns about the dangers of generative AI products like OpenAI’s ChatGPT and Google’s Bard. Hinton, a professor at the University of Toronto, pioneered the neural network technology that companies now use to train AI products like ChatGPT. However, he is no longer as optimistic about the future of AI as he once was, and he worries about the immediate and long-term risks that AI poses to society.
In a recent interview with The New York Times, Hinton’s illustrious career was briefly recapped. He began working on neural networks in 1972 as a graduate student at the University of Edinburgh and was a professor at Carnegie Mellon University in the 1980s. However, he moved to Canada to avoid having AI technology involved in weapons. In 2012, Hinton and two of his students built a neural network that could analyze thousands of photos and learn to identify common objects. One of those students, Ilya Sutskever, became the chief scientist at OpenAI, the company behind ChatGPT, in 2018.
Google purchased the company that Hinton and his two students had founded for $44 million. Hinton then spent about a decade at Google refining AI products. Now, however, he has left the company in order to speak out about the potential hazards of generative AI.
The sudden emergence of ChatGPT and Microsoft’s rapid integration of the technology into Bing sparked a new competition with Google. While Geoffrey Hinton did not welcome this competition, he chose not to speak about the dangers of unregulated AI while he was still a Google employee.
Hinton believes that tech giants are now in an AI arms race that may be difficult to halt. His immediate concern is that ordinary people will no longer be able to discern what is true as generative photos, videos, and text from AI products inundate the web.
Moreover, Hinton fears that AI may replace human workers in jobs that involve repetitive tasks. Looking ahead, he worries that AI may eventually generate and run its own code, which could be disastrous for humanity.
“The idea that this technology could actually become more intelligent than humans was once believed by only a few people. But most thought it was still far off, and I believed it was 30 to 50 years or even longer before that would happen. Obviously, I no longer think that,” said the former Google employee.
Hinton made it clear on Twitter that he did not leave Google in order to criticize his employer of the past decade; rather, he left so he could discuss the dangers of AI freely. He acknowledged that Google had so far acted responsibly in AI matters.
Hinton expressed his hope that tech companies would act responsibly and take measures to prevent AI from becoming uncontrollable, as he told The Times. However, regulating the AI space might be more difficult than anticipated, given that some companies may be developing the technology behind closed doors.
The former Google employee stated in the interview that he found solace in the “typical excuse: if I hadn’t done it, someone else would have.” Hinton also used to paraphrase Robert Oppenheimer when asked how he could work on technology with the potential to be so dangerous: “When you see something that is technically sweet, you go ahead and do it.”