Geoffrey Hinton, a pioneer in artificial intelligence often called the "Godfather of AI" and a 2024 Nobel Laureate in Physics, is again raising concerns about the potential dangers of AI, asserting that many large tech companies are downplaying the risks.
Hinton voiced his concerns on the "One Decision" podcast, stating that while some individuals in the AI community are aware of the dangers, that awareness isn't always reflected in their public stance. He singled out Demis Hassabis, CEO of Google DeepMind, as an exception, crediting him with understanding the serious risks and with a commitment to addressing the potential for AI to be exploited by malicious actors. According to Hinton, Hassabis "really wants to do something about it".
Hinton, who spent more than a decade at Google before resigning in May 2023 to speak more freely about AI's dangers, has expressed unease about the field's rapid advances. He emphasized that AI systems learn in ways humans don't fully understand, which poses risks if they are not properly managed. He also noted that AI capabilities are improving faster than expected.
Hinton's concerns center on the potential for AI to be misused or to operate beyond human control. He estimates a 10% to 20% risk that AI will eventually take control from humans. These concerns echo those of other industry leaders, including Google CEO Sundar Pichai, xAI's Elon Musk, and OpenAI CEO Sam Altman. Yet Hinton criticizes companies for prioritizing profits over safety, pointing to their lobbying efforts to reduce AI regulation.
Hinton also addressed the narrative surrounding his departure from Google. His resignation prompted speculation that he was protesting the company's AI strategy, but Hinton clarified that the media's portrayal was overblown: he left partly because of his age and partly out of a desire to speak more openly about the risks of AI.
Hinton has long advocated for responsible AI development, including global cooperation to establish safety limits. He suggests AI systems can be taught morality, much as one educates a child. He also believes AI companies should dedicate a significant share of their computing power to safety research, "like a third", far more than the small fraction allocated today.
Hinton's warnings extend to large-scale job displacement, particularly in "mundane intellectual labor" and routine office and administrative work, where he suggests AI could easily outperform humans. He has also voiced concern about deliberate misuse of AI by malicious actors, stating that "it is hard to see how you can prevent the bad actors from using [AI] for bad things". In 2017, Hinton called for an international ban on lethal autonomous weapons.