In recent months, many tech experts have raised alarm bells about the potential dangers of artificial intelligence (AI). As interest in the AI field continues to grow worldwide, some experts fear that the technology could pose an existential threat to humanity, potentially leading to the destruction of civilization.
Elon Musk, CEO of Twitter, shared these concerns. In a recent interview with Tucker Carlson, Musk stated that AI was more dangerous than a “mismanaged aircraft” and could potentially lead to “civilization destruction.”
However, Yann LeCun, Meta’s Chief AI Scientist, holds a different perspective. He believes that the assumption of AI as an existential threat is unfounded and “completely false.”
During a podcast with venture capitalist Harry Stebbings, LeCun was quoted by Business Today as saying, “It makes an assumption which Elon and some other people may have become convinced of by reading Nick Bostrom’s book ‘Superintelligence’ or reading you know some of Eliezer Yudkowsky’s writing.”
LeCun went on to say that the idea of AI posing an existential threat rests on a fallacy known as the “hard take-off.” According to this hypothesis, once a super-intelligent AI system is launched, it will continually improve itself, eventually surpass human intelligence, and potentially destroy the Earth.
“That is completely absurd because there is no process in the real world that is exponential for an extended period of time,” LeCun explained. Such systems, he argued, would need to command all of the world’s resources: “They would have to be given unlimited power and agency.”
According to the AI expert, just because AI systems become more intelligent does not mean they will want to control people.
“It has nothing to do with intelligence. They must be designed in such a way that they want to take over. Systems will not take over simply because they are intelligent. Even within the human species, the most brilliant individuals do not seek to control others,” LeCun added.
Other experts in the field of AI have also expressed concerns about its potential dangers.
OpenAI CEO Sam Altman has admitted to being wary of his own creation, calling for AI regulation in testimony before US lawmakers. Sundar Pichai, CEO of Google, has emphasized the responsible use of AI and acknowledged that the technology’s rapid advancement keeps him concerned.
Another such example is Geoffrey Hinton, known as the “Godfather of AI,” who resigned from his position at Google to speak openly about the dangers of AI.