Global leaders and industry experts have issued a warning about the potential risk of artificial intelligence (AI) technology leading to human extinction. In a concise statement endorsed by numerous specialists, including Sam Altman of OpenAI, it was emphasised that addressing the risks associated with AI should be a top global priority, on par with other significant threats like pandemics and nuclear war.
The emergence of ChatGPT last year garnered considerable attention as it showcased the ability to generate essays, poems, and conversations from minimal prompts, subsequently attracting substantial investment in the field. Sceptics and insiders, meanwhile, have expressed worries ranging from biased algorithms to the possibility of mass job displacement as AI-driven automation becomes more prevalent in daily life.
While the recent statement, which was posted on the website of the US-based non-profit organisation Center for AI Safety, did not go into specifics about the existential threat posed by AI, several signatories, including AI pioneer Geoffrey Hinton, have previously expressed similar concerns. Their chief concern revolves around the concept of artificial general intelligence (AGI), loosely defined as the point at which computers can perform a wide range of tasks and write their own programming. Experts worry that losing human control over AGI could have disastrous consequences for humanity.
The statement has garnered support from numerous academics and specialists at prominent companies such as Google and Microsoft. It comes on the heels of a call made two months ago by billionaire Elon Musk and others to pause the development of such technology until its safety could be adequately demonstrated.