
Could AI really be the asteroid that wipes out humanity? These experts think so

Author : Sophia Bell, Group Editor, Connectivity

01 June 2023

(Image: Shutterstock)

A group of leading experts, including the heads of OpenAI and Google DeepMind, has issued a stark warning that AI is just as dangerous as pandemics and nuclear war in the "extinction" threat it poses to humanity.

In a joint statement published by the Center for AI Safety, experts in the field of artificial intelligence (AI) have raised concerns about the potential risks posed by AI technology.


The statement, which has garnered support from prominent figures – including the heads of OpenAI (the creator of ChatGPT), Google DeepMind and Anthropic – asserts that the risk of extinction from AI should be treated as a global priority, alongside other major threats such as pandemics and nuclear war.


The Center for AI Safety's website outlines several potential disaster scenarios associated with AI technology. These include:


• Weaponisation: AI could be turned to destructive ends, for example by repurposing tools developed for drug delivery to create chemical weapons


• Misinformation: AI-generated misinformation could “undermine collective decision-making” and societal progress


• Proxy gaming: AI systems trained with flawed objectives may pursue goals that conflict with human values, potentially sacrificing individual and societal well-being


• Enfeeblement: Increasing reliance on AI could lead to humans losing control and becoming dependent on machines, akin to the scenario depicted in the film WALL-E


• Value lock-in: Highly competent AI systems controlled by a few could perpetuate oppressive systems, shaping future values and limiting freedom


The call for action has received support from notable figures in the field of AI, including Dr Geoffrey Hinton, who previously issued a warning about the risks associated with AI, and Yoshua Bengio, Professor of Computer Science at the University of Montreal. These leading pioneers, often referred to as the "godfathers of AI," collectively won the prestigious Turing Award in 2018 for their contributions to computer science.


However, not everyone agrees with the doomsday predictions surrounding AI. 


Arvind Narayanan, a Computer Scientist at Princeton University, argues that the current capabilities of AI are insufficient for the predicted risks to materialise, and stresses the importance of addressing the more immediate ethical and societal challenges posed by the technology.


Meanwhile, Elizabeth Renieris, AI Senior Research Associate at Oxford's Institute for Ethics, told BBC News that we need to focus on the risk of widespread misinformation and increased inequality.


"Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable," she said. They would "drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide".


The discussion surrounding AI regulation and risk mitigation has gained momentum in recent months. Tesla CEO Elon Musk joined a group of experts in calling for a temporary pause in the development of large language models, citing profound risks associated with the rapidly developing technology.


In response to growing concerns, governments and industry leaders around the world have started to address the need for AI regulation. British Prime Minister Rishi Sunak has engaged in discussions with the CEOs of major AI companies to explore the establishment of appropriate safeguards. The recent G7 summit also saw the creation of a working group focused on AI.


While the debate regarding the level of risk posed by AI rages on, the Center for AI Safety's call for prioritising the mitigation of extinction risks posed by the technology has added fuel to the ongoing discussion surrounding its impact on humanity’s future. 


Whether you agree that AI poses an existential threat, or you believe that such warnings are mere fearmongering and exaggeration, one thing is clear: balancing the technology's potential benefits against the need for responsible and ethical development will be crucial to shaping a future in which AI enhances humanity rather than threatens it.




