AI Safety Summit 2023: Here’s everything you need to know

Author: Sophia Bell, Group Editor, Connectivity

10 November 2023

Image: Kirsty O'Connor / No 10 Downing Street

On 1-2 November, Bletchley Park, the historic home of WWII codebreakers, played host to the first-ever global conference on AI. Here, Connectivity’s Group Editor breaks down what was agreed upon, what it fell short on, and what's coming next for the world of artificial intelligence.

The AI Safety Summit brought together approximately 120 representatives from leading AI companies, academia and civil society groups from around the world. This landmark event sought to address crucial issues surrounding artificial intelligence and its potential impact on society.

More than 25 countries, including the US and China, along with the EU, signed the historic ‘Bletchley Declaration’. This emphasised the necessity for countries to collaborate and establish a common approach to overseeing AI.

“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” the agreement said. 

“We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all.”

What were the Summit's successes?

By bringing the US and China together to pursue a common goal, the UK achieved what some have considered a diplomatic triumph. The Summit also marked the first time that China had met with Western governments to discuss AI-related issues.

China's Vice Minister of Technology, Wu Zhaohui, pledged to contribute to an “international mechanism [on AI], broadening participation, and a governance framework based on wide consensus delivering benefits to the people, and building a community with a shared future for mankind”.

As part of the event, the UK announced the launch of the world's first AI Safety Institute. Its mission is to enhance public sector capabilities for safety testing and AI research, addressing risks from social harms (such as bias and misinformation) through to the most extreme AI-related risks, including the potential for loss of control.

Safety is of paramount importance in the development of this technology. A recent survey from global foundation Luminate found that four in 10 people in the UK believe that AI is being developed and deployed at an unsafe pace. The survey also revealed that a majority, 71 percent, support slowing down its deployment to prioritise safety.

The UK’s AI Safety Institute plans to collaborate with companies in testing their frontier AI models prior to deployment and to partner with other countries' AI Safety Institutes, including the US.

Image: Ben Dance / No 10 Downing Street

The UK Government also unveiled its £225 million investment in a new AI supercomputer to be built at the University of Bristol. Named Isambard-AI, after the 19th-century British engineer Isambard Kingdom Brunel, this supercomputer is expected to be 10 times faster than the UK's current fastest machine and, alongside another supercomputer called Dawn, is intended to drive breakthroughs in fusion energy, healthcare and climate modelling.

During the conference, King Charles III delivered a video message, describing artificial intelligence as “one of the greatest technological leaps in the history of human endeavour” that could help “hasten our journey towards net zero and realise a new era of potentially limitless clean green energy”.

However, he warned: “We must work together on combatting its significant risks too.”

Does AI really pose an existential threat?

Artificial intelligence has increasingly made headlines in recent months over the alleged existential threat it poses to humanity. As a result, much of the Summit's discussion unsurprisingly revolved around what is termed ‘frontier’ AI – that is, highly advanced AI models with the potential to outstrip human control, particularly in domains like cybersecurity and biotechnology.

“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” the Bletchley Declaration said.

What criticism did the Summit face?

Despite its achievements, critics nonetheless expressed concern about the Summit's focus on hypothetical existential threats rather than addressing more immediate and pressing issues, such as AI's impact on job displacement, the development of deepfakes, and its use by insurance companies to determine risk.

An open letter signed by representatives from 11 civil society groups highlighted the exclusion of various perspectives and the need for more inclusive discussions on the full range of AI risks.

“While potential harms of ‘frontier’ models may have motivated the Summit, existing AI systems are already having significant harmful impacts on people’s rights and daily lives,” the letter argued.

Image: Kirsty O'Connor / No 10 Downing Street

The letter further warned against concentrating the power to decide on AI regulation in the hands of a few industry leaders and urged the need for a more democratic approach: “It is critical that AI policy conversations bring a wider range of voices and perspectives into the room, particularly from regions outside of the Global North,” it said.

“Framing a narrow section of the AI industry as the primary experts on AI risks further concentrating power in the tech industry, introducing regulatory mechanisms not fit for purpose, and excluding perspectives that will ensure AI systems work for all of us.”

Such fears were fanned by tech billionaire Elon Musk, who, during his 50-minute conversation with Rishi Sunak, claimed that artificial intelligence will make the human workforce obsolete.

"We are seeing the most disruptive force in history here," he said. "There will come a point where no job is needed – you can have a job if you want one for personal satisfaction, but AI will do everything.

"It's both good and bad – one of the challenges in the future will be how do we find meaning in life."

Sunak’s view of AI, however, was more mixed. Whilst he warned that the technology has the potential to be as lethal as a pandemic or a nuclear war, he also said he doesn’t believe it is going to replace human workers. Instead, he claimed, it can be used more as a “co-pilot”, helping “almost everybody” to “do their jobs better, faster, quicker”. 

Special relationship: Storm in a teacup?

While the Summit fostered international collaboration, it also laid bare the underlying international tensions and competition surrounding AI development.

Prime Minister Rishi Sunak celebrated the Summit as a British triumph, emphasising the UK's leadership in shaping a safe and beneficial AI future. However, the US, through Secretary of Commerce Gina Marie Raimondo, has potentially called this into question by declaring its intention to launch a separate AI Safety Institute. Involving more than 25 government agencies, it will seek to address today’s AI-related issues, such as privacy, security, discrimination, and disinformation.

Image: Simon Dawson / No 10 Downing Street

Raimondo explained that the US institute, managed by the Department of Commerce and the US National Institute of Standards and Technology (NIST), would not be part of the global AI safety initiative envisioned by the UK. However, a formal partnership between the UK and the US was established, with opportunities for the two institutes to work together in future.

During the Summit, US Vice President Kamala Harris held a separate press conference in London, emphasising her belief that it will be the US that will lead the way in AI. “Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way that no other country can.”

What’s next?

The survey from Luminate revealed that 41 percent of people in the UK possess a limited understanding of AI technology. It was therefore crucial for the Summit to serve as a platform for global dialogue and help raise awareness of the risks that AI poses, as well as the opportunities it presents.

The greatest achievement of the AI Summit was undoubtedly the Bletchley Declaration, which signified a general international consensus on the need for collective solutions. Attaining such consensus in today's politically charged climate is certainly an achievement in itself.

Nevertheless, while the Summit effectively acknowledged AI's potential threats, it arguably fell short of putting comprehensive, practicable strategies in motion to tackle them and ensure that the technology is used for the benefit, rather than the detriment, of the international community.

Gary Marcus, of the Center for the Advancement of Trustworthy AI, applauded the success of the Bletchley Declaration but argued that more needs to be done: “We urgently need to move past position statements – there have been a lot of those in recent months – and into concrete proposals about what to do next.”

The next summit will take place in South Korea in six months, followed by another in France in a year’s time. It’s clear that future editions of the Summit will need to pivot towards actionable proposals if regulation is to keep pace with innovation.
