UK Government’s AI framework is too limited, claim experts
27 April 2023
University of Warwick academics state that the Government’s white paper, which recommends a framework for the regulation of AI, needs to address the lack of transparency around the data and methods used to create AI, rather than focusing only on AI’s application in specific areas.
Shaping AI is an ongoing international social science research study that examines public debates about artificial intelligence in four countries across a 10-year period (2012-2022).
Academics at the University of Warwick are examining research controversies in AI and analysing expert perceptions in the UK, in collaboration with partners undertaking similar research in North America and Europe.
The University of Warwick team consulted with 70 UK experts in AI and in ‘AI and society’ about what they perceive to be the most important and most overlooked controversies in AI.
The consultation identified facial recognition technology and its application in society as a major area of concern, for example its use in schools and by the police.
However, the most controversial developments identified by UK experts concern the underlying technical architecture of contemporary AI and how it is currently controlled by a limited number of powerful tech companies.
The experts said that people should be most concerned about the lack of public knowledge and oversight around the origins of the data AI is trained on; for example, where the data comes from and whether consent has been obtained to use that data.
They also highlighted the human and environmental costs of training and deploying large AI models like ChatGPT, which rely on large amounts of both freely available and copyrighted data, depend on inexpensive human labour, and are highly energy-intensive.
During a recent workshop, the University of Warwick researchers presented the results of this consultation and an analysis of the main AI controversies identified, and worked with 30 experts to evaluate the findings and discuss what society should be most concerned about in the years to come.
Professor of Science, Technology and Society, Noortje Marres, said: “Ultimately, it is the lack of transparency and oversight over the data and methods that AI is built on that should be the focus of society’s concerns, rather than only the application of AI in specific contexts.
“Our analysis found that the challenges associated with AI are well known in certain contexts and among diverse constituencies across industry, science and activism, and that the public increasingly participates in debate around AI. But the issues discussed are very hard to resolve because there is a lack of transparency and oversight around the fundamental structure of AI, in both its development and deployment.”
Academics say that if these concerns are not addressed, there could be huge future ramifications for quality control in science and innovation, and ultimately for critical infrastructure in society.
Professor Marres continues: “There is always accountability in science: whenever research is conducted, there are established ethics and data protection frameworks to comply with, and scientists are required to be transparent about how they conduct their research and what data they use.
“AI is scaling up and developing at pace and changing lots of different aspects of society. The lack of knowledge and transparency around data and methods under development in the AI industry means that scientific and technological developments cannot fully adhere to current regulations.
“The UK Government needs to recognise this new type of risk and the changes to science, innovation and society that are happening as a result.”
The report, Shifting AI controversies: How do we get from the AI controversies we have to the controversies we need?, is published online.
The Shaping AI project continues until February 2024.