Nearly three-quarters of the UK public say that introducing laws to regulate artificial intelligence (AI) would increase their comfort with the technology, amid rising public concern over the implications of its roll-out.
In a national survey of more than 3,500 UK residents conducted by the Ada Lovelace Institute and the Alan Turing Institute – which asked people about their awareness and perceptions of different AI use cases, as well as their experiences of AI-related harms – the vast majority (72%) said laws and regulations would make them more at ease with the proliferation of AI technologies.
Nearly nine in 10 said it was important that the government or regulators have the power to halt the use of AI products deemed to pose a risk of serious harm to the public, while more than 75% said government or independent regulators – rather than private companies alone – should oversee AI safety.
The institutes also found that people’s exposure to AI harms is widespread, with two-thirds of the public reporting encounters with various negative impacts of the technology. The most reported harms were false information (61%), financial fraud (58%) and deepfakes (58%).
The survey also found support for the right to appeal against AI-based decisions and for greater transparency: 65% said that procedures for appealing decisions would increase their comfort with the technology, while 61% said the same of more information about how AI has been used to make a decision.
However, the institutes said this rising demand for AI regulation comes at a time when the UK lacks comprehensive regulation of the technology.
In a report accompanying the survey findings, the institutes added that while they welcome the recognition in the UK’s AI Opportunities Action Plan that “government must protect UK citizens from the most significant risks presented by AI and foster public trust in the technology, particularly considering the interests of marginalised groups”, it contains no specific commitments on how to achieve this ambition.
“This new evidence shows that, for AI to be developed and deployed responsibly, it needs to take account of public expectations, concerns and experiences,” said Octavia Field Reid, associate director at the Ada Lovelace Institute, adding that the government’s legislative inaction on AI stands in direct contrast to the public’s concerns about the technology and its desire to see it regulated.
“This gap between policy and public expectations creates a risk of backlash, particularly from minoritised groups and those most affected by AI harms, which would hinder the adoption of AI and the realisation of its benefits. There will be no greater barrier to delivering on the potential of AI than a lack of public trust.”
According to the survey – which purposefully oversampled socially marginalised groups, including people from low-income backgrounds and minoritised ethnic groups – attitudes to AI vary greatly between demographics, with traditionally underrepresented populations reporting more concerns and perceiving AI as less beneficial. For example, 57% of black people and 52% of Asian people expressed concern about facial recognition in policing, compared with 39% of the wider population.
Across all of the AI use cases asked about in the survey, people on lower incomes perceived them as less beneficial than people on higher incomes.
In general, however, people across all groups were most concerned about the use of their data and representation in decision-making, with 83% of the UK public saying they are worried about public sector bodies sharing their data with private companies to train AI systems.
Asked to what extent they felt their views and values were represented in current decision-making about AI and how it affects their lives, half of the public said they did not feel represented.
“To realise the many opportunities and benefits of AI, it will be important to build consideration of public views and experiences into decision-making about AI,” said Helen Margetts, programme director for public policy at the Alan Turing Institute.
“These findings suggest the importance of government’s promise in the AI Action Plan to fund regulators to scale up their AI capabilities and expertise, which should foster public trust. The findings also highlight the need to tackle the differential expectations and experiences of those on lower incomes, so that they gain the same benefits as high-income groups from the latest generation of AI.”
In their accompanying report, the institutes said that, to ensure the introduction of AI-enabled systems in public sector services works for everyone, policymakers must engage and consult the public to capture the full range of attitudes expressed by different groups.
“Capturing diverse perspectives may help to identify high-risk use cases, novel concerns or harms, and/or potential governance measures that are needed to garner public trust and support adoption,” it said.
Although people’s inclusive participation in both the public and private management of AI systems is key to making the technology work for the benefit of all, Computer Weekly has previously reported that there are currently no avenues for meaningful public engagement.
According to government chief scientific adviser Angela McLean, for example, there are no viable channels available to the public that would allow them to have their voices heard on matters of science and technology.
In September 2024, a United Nations (UN) advisory body on AI also highlighted the need for governments to collaborate on the creation of a “globally inclusive and distributed architecture” to govern the technology’s use.
“The imperative of global governance, in particular, is irrefutable,” it said. “AI’s raw materials, from critical minerals to training data, are globally sourced. General-purpose AI, deployed across borders, spawns manifold applications globally. The accelerating development of AI concentrates power and wealth on a global scale, with geopolitical and geo-economic implications.
“Moreover, no one currently understands all of AI’s inner workings enough to fully control its outputs or predict its evolution. Nor are decision-makers held accountable for developing, deploying or using systems they do not understand. Meanwhile, negative spillovers and downstream impacts resulting from such decisions are also likely to be global.”
It added that although national governments and regional organisations will be crucial to controlling the use of AI, “the very nature of the technology itself – transboundary in structure and application – necessitates a global approach”.