Artificial intelligence (AI) and algorithms are being used to monitor and control workers with little accountability or transparency, and the practice should be reined in by new legislation, according to a parliamentary inquiry into AI-powered workplace surveillance.

To deal with the “magnitude and pervasive use of AI at work”, MPs and peers belonging to the All-Party Parliamentary Group (APPG) for the Future of Work have called for the creation of an Accountability for Algorithms Act (AAA).

“The AAA offers an overarching, principles-driven framework for governing and regulating AI in response to the fast-changing developments in workplace technology we have explored throughout our inquiry,” said the APPG in its report The new frontier: artificial intelligence at work, published this week.

“It incorporates updates to our existing regimes for regulation, unites them and fills their gaps, while enabling additional sector-based rules to be developed over time. The AAA would establish: a clear direction to ensure AI puts people first, governance mechanisms to reaffirm human agency, and drive excellence in innovation to meet the most pressing needs faced by working people across the country.”

The cross-party group of MPs and peers conducted their inquiry between May and July 2021 in response to growing public concern about AI and surveillance in the workplace, which they said had become more pronounced with the onset of the Covid-19 pandemic and the shift to remote working.

“AI offers invaluable opportunities to create new work and improve the quality of work if it is designed and deployed with this as an objective,” said the report. “However, we find that this potential is not currently being materialised.

“Instead, a growing body of evidence points to significant negative impacts on the conditions and quality of work across the country. Pervasive monitoring and target-setting technologies, in particular, are associated with pronounced negative impacts on mental and physical wellbeing as workers experience the extreme pressure of constant, real-time micro-management and automated assessment.”

The report added that a core source of workers’ anxiety around AI-powered monitoring is a “pronounced sense of unfairness and lack of agency” around the automated decisions made about them.

“Workers do not understand how personal, and potentially sensitive, information is used to make decisions about the work that they do, and there is a marked absence of available routes to challenge or seek redress,” it said. “Low levels of trust in the ability of AI technologies to make or support decisions about work and workers follow from this.”

It added that workers have even less confidence in their ability to hold the developers and users of algorithmic systems to account for how the technology is used.

David Davis MP, Conservative chair of the APPG, said: “Our inquiry reveals how AI technologies have spread beyond the gig economy to control what, who and how work is done. It is clear that, if not properly regulated, algorithmic systems can have harmful effects on health and prosperity.”

Labour MP Clive Lewis added: “Our report shows why and how government must bring forward robust proposals for AI regulation. There are marked gaps in regulation at an individual and corporate level that are damaging people and communities right across the country.”

As part of the AAA, the APPG recommended establishing a duty for both public and private organisations to undertake, disclose and act on pre-emptive algorithmic impact assessments (AIAs), which would apply from the earliest stages of a system’s design and continue throughout its lifespan.

It said workers should also be given the right to be directly involved in the design and use of algorithmic decision-making systems.

In March 2021, on the basis of a report produced by employment rights lawyers, the Trades Union Congress (TUC) warned that huge gaps in UK law around the use of AI at work would lead to discrimination and unfair treatment of working people, and called for urgent legislative changes.

TUC general secretary Frances O’Grady said: “It’s great to see MPs recognise the important role trade unions can play in making sure workers benefit from advances in technology. There are some much-needed recommendations in this report – including the right for workers to disconnect and the right for workers to access clear information about how AI is making decisions about them.”

O’Grady also welcomed the APPG’s suggestion that the government should provide funding for the TUC’s technology taskforce, as well as union-led AI training for workers more generally.

In response to the APPG’s publication, Andrew Pakes, research director at the union Prospect, who also gave evidence to the inquiry, said the UK’s laws have not kept pace with the acceleration of AI at work.

“There are real risks of discrimination and other flawed decisions caused by the misapplication of AI in processes such as recruitment and promotion – and we could be left with a situation where workers lose out but have no recourse to challenge the decision,” said Pakes.

“Instead of looking to weaken our protections by removing the legal requirement for human oversight of AI decisions at work, government should be listening to this report and refreshing our rights so they are fit for the age of AI.”

In June 2021, the government’s Taskforce on Innovation, Growth and Regulatory Reform (TIGRR) recommended scrapping safeguards against automated decision-making contained within Article 22 of the General Data Protection Regulation (GDPR), in particular the need for human reviews of algorithmic decisions.