As governments look to regulate the online world, scrutiny of the algorithms that sit behind popular websites and apps is only going to increase. With doubts over whether self-regulation can ever really work, and with many systems remaining opaque or hard to analyse, some experts are calling for a new approach. One firm, Barcelona-based Eticas, is pioneering a method of adversarial auditing.
The European Union’s (EU) Digital Services Act (DSA) is due in 2024 and will require any company providing digital services to conduct independent audits and risk assessments to ensure the safety and fundamental rights of users are respected in their environments. In anticipation of this, Eticas has conducted several external, adversarial audits of tech companies’ algorithms.
The audits conducted by Eticas so far include examinations of how the algorithms of YouTube and TikTok influence the portrayal of migrants, and how the artificial intelligence (AI) algorithms used by ride-hailing apps in Spain (namely Uber, Cabify and Bolt) affect users, workers and competitors.
Iliyana Nalbantova, an adversarial audits researcher at Eticas, told Computer Weekly that “adversarial auditing” is essentially the practice of evaluating algorithms or AI systems that have little potential for transparent oversight, or are otherwise “out-of-reach” in some way.
While Eticas is usually an advocate for internal socio-technical auditing, where organisations conduct their own end-to-end audits that consider both the social and technical aspects to fully understand the impacts of a given system, Nalbantova said that developers themselves are often not willing to carry out such audits, as there are currently no requirements to do so.
“Adversarial algorithmic auditing fills this gap and allows [us] to achieve some level of AI transparency and accountability that is not normally attainable in those systems,” she said.
“The focus is very much on uncovering harm. That can be harm to society as a whole, or harm to a specific community, but the idea with our approach is to empower those communities [negatively impacted by algorithms] to uncover those harmful effects and find ways to mitigate them.”
Nalbantova added that while you can never “achieve a full comprehensive assessment of a system” with adversarial auditing, because it is impossible to access every aspect of a system in the way an internal audit can, the value of the approach lies in its ability to help understand the social impacts of systems, and how they are affecting people in practice.
“It is a valuable exercise on its own because it allows you [to] see what can be done by the company itself if they decide to audit on their own,” she said. “What it really does is it raises flags, so maybe we don’t have all of the information necessary, but we have enough…to raise concerns and invite action.”
Audit findings and responses
Looking at the audits conducted so far, Eticas claimed that YouTube’s algorithm reinforces a dehumanising, stereotypical view of migrants, who it said are usually depicted as large groups of non-white people with their faces occluded, in contrast to “refugees”, who it said are more often depicted as small groups of white people with clearly visible faces. TikTok’s algorithm, meanwhile, deprioritises any content containing political discourse on migration in favour of content with a clear focus on “entertainment”.
The accompanying report on the audit noted this “led to the conclusion that TikTok’s algorithm does not actively shape the substance of political discourse on migration, but it appears to regulate its overall visibility via its recommender system and personalisation mechanism”.
In its ride-hailing audit, Eticas said it found a general lack of transparency in all three firms’ use of algorithms for payment and the profiling of workers (raising concerns about labour law compliance), and noted that their pricing algorithms appear to collude on some vital routes through major cities, which in turn suggests “indirect price-fixing by algorithmic means”.
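Eticas has not published the analysis code behind this finding, but a first-pass check for the kind of price co-movement the report describes could in principle look like the sketch below: gather fare quotes for the same routes from each app at matching times and measure how closely they track each other. The quote figures, route and the interpretation threshold here are purely illustrative assumptions, not data or methods from the audit.

```python
# Illustrative sketch only -- not Eticas' published methodology.
# Assumes fare quotes for the same route have already been collected from
# each app's public-facing interface at matching points in time.
import statistics


def pearson(xs, ys):
    """Pearson correlation between two equal-length series of fare quotes."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical quotes (in euros) for one busy route, sampled at six times.
quotes = {
    "uber":   [11.2, 14.8, 13.1, 19.5, 12.0, 15.3],
    "cabify": [11.0, 15.1, 13.4, 19.0, 12.2, 15.0],
    "bolt":   [10.8, 14.5, 12.9, 18.8, 11.9, 14.7],
}

providers = list(quotes)
for i, a in enumerate(providers):
    for b in providers[i + 1:]:
        # Persistently high correlation on the same routes is a flag for
        # further investigation, not proof of collusion in itself.
        print(f"{a} vs {b}: r = {pearson(quotes[a], quotes[b]):.2f}")
```

In practice, any such check would also need to control for factors that legitimately move prices together, such as demand surges, distance and time of day.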
It also found that Uber’s algorithm could potentially discriminate based on a neighbourhood’s socio-economic characteristics, thus reducing the availability of service in low-income areas in a way that may constitute a breach of Spain’s General Consumer and User Protection Act.
Commenting on the adversarial audit, a YouTube spokesperson said: “While viewers may encounter debate around issues like immigration policy on YouTube, hate speech is not allowed on the platform. Our hate speech policy, which we rigorously enforce, specifically prohibits content that promotes violence or hatred against individuals or groups based on attributes like their immigration status, nationality, or ethnicity.”
Cabify also challenged the outcome of Eticas’ audit: “Cabify sets its rates in the market independently from other operators, following its own pricing policy and its own algorithm, available to all on its website. In this sense, Cabify reiterates that prices have never been set together with any other technological firm, as already accredited by the CNMC in 2020.
“Cabify can assure that its operations do not in any case violate competition law, and it denies the claim that, together with other companies in the sector, it has been directly or indirectly fixing commercial or service conditions.”
Cabify added that, in relation to concerns raised by Eticas about the platform’s compliance with labour rights in Spain, working conditions of drivers are set by companies holding the operating licences: “Cabify requires its collaborating fleets to comply exhaustively with the applicable regulations, even foreseeing it as a cause for termination of the contracts,” it said.
Computer Weekly also contacted TikTok, Uber, and Bolt about the audits, but the firms did not respond.
The adversarial auditing process
Nalbantova noted that while each audit necessarily differed depending on the context of the system in question and the issue being investigated, as well as the level of information available to Eticas as an external third party, the underlying approach is still to consider algorithms and AI as socio-technical systems.
“We come from the awareness that any kind of algorithms, any kind of AI systems, use data that is informed by what’s going on in society, and then the outputs of those algorithmic processes affect society in turn, so it’s a two-way communication and interaction there,” said Nalbantova.
“That’s why any adversarial audit should incorporate both social and technical elements, and then how that technical element might look very much depends on the system that is being audited and on the approach the auditors have decided to take in this particular case.”
Despite the necessary variance in the details of individual audits, Eticas has been working to systemise an adversarial auditing methodology that others can use as a repeatable framework to begin investigating the social impacts of any given algorithm. Nalbantova said while the creation of this methodology is “an iterative and agile process”, Eticas has been able to identify common steps that each adversarial audit should take to achieve a high level of rigour, consistency, and transparency.
“The first step is obviously choosing the system and making sure that it is a system with impact, and a system that you can access in some way,” she said, adding that such “access points” could include affected communities to interview, a web or app-based system’s public-facing interface, or open source code repositories (although the latter is very rare).
From here, auditors should conduct a “contextual analysis” to build an understanding of the system and how it interacts with the legal, social, cultural, political and economic environment in which it operates, which helps them form an initial hypothesis of what is going on under the hood. This contextual analysis should also be continuously iterated on as the audit progresses.
Eticas then approaches the organisations developing and deploying the systems directly, so they also have a chance to be involved in the process, but it prioritises engagement and “alliance building” with affected people and communities.
“A step that we insist on in our methodology is the involvement of affected communities. So, in some instances, affected communities have come to us with a problem that maybe they’re not sure how to examine,” she said. “For example, with our audit of ride-hailing apps, it was an organic partnership with two organisations, the Taxi Project and Observatorio TAS, who are advocating for workers’ rights in the taxi sector.”
All of this also entails a “feasibility assessment” of whether the audit can realistically go ahead: if no access points can be identified, or auditors cannot legally get hold of the necessary data, it may not be possible at all.
Once auditors have identified a system, done a contextual analysis, approached a variety of stakeholders, and assessed the overall feasibility of the audit, Nalbantova said the final stage is to design a methodology for the audit covering data collection and analysis, ending with possible mitigations and recommendations for any harmful effects identified.
“This process is not without challenges, and it requires a lot of creativity, a lot of thinking outside the box, but we’ve found that those steps more or less address most of the issues that come up during the planning and the execution of an adversarial audit, and can be adapted to different systems,” she said.
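What that data-collection-and-analysis stage looks like will vary by system, but as a rough illustration of what an external auditor’s record-keeping might involve when only a public-facing interface is available, the sketch below logs each recommendation shown to a test profile for later analysis. The field names and CSV layout are assumptions made for the purpose of the example, not Eticas’ actual schema.

```python
# Illustrative sketch only: one way to log observations gathered from a
# system's public-facing interface during an adversarial audit.
# The field names and CSV layout are assumptions, not Eticas' schema.
import csv
import os
from dataclasses import dataclass, asdict, fields
from datetime import datetime, timezone


@dataclass
class Observation:
    """One item shown to one test profile at one point in time."""
    profile_id: str   # which test account or persona saw the item
    context: str      # search term, seed video or route requested
    item_id: str      # identifier of the recommended item
    label: str        # content label applied during analysis
    observed_at: str  # ISO 8601 timestamp of the observation


def append_observations(path, observations):
    """Append observations to a CSV file, writing a header if the file is new."""
    fieldnames = [f.name for f in fields(Observation)]
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if is_new:
            writer.writeheader()
        for obs in observations:
            writer.writerow(asdict(obs))


append_observations("audit_log.csv", [
    Observation("profile_a", "migration news", "vid_123", "entertainment",
                datetime.now(timezone.utc).isoformat()),
])
```

Keeping a complete, timestamped record of what each test profile was shown is also what makes an adversarial audit replicable by others, which Eticas argues is central to the approach.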
Keeping an open mind
In its report on the TikTok audit, Eticas noted that while the firm’s algorithm did not pick up on users’ political interests for personalisation as quickly as initially expected (instead choosing to prioritise “entertainment” content regardless of a user’s political views), investigations by the Wall Street Journal and NewsGuard, from 2021 and 2022 respectively, found the complete opposite.
Those investigations “both found evidence that TikTok’s algorithm picks up implicit user [political] interests shortly after account creation and curates highly personalised recommendation feeds quickly [within 40 minutes to two hours],” it said.
“With this, the results of our audit and other recent studies seem to suggest that the level of personalisation in TikTok’s recommender system has been adjusted in the past year.”
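None of these studies publishes a single formula for how quickly personalisation kicks in, but the underlying measurement can be sketched simply: from a logged feed, find the first point at which a topic’s share of recent recommendations crosses a threshold. The window size, threshold and sample data below are illustrative assumptions only, not figures from the Eticas, Wall Street Journal or NewsGuard work.

```python
# Illustrative sketch only: estimating when a test profile's feed "tips"
# toward a topic, using feed observations that have already been logged.
from datetime import datetime


def personalisation_onset(feed, topic, window=20, threshold=0.3):
    """Return the timestamp at which the rolling share of `topic` items in
    the feed first reaches `threshold`, or None if it never does."""
    for i in range(window, len(feed) + 1):
        recent = feed[i - window:i]
        share = sum(1 for _, label in recent if label == topic) / window
        if share >= threshold:
            return recent[-1][0]  # timestamp of the item that tipped it
    return None


# feed: list of (timestamp, content_label) pairs recorded for one session.
feed = [(datetime(2023, 1, 1, 12, m), "entertainment") for m in range(40)]
feed += [(datetime(2023, 1, 1, 12, 40 + m), "migration_politics") for m in range(20)]
print(personalisation_onset(feed, "migration_politics"))
```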
Nalbantova added that while the results were unexpected, they illustrate that algorithms evolve over time, and that their impacts need to be continuously reassessed.
“Sometimes they are very dynamic and change really quickly…this is why it is so important for any auditing process to be really transparent and public so that it can be replicated by others, and it can be tested more and more,” she said.
“We don’t have a specific timeframe in which adversarial audits should be repeated, but for internal audits, for example, we recommend at least once a year or ideally twice a year, so a similar timeframe could be used.”
She added that for social media algorithms, which “change all the time”, audits should be even more regular.
However, Patricia Vázquez Pérez, head of marketing, PR and comms at Eticas, noted that the response from companies to its audits has been lacking.
In response to the ride-hailing audit, for example, she noted that Cabify had a “strong response” and attempted to discredit the rigour of the report and question its findings.
“Usually before we do an audit, we get in contact with that company, trying to expose the initial hypotheses of what we think might be happening, and most of the time we get silence,” she said.
“Sometimes after the report and the audits are published, we get negative answers from the companies. They’ve never been open to say, ‘Okay, now that you’ve published this, we are open to showing you our code for an internal audit’ – they never wanted that.”
Nalbantova said that Eticas’ adversarial audits show that companies are only committed to transparency in theory: “Companies are only saying it in principle and not doing anything in practice.”
She added, however, that Eticas will still strive to provide possible mitigation measures for issues identified by audits, even where companies respond negatively to the results of an audit.
Computer Weekly contacted Cabify about its reaction to Eticas’ audit, and whether it would work alongside external auditors in the future. The company responded: “Cabify reiterates its commitment to both consumers and institutions to offer a transparent, fair and quality service that favours sustainable, accessible mobility and improves life in cities. The company has cooperated and will continue cooperating with public administrations and authorities, being at their complete disposal for any consultation or request for information.”
All the other firms audited were also asked about whether they would work alongside Eticas or other external auditors in the future, but none responded on that point.
Eticas is currently working to develop a guide for adversarial auditing that details its methodology, and which it plans to publish in the coming months.
Nalbantova said it would contain information on all the steps necessary to conduct an adversarial audit, which methods to use (as well as how and when), and details on the strengths and limitations of the adversarial auditing approach, all with the aim of helping to mainstream the practice while maintaining high levels of rigour and transparency throughout the process.
“With this guide, what we’re trying to do is empower social science researchers, journalists, civil society organisations, data scientists, users, members of affected communities especially, to become auditors,” she said. “We think that it doesn’t matter who is actually doing the audit as much as the methodology they follow.”