Press "Enter" to skip to content

Uber still dragging its feet on algorithmic transparency, Dutch court finds

Uber has been found to have failed to comply with European Union algorithmic transparency requirements in a legal challenge brought by drivers whose accounts were terminated by the ride-hailing giant with the help of automated account flags.

Uber also failed to convince the court to cap the daily fines of €4,000 being imposed for ongoing non-compliance, which now total more than half a million euros (€584,000).

The Amsterdam District Court found in favor of two of the drivers, who are litigating for access to data about what they couch as ‘robo-firings’. But the court decided Uber had provided sufficient information to a third driver about why its algorithm flagged his account for potential fraud.

The drivers are suing Uber to obtain information they argue they are legally entitled to regarding significant automated decisions taken about them.

The European Union’s General Data Protection Regulation (GDPR) provides individuals both with a right not to be subject to solely automated decisions that have a legal or similarly significant effect, and with a right to receive information about such algorithmic decision-making, including “meaningful information” about the logic involved, its significance and the envisaged consequences of such processing for the data subject.

The nub of the issue relates not to fraud and/or risk reviews purportedly carried out on flagged driver accounts by (human) Uber staff — but to the automated account flags themselves which triggered these reviews.

Back in April an appeals court in the Netherlands also found largely in favor of platform workers litigating against Uber and another ride-hailing platform, Ola, over data access rights related to alleged robo-firing — ruling the platforms cannot rely on trade secrets exemptions to deny drivers access to data about these sorts of AI-powered decisions.

Per the latest ruling, Uber sought to rehash a commercial secrets argument against disclosing more data to drivers about why its AIs flagged their accounts. It also argues, more generally, that its anti-fraud systems would not function if drivers were given full details of how they work.

In the case of the two drivers who prevailed against Uber’s arguments, the company was found not to have provided any information at all about the “exclusively” automated flags that triggered account reviews. Hence the finding of an ongoing breach of EU algorithmic transparency rules.

The judge further speculated Uber may be “deliberately” trying to withhold certain information because it does not want to give an insight into its business and revenue model.

In the case of the third driver, for whom the court conversely found that Uber had provided “clear and, for the time being, sufficient information”, the ruling records the company’s explanation that the decision-making process which triggered the flag began with an automated rule that looked at: (i) the number of cancelled rides for which this driver received a cancellation fee; (ii) the number of rides performed; and (iii) the ratio of the driver’s cancelled to performed rides in a given period.

“It was further explained that because [this driver] performed a disproportionate number of rides within a short period of time for which he received a cancellation fee the automated rule signalled potential cancellation fee fraud,” the court also wrote in the ruling [which is translated into English using machine translation]. 

The driver had sought more information from Uber, arguing that what it provided was still unclear or too brief, and not meaningful, because he does not know where the line sits for Uber to label a driver a fraudster.

However, in this case, the interim relief judge agreed with Uber that the ride-hailing giant did not have to provide this additional information because that would make “fraud with impunity to just below that ratio childishly easy”, as Uber put it.
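Reduced to its essentials, the rule the court describes appears to be a simple ratio check against an undisclosed cut-off. Here is a minimal sketch in Python, with invented names and an assumed threshold value, since the real cut-off is precisely the detail Uber declined to reveal:

```python
# Hypothetical sketch only, not Uber's actual system. The function name,
# parameter names and THRESHOLD value are all assumptions for illustration.

THRESHOLD = 0.25  # assumed ratio; the real value is undisclosed


def flags_cancellation_fee_fraud(fee_cancellations: int, rides_performed: int) -> bool:
    """Flag an account for human review when fee-earning cancellations are
    disproportionate to rides performed in the period under review."""
    if rides_performed == 0:
        # With no completed rides, any fee-earning cancellation looks disproportionate.
        return fee_cancellations > 0
    return fee_cancellations / rides_performed > THRESHOLD
```

On this framing, the dispute boils down to whether drivers can be told something like the THRESHOLD value without, as Uber argues, making the rule trivially easy to game by staying just below it.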

The wider question of whether Uber was right to classify this driver (or the other two) as a fraudster has not been assessed at this point in the litigation.

The long-running litigation in the Netherlands looks to be working towards establishing where the line lies between how much information platforms that deploy algorithmic management on workers must provide them on request under EU data protection rules, and how much ‘blackboxing’ of their AIs they can claim is necessary to fuzz details so that anti-fraud systems can’t be gamed via driver reverse engineering.

Reached for a response to the ruling, an Uber spokesperson sent TechCrunch this statement:

The ruling related to three drivers who lost access to their accounts a number of years ago due to very specific circumstances. At the time when these drivers’ accounts were flagged, they were reviewed by our Trust and Safety Teams, who are specially trained to spot the types of behaviour that could potentially impact rider safety. The Court confirmed that the review process was carried out by our human teams, which is standard practice when our systems spot potentially fraudulent behaviour.

The drivers in the legal challenge are being supported by the data access rights advocacy organization, Worker Info Exchange (WIE), and by the App Drivers & Couriers Union.

In a statement, Anton Ekker of Ekker law, which is representing the drivers, said: “Drivers have been fighting for their right to information on automated deactivations for several years now. The Amsterdam Court of Appeal confirmed this right in its principled judgment of 4 April 2023. It is highly objectionable that Uber has so far refused to comply with the Court’s order. However, it is my belief that the principle of transparency will ultimately prevail.”

In a statement commenting on the ruling, James Farrar, director of WIE, added: “Whether it is the UK Supreme Court for worker rights or the Netherlands Court of Appeal for data protection rights, Uber habitually flouts the law and defies the orders of even the most senior courts. Uber drivers and couriers are exhausted by years of merciless algorithmic exploitation at work and grinding litigation to achieve some semblance of justice while government and local regulators sit back and do nothing to enforce the rules. Instead, the UK government is busy dismantling the few protections workers do have against automated decision making in the Data Protection and Digital Information Bill currently before Parliament. Similarly, the proposed EU Platform Work Directive will be a pointless paper tiger unless governments get serious about enforcing the rules.”

Source: TechCrunch