
Uber Eats delivery battle against AI bias shows how hard-won justice under UK law is

On Tuesday, the BBC reported that Uber Eats courier Pa Edrissa Manjang, who is Black, received a payment from Uber after he was locked out of the app by a “racist” facial recognition check. He had been using the app to pick up food delivery jobs on Uber’s platform since November 2019.

The news raises questions about how fit UK law is to deal with the growing use of artificial intelligence systems. In particular, the lack of transparency around automated systems rushed to market with promises of improved user safety and/or service efficiency risks blitz-scaling individual harm, even as achieving redress for those affected by AI-driven bias can take many years.

Since Uber rolled out its Real Time ID Check system in the UK in April 2020, it has received a series of complaints about failed facial recognition checks. The system, which is based on Microsoft’s facial recognition technology, requires account holders to submit a live selfie that is checked against a photo of them held on file to verify their identity.
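To make the flow described above concrete, here is a minimal sketch of what a selfie-versus-reference verification step could look like. It is purely illustrative: the `embed_face` and `verify_selfie` functions, the embedding model and the thresholds are hypothetical placeholders, not Uber’s or Microsoft’s actual implementation, and the human-review routing reflects only what Uber has publicly described.

```python
# Illustrative sketch of a selfie-vs-archived-photo check.
# All names, thresholds and the embedding model are hypothetical,
# NOT Uber's or Microsoft's real system.
from dataclasses import dataclass

import numpy as np


@dataclass
class VerificationResult:
    matched: bool
    similarity: float
    needs_human_review: bool


def embed_face(image_bytes: bytes) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model; a real system
    would run a trained neural network here."""
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2 ** 32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)  # unit-length embedding


def verify_selfie(live_selfie: bytes, archived_photo: bytes,
                  match_threshold: float = 0.8,
                  review_threshold: float = 0.6) -> VerificationResult:
    """Compare a live selfie against the photo held on file.

    Similarity at or above match_threshold passes automatically; scores in
    the grey zone between the two thresholds are flagged for human review
    rather than triggering an automatic suspension."""
    similarity = float(np.dot(embed_face(live_selfie), embed_face(archived_photo)))
    matched = similarity >= match_threshold
    needs_human_review = (not matched) and similarity >= review_threshold
    return VerificationResult(matched, similarity, needs_human_review)


if __name__ == "__main__":
    print(verify_selfie(b"live-selfie-bytes", b"archived-photo-bytes"))
```

The thresholds and the grey-zone routing are the key design choice in any such system: where exactly a “continued mismatch” tips over into suspension, and whether a human genuinely reviews borderline cases, is precisely the kind of detail the settlement leaves undisclosed.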

Failed identity checks

According to Manjang’s complaint, Uber suspended and then terminated his account following a failed identity check and a subsequent automated process, claiming to have found “continued mismatches” in the photos of his face he had taken to access the platform. Manjang filed a legal claim against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers and Couriers Union (ADCU).

Years of litigation followed, with Uber failing to have Manjang’s claim struck out or to have a deposit ordered as a condition of continuing the case. The tactic appears to have contributed to drawing out the litigation, which the EHRC said was still at a “preliminary stage” in autumn 2023, noting that the case demonstrates “the complexity of a claim dealing with AI technology”. A final hearing had been scheduled to run for 17 days in November 2024.

That hearing will now not take place, after Uber offered (and Manjang accepted) a settlement, meaning the finer details of what exactly went wrong, and why, will not be made public. The terms of the financial settlement have not been disclosed either, and Uber did not provide details or comment on exactly what went wrong when we asked.

We also reached out to Microsoft for a response on the outcome of the case, but the company declined to comment.

Despite its settlement with Manjang, Uber has not publicly acknowledged any problem with its systems or processes. The company’s statement about the settlement denies that courier accounts can be terminated as a result of AI assessments alone, claiming that facial recognition checks are backstopped by “robust human review.”

“Our Real Time ID Check is designed to help keep everyone who uses our app safe, and includes robust human review to make sure that we’re not making decisions about someone’s livelihood in a vacuum, without oversight,” the company said in a statement. “Automated facial verification was not the reason for Mr Manjang’s temporary loss of access to his courier account.”

Clearly, though, something went wrong with Uber’s identity checks in Manjang’s case.

Worker Info Exchange (WIE), a digital rights advocacy group for platform workers that also backed Manjang’s complaint, obtained all of his selfies from Uber through a subject access request under UK data protection law and was able to show that every photo he had submitted to the facial recognition check was indeed a photo of himself.

“Following his dismissal, Pa sent numerous messages to Uber to rectify the problem, specifically asking for a human to review his submissions. Each time Pa was told ‘we cannot confirm that the photos provided are indeed of you and, due to the continued mismatches, we have made the final decision to end our partnership with you,’” WIE recounted in a discussion of his case in a wider report looking at “data-driven exploitation in the gig economy”.

Based on the details of Manjang’s complaint that have been made public, it is clear that both Uber’s facial recognition checks and the human review system it set up as a safety net for automated decision-making failed in this case.

Equality law plus data protection

The case raises questions about the appropriateness of UK law in governing the use of artificial intelligence.

Manjang was ultimately able to obtain a settlement from Uber through legal proceedings based on equality law: specifically, a discrimination claim brought under the UK’s Equality Act 2010, which lists race as a protected characteristic.

Baroness Kishwer Falkner, chair of the EHRC, criticised the fact that the Uber Eats courier had to bring a legal claim “in order to understand the opaque processes that affected his work”, she wrote in a statement.

“AI is complex, and presents unique challenges for employers, lawyers and regulators. It is important to understand that as AI usage increases, the technology can lead to discrimination and human rights abuses,” she wrote. “We are particularly concerned that Mr Manjang was not made aware that his account was in the process of deactivation, nor provided any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI.”

UK data protection law is the other relevant piece of legislation here. In theory, it should provide strong protection against opaque AI processes.

The selfie data relevant to Manjang’s claims was obtained using the data access rights contained in the UK GDPR. Had he not been able to obtain such clear evidence that Uber’s identity checks had failed, the company might not have chosen to settle at all. Having to prove that a proprietary system is flawed without being able to access the relevant personal data would further stack the odds in favour of the much better-resourced platforms.

Enforcement gap

In addition to data access rights, other powers in the UK GDPR should provide individuals with additional safeguards. The law requires a lawful basis for processing personal data and encourages system deployers to proactively assess potential harm by conducting data protection impact assessments. This should force further examination of harmful AI systems.

However, these protections need to be enforced to be effective, including creating a deterrent against the rollout of biased AI.

In this case, however, the relevant enforcement body, the Information Commissioner’s Office (ICO), failed to step in and investigate complaints against Uber, despite complaints about its failing identity checks dating back to 2021.

Jon Baines, senior data protection expert at law firm Mishcon de Reya, said the ICO’s “lack of appropriate enforcement” undermined legal protections for individuals.

“We should not assume that existing legal and regulatory frameworks cannot deal with some of the potential harms of AI systems,” he told TechCrunch. “In this case, it strikes me that … the Information Commissioner certainly has jurisdiction to consider both the individual case and, more broadly, whether the processing being carried out was lawful under the UK GDPR.

“Things like: is the processing fair? Is there a lawful basis? Is there an Article 9 condition (given that special categories of personal data are being processed)? But also, and crucially, was there a robust data protection impact assessment prior to the implementation of the verification app?”

“So, yes, the ICO should absolutely be more proactive,” he added, questioning the regulator’s lack of intervention.

We contacted the ICO in relation to Manjang’s case, asking it to confirm whether it was investigating Uber’s use of artificial intelligence for identity checks in light of the complaint. A spokesman for the regulator did not answer our questions directly but issued a general statement stressing that organizations need to “know how to use biometrics in a way that does not interfere with people’s rights.”

Its statement also said: “Our latest biometric guidance makes clear that organisations must mitigate the risks that come with using biometric data, such as errors in accurately identifying people and bias within the system,” adding: “If anyone has concerns about how their data is being used and processed, they can report those issues to the ICO.”

At the same time, the government is diluting data protection laws through a post-Brexit data reform bill.

Additionally, the government confirmed earlier this year that it would not be introducing dedicated AI safety legislation at this time, despite Prime Minister Rishi Sunak’s high-profile claims that AI safety is a priority area for his government.

Instead, it backed a proposal set out in its March 2023 AI white paper to rely on existing laws and regulatory bodies extending their oversight activities to cover AI risks that might arise on their patch. One tweak to its approach, announced in February, is a small amount of extra funding (£10 million) for regulators, which the government suggests could be used to research AI risks and develop tools to help them examine AI systems.

No timetable was provided for disbursing this small pot of extra funding. Multiple regulators are in the frame here, so if the cash were split equally between bodies such as the ICO, the EHRC and the Medicines and Healthcare products Regulatory Agency (to name just three of the 13 regulators and departments the UK secretary of state wrote to last month, asking them to publish an update on their “strategic approach to AI”), each would receive less than £1 million to top up budgets already stretched by fast-scaling AI risks.

Frankly, if AI safety really is a government priority, this level of additional resource for already overstretched regulators looks incredibly low. It also means, as critics of the government’s approach have pointed out before, that there is no new money or active oversight for AI harms that fall between the cracks of the UK’s existing regulatory patchwork.

A new AI safety law might send a stronger signal of priority, akin to the EU’s risk-based AI harms framework, which is speeding toward adoption as hard law across the bloc. But there would also need to be the will to actually enforce it. And that signal must come from the top.
