
Science & Technology

AI bias: the organised struggle against automated discrimination

March 12, 2024

Image: varuna/Shutterstock

 

By Philip Di Salvo, University of St.Gallen and Antje Scharenberg, University of St.Gallen

 

In public administrations across Europe, artificial intelligence (AI) and automated decision making (ADM) systems are already being used extensively.

These systems, often built on opaque “black box” algorithms, recognise our faces in public, organise unemployment programmes, and even forecast exam grades. Their task is to predict human behaviour and to make decisions, even in sensitive areas such as welfare, health and social services.

As seen in the USA, where algorithmic policing has been readily adopted, these decisions are inherently influenced by underlying biases and errors. This can have disastrous consequences: in Michigan in June 2020, a black man was arrested, interrogated and detained overnight for a crime he did not commit. He had been mistakenly identified by an AI system.

These systems are trained on pre-existing human-made data, which is flawed by its very nature. This means they can perpetuate existing forms of discrimination and bias, leading to what Virginia Eubanks has called the “automation of inequality”.

Holding AI responsible

The widespread adoption of these systems raises an urgent question: what would it take to hold an algorithm to account for its decisions?

This was tested recently in Canada, when courts ordered an airline to pay compensation to a customer who had acted on bad advice given by its AI-powered chatbot. The airline had tried to rebut the claim by arguing that the chatbot was “responsible for its own actions”.

In Europe, there has been an institutional move to regulate the use of AI, in the form of the recently passed Artificial Intelligence Act.

This Act aims to regulate large and powerful AI systems, preventing them from posing systemic threats while also protecting citizens from their potential misuse. Its passage was preceded and accompanied by a wide range of direct actions, initiatives and campaigns launched by civil society organisations across EU member states.

This growing resistance to problematic AI systems has gained momentum and visibility in recent years. It has also influenced regulators’ choices in crucial ways, putting pressure on them to introduce measures that safeguard fundamental rights.

The Human Error Project

As part of The Human Error Project, based at Universität St. Gallen in Switzerland, we have studied the ways in which civil society actors are resisting the rise of automated discrimination in Europe. Our project focuses on AI errors, an umbrella term that encompasses the bias, discrimination and unaccountability of algorithms and AI.

Our latest research report is entitled “Civil Society’s Struggle Against Algorithmic Injustice in Europe”. Based on interviews with activists and representatives of civil society organisations, it explores how European digital rights organisations make sense of AI errors and how they question the use of AI systems, and it highlights the urgent need for these debates.

Our research revealed a panorama of concern, as most of the individuals we interviewed shared the now widely accepted view put forward by AI scholars: AI can often be racist, discriminatory and reductionist when it comes to making sense of human beings.

Many of our interviewees also pointed out that we should not consider AI errors as a purely technological issue. Rather, they are symptoms of wider systemic social issues that predate recent technological developments.

Predictive policing is a clear example of this. Because these systems are based on previous, potentially falsified or corrupted police data, they perpetuate existing forms of racialised discrimination, often leading to racial profiling and even unlawful arrests.
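To make that feedback loop concrete, here is a minimal, entirely hypothetical sketch (the districts, numbers and scoring rule are invented for illustration and do not describe any real system): a naive risk score fitted to historical arrest counts simply sends future patrols where past patrols went, even when the underlying offence rates are identical.

```python
# Hypothetical toy illustration only: a "predictive" score trained on historical
# arrest counts reproduces past patrol patterns, because arrests reflect where
# police looked, not where offences actually occurred.
from collections import Counter

# Assumed, made-up data: identical underlying offence rates in two districts,
# but District A was patrolled far more heavily in the past.
true_offences = {"District A": 100, "District B": 100}
past_patrol_hours = {"District A": 900, "District B": 100}

# Historical arrests scale with patrol presence, not with offences alone.
historical_arrests = Counter({
    d: int(true_offences[d] * past_patrol_hours[d] / 1000)
    for d in true_offences
})

# A naive "risk score": allocate future patrols in proportion to past arrests.
total = sum(historical_arrests.values())
future_patrol_share = {d: arrests / total for d, arrests in historical_arrests.items()}

print(historical_arrests)    # Counter({'District A': 90, 'District B': 10})
print(future_patrol_share)   # {'District A': 0.9, 'District B': 0.1}
# Although both districts have the same offence rate, 90% of future patrols go
# to District A, which will generate still more arrests there: a feedback loop.
```

The point of the sketch is that nothing in the arithmetic is "broken"; the inequity is inherited entirely from the data the system is fed.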

AI is already impacting your daily life

For European civil society actors, one key problem is a lack of awareness among the public that AI is being used to make decisions in numerous areas of their lives. Even when people are aware, it is often unclear how these systems operate, or who should be held responsible when they make an unfair decision.

This lack of visibility means the struggle for algorithmic justice is not only a political issue, but also a symbolic one: it calls our very ideas of objectivity and accuracy into question.

AI debates are notoriously dominated by media hype and panic, as our first research report showed. Consequently, European civil society organisations are forced to pursue two goals: speaking clearly about the issue, and challenging the view of AI as a panacea for social problems.

The importance of naming the problem is evident in our new report, where interviewees were hesitant to even use phrases like “AI Ethics,” or did not mention “AI” at all. Instead, they used alternative terms such as “advanced statistics,” “Automated Decision Making,” or “ADM systems”.

Reining in big tech

In addition to raising awareness among the general public, one of the main issues is curbing the dominant power of big tech. Several organisations we contacted have been involved in initiatives connected with the EU’s AI Act and have, in some cases, played a direct part in highlighting issues and closing loopholes that tech firms could exploit.

According to some organisations there are elements, such as biometric facial recognition in public spaces, where nothing short of an outright ban will suffice. Others even take a sceptical view of legislation as a whole, believing that regulation alone cannot solve all the issues presented by the continuing spread of algorithmic systems.

Our research shows that, in order to address the power of algorithmic systems, we have to stop seeing AI error as a technological issue, and start seeing it as a political one. What needs fixing is not a technological bug in the system, but the systemic inequalities that these systems perpetuate.

Philip Di Salvo, Postdoctoral researcher and lecturer, University of St.Gallen and Antje Scharenberg, International Postdoctoral Fellow, University of St.Gallen

This article is republished from The Conversation under a Creative Commons license. Read the original article.
