Equally Ours blog: Equality and human rights in the era of Artificial Intelligence 

Equally Ours’ Lukia Nomikos on the equality and human rights implications of Artificial Intelligence (AI) and the importance of adopting a rights-based approach to the development of an AI regulatory framework.   

Artificial Intelligence (AI) technology is evolving at a previously unimaginable pace, in terms of infrastructure speed, availability and scale – ChatGPT didn’t even exist a year ago and it now has nearly 200 million users. AI is becoming more powerful and significantly cheaper by the month. Conversations surrounding AI’s potential impacts have naturally also surged. These tend to reflect a mixture of excessive zeal and excessive dread – will AI cure cancer or destroy us all? 

While AI will bring many benefits, a greater focus on its equality and human rights implications is urgently needed. AI advancement may lead to discrimination, deepen inequalities and pose a real threat to our rights and democracy.

Given the rapid pace of AI advancement and its associated risks, the current policy context in the UK – particularly the attacks on the Human Rights Act and the European Convention on Human Rights (ECHR) – and the very recent 2023 AI Safety Summit, it is vital that we all urgently develop our understanding of the opportunities and challenges AI advancement brings.

For these reasons, as well as the fact that AI is inextricably linked to many areas of Equally Ours’ work, our Human Rights and Equality Strategy group meeting in October focused on the equality and human rights implications of AI advancement and explored how we, as social justice organisations, can advocate for a rights-based approach to the development of an AI regulatory framework. 

We were joined by experts from the Equality and Human Rights Commission, Foxglove and Luminate who all highlighted some of the greatest threats AI poses to equality, human rights and democracy, and shared their views on how we can begin to address these risks. This blog is based on their illuminating insights, examples and suggested solutions.  

Algorithmic bias and discrimination  

AI technology threatens human rights and equality in a number of different ways. One of these is the bias contained in algorithmic systems and the resulting discrimination.  

AI is not as neutral and rational as people often think. All AI systems are designed, developed and deployed by humans and as humans we all hold certain beliefs and preconceptions about different groups of people, whether in the form of unconscious bias or conscious prejudice. It is therefore almost inevitable that the people who build and manage these systems carry their human flaws into them.  

This bias can creep into these technologies at all stages of the AI lifecycle. AI systems learn to make decisions based on the data they’re trained on, which can contain errors, inaccuracies and gaps based on human biases, or reflect historical and social inequalities. In other words, these systems are only as good as the data that’s fed to them.  
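
To make the mechanism concrete, here is a minimal, hypothetical Python sketch – invented for this blog, not taken from any real system – in which a toy ‘model’ trained on deliberately skewed historical hiring records simply reproduces the bias those records contain:

```python
# Minimal illustrative sketch: a toy "model" that learns from biased
# historical data. All groups, records and numbers here are invented.
from collections import defaultdict

# Synthetic hiring history as (group, hired) pairs. Group "A" was
# historically favoured; the bias lives in the data, not the code.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

# "Training": count hires and applications per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict_hire_probability(group: str) -> float:
    """Score a new candidate using the historical hire rate for their group."""
    hires, total = counts[group]
    return hires / total

# Two otherwise identical candidates are scored very differently,
# purely because of the group recorded in the historical data.
print(predict_hire_probability("A"))  # 0.8
print(predict_hire_probability("B"))  # 0.2
```

Nothing in the code says that one group is less employable; the discrimination comes entirely from the historical records the system learns from.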

The same problems apply in the design of AI systems – the algorithms used to process the data and produce outputs may reflect the assumptions and preferences of the people who built them.  

How the technology is used matters as well. If AI systems aren’t deployed in a transparent way – or if we don’t even know that they’re being used – it can be very difficult to identify the biases in the decisions that the systems make.  

There is no shortage of examples of algorithmic bias and discrimination. A few years ago, Amazon built a system to scan CVs and applications for software engineering jobs. The system taught itself to penalise applications from women, because most previously successful applicants had been men.

In another widely known example, in 2020 the UK government used an algorithm to calculate A-level grades when students were unable to sit exams due to the Covid-19 pandemic. Almost 40% of students received grades lower than their teachers had predicted, often because of their school’s poor historical results rather than their own performance. Public outcry followed and the government withdrew the algorithmic grades in favour of teachers’ assessments.

As these examples show, if we use a biased system to make decisions, the decisions it takes will inevitably also be biased. Those decisions create new inequalities in the real world, which are then fed back into the system as the basis for future decisions. Because of this feedback loop, and the sheer scale at which AI systems are applied, AI has the potential not only to perpetuate but also to amplify existing stereotypes and biases.

AI therefore disproportionately impacts the human rights of already vulnerable and marginalised individuals and communities. And in doing so, it deepens existing inequalities and creates a new form of injustice, rooted in technology. 

Information integrity, elections and threats to democracy  

AI also has the potential to erode democracy. AI tools can generate hyper-realistic text, images, audio and video, which can be used to manipulate public opinion and automatically censor critical online content. In this way, generative AI is boosting the spread of disinformation and propaganda, threatening democratic debate.

At least 65 national-level elections are taking place in more than 50 countries in 2024, and in many of these countries democracy is already under threat. We’re likely to witness the greatest level of election interference we’ve ever seen: large-scale disinformation campaigns that pollute information ecosystems and exploit voters’ political, racial and religious identities, the spread of misleading voting information, and voter suppression campaigns that target minorities in particular.

Given these threats to elections and democracy, preventing the spread of disinformation, ensuring effective content moderation and safeguarding people’s right to vote should be a key priority for governments and companies across the world.

Invisible labour, exploitation and the power imbalance 

While many of us think of the ultrarich, mostly white, “tech bros” of Silicon Valley when we imagine the people behind AI, the reality is very different. We cannot talk about the equality and human rights implications of AI without also addressing the abuse and exploitation rife within the industry itself. 

The AI machine is powered by a massive invisible labour force: millions of underpaid workers around the world, mostly in the global South, performing repetitive tasks under precarious and sometimes dangerous conditions. While Silicon Valley salaries run to six figures, many of these workers survive on as little as $2 an hour.

ChatGPT is an example of this – thousands of low-wage workers in the global South were subcontracted to filter hateful and toxic content out of the data that ChatGPT was trained on. These individuals were paid very little, exposed to incredibly traumatic content, and offered no mental health support.

This power imbalance – the concentration of resources and power in a few political and corporate hands on the one hand, and a massive, exploited workforce on the other – is a major threat when it comes to AI, technology and data. AI is currently being used mostly for surveillance, control and profit rather than for the public benefit. States and corporations are using technology that is both dangerous and extremely unfair, and that infringes on a number of rights, including the rights to privacy and freedom of expression.

AI advancement should be something that benefits everyone and advances the common good – the health and wellbeing of people and the planet. Shifting the power imbalance is vital if we are to achieve this. This includes ensuring that the industry is representative, safe and just. Challenging the big players through labour law and competition law, and advocating for the rights of the workers, is a good starting point. 

Protecting and promoting human rights in the age of AI  

Clearly, if the risks above go unchecked, AI will lead to discrimination, exacerbate existing structural inequalities, reinforce power imbalances and disadvantage the most marginalised in society.  

However, responsible and ethical use of AI can bring many benefits. If properly managed, AI advancement holds enormous potential for human rights, human development, and the common good. It can be used to combat disease, remove disabling barriers, and help to tackle climate change – to name just a few areas of life in which AI can support improvements.  

The OECD, UNESCO and the Council of Europe are among the international bodies that recognise the risks AI poses to our rights and freedoms, and that therefore place human rights and non-discrimination at the centre of their recommendations on AI.

However, it’s a different story in the UK. Government proposals to regulate AI currently fall short of what’s needed to protect people from these risks. In early 2023, the government published a white paper setting out its intended approach to regulating AI. It proposes to rely solely on existing regulators and the existing legal and regulatory frameworks to oversee AI. While ‘fairness’ is listed as one of the five principles to be considered when regulating AI, human rights and equality appear only under this principle rather than running as a common thread throughout – making them far less central to this approach than to those taken by the international bodies.

To properly address the risks of AI, careful oversight is needed – a robust regulatory framework, appropriate legislation, transparency, access to effective remedies, and the inclusion of civil society and human rights experts in the AI dialogue are all key to ensuring its responsible and ethical use. None of these measures will make the technology immune to bias or misuse, but they can go a long way towards mitigating its risks.

Our existing equality and human rights frameworks are an incredibly important part of this – they are well established, universal and apply to AI just as much as to any other area of life. The Human Rights Act and the public sector equality duty are particularly strong tools we can use here.

Equally Ours advocates for a rights-based approach to AI advancement, one that has equality and human rights as a central consideration across all stages of the AI lifecycle. While we don’t have a strand of work that focuses on AI specifically, it is closely linked to many areas of our work, as mentioned at the start. Protecting and advancing equality and human rights, and addressing structural and systemic inequalities, are the bedrock of Equally Ours’ activity. The risks that AI poses to these highlight the need for us to continue this work.  

AI advancement is also a powerful example of why it matters that our human rights standards can evolve rather than being fixed in time. Many of the UK government’s attacks on the Human Rights Act and the ECHR have centred on their status as ‘living instruments’ and on what it views as the Strasbourg court’s attempts to expand human rights law beyond the rights set out in the Convention. The living instrument doctrine allows courts to interpret human rights law in a way that reflects the modern day. In a rapidly evolving world – particularly when it comes to AI and technological developments – it is vital that the law protects the human rights we recognise now, rather than only those accepted in the 1950s. AI advancement is therefore a useful case study of the importance of the Human Rights Act and the ECHR.

AI is likely to pervade almost every domain of human activity, which is why we urge civil society to be part of the conversations surrounding AI, challenge its harmful impacts on human rights and equality, and call for an informed, transparent, rights-based approach to the development of an AI regulatory framework. Governments and tech corporations aren’t going to take steps to ensure safe, responsible and ethical AI advancement on their own initiative – we must demand it.  
