The risks AI potentially poses to individuals and society have been the subject of fierce debate. Many responses focus on the ethical implications of AI, emphasising matters such as transparency, privacy, and bias. This is true even when such responses are explicitly “based” on human rights provisions (for example, the European Commission’s ethics guidelines).
Many businesses have taken visible steps to tackle ethical issues arising from the AI systems they develop, for example by adopting a code of AI ethics. However, a less explored but nonetheless fundamental focus for businesses is the relationship between AI and human rights, as well as the responsibilities of businesses in this respect.
A growing number of scholars, practitioners, lawyers, and policymakers support a human rights approach to AI (see e.g. Slimmer AI’s White Paper on human rights and the responsibility of AI businesses). But why is this important, and what does it mean for businesses developing AI?
This introductory article, the first in a series on ‘Mapping Corporate Responsibility for AI and Human Rights’, considers the question:
“Why should we approach human-centric AI from a human rights (law) perspective, rather than only an ‘ethics’ perspective?”
Examples of the rocky relationship between AI and human rights abound. Different types of AI deployed in a wide variety of circumstances impact many human rights, including privacy, freedom of expression, non-discrimination, health care, and social security.
The dangers of AI to human rights are increasingly highlighted, whether the risk is to the right to non-discrimination through the use of facial recognition or to the right to a fair trial through the use of algorithmic decision-making. Responses to these dangers show that international human rights law is an appropriate framework for tackling such issues.
However, the tendency to address the dangers of AI using ethics is not without cause. Competitive pressures — and the need to find a balanced regulatory/legal framework to protect individuals without hindering innovation — play a role here.
Viewed in this light, more general ethics guidelines may be more palatable than the more concrete human rights standards found in international human rights law. Concepts within ethics are often viewed as vaguer, less uniformly accepted, and more open to variable interpretations and levels of protection (even though similar arguments have been levelled at human rights). While this might seem to favour a human rights approach, the apparent flexibility of ethics could also make it a more attractive focal point for businesses. Nevertheless, international law has much to offer in the context of AI.
The framework of international human rights law provides an internationally agreed-upon set of standards and obligations. These have been interpreted and applied in a wide range of contexts by monitoring and adjudicatory bodies, including, for example, the European Court of Human Rights. For instance, it provides much-needed standards on the balancing of competing interests: when this is allowed, to what extent, and under what circumstances.
This could concern, for example, conflicts that often arise in relation to AI between the interests of more than one individual or entity (e.g. privacy vs. freedom of expression), or between an individual’s human rights and the public interest (e.g. privacy and the prevention of disorder or crime, as in the case of Slimmer AI spin-out Sentinels).
Even though international human rights law is still evolving in the field of business, there is a solid basis provided by the United Nations Guiding Principles on Business and Human Rights (UNGPs). These principles are not legally binding, but were endorsed by the United Nations Human Rights Council in 2011 and lay down the existing framework for corporate human rights responsibility.
The UNGPs are very general and apply to all businesses, so further articulation of how they apply to companies developing AI is required. Nonetheless, the UNGPs provide concrete standards to follow and have received considerable support from the business community as well as from law- and policymakers. Ultimately, the UNGPs provide clarity as to what steps businesses should take to mitigate the adverse human rights impacts caused or contributed to by their AI.
International law also brings ongoing legal developments that further clarify the responsibilities of both states and businesses to ensure that the development of AI respects human rights.
With this in mind, it is advantageous for companies to stay ahead of the game and implement measures now to ensure that their AI does not harm others’ rights (and their own interests) in the future.
Human rights law contains the right of access to remedy. This is crucial for people who have suffered from the negative effects of AI and is key to accountability. Access to remedy is found in binding human rights instruments (see e.g. the Guide on Article 13 of the European Convention on Human Rights) and is a focal point of the UNGPs. Additionally, human rights law provides enforcement mechanisms through which individuals can directly claim violations of their rights, including, in some circumstances, harm caused by AI. This can have a significant impact on algorithmic accountability (a core issue of AI ethics), for which human rights is increasingly argued to be an appropriate framework.
All of this is not to say that AI ethics should be ignored, or that efforts based on ethics should be abandoned in the context of AI. Rather, human rights should be introduced and prioritised alongside AI ethics, allowing the two to complement each other and contribute to the highest possible standard of protection for individuals and society.
Follow Dr. Lottie Lane on LinkedIn and Twitter.