The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law is the first legally binding international treaty on this topic. It stipulates that parties to the Convention shall adopt and maintain measures to ensure that:
- the activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights; and
- AI systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including individuals’ fair access to and participation in public debate.
It also contains key principles applicable to the lifecycle of AI systems, namely:
- Human dignity and individual autonomy;
- Transparency and oversight;
- Accountability and responsibility;
- Equality and non-discrimination;
- Privacy and personal data protection;
- Reliability; and
- Safe innovation.
These principles all have a direct or indirect relation to the protection of human rights during the lifecycle of an AI system and should be included in appropriate measures adopted by the parties to the Convention.
The Blueprint for an AI Bill of Rights is not a legal document, but a framework for the development of policies and practices that protect human rights and democratic values in the deployment and governance of AI. Its name is somewhat misleading: it does not propose an actual AI Bill of Rights, but rather provides guidelines, based on five key principles, that should support policy and regulatory developments on AI to protect the public against harm. These principles are:
- Safe and effective systems;
- Protection against algorithmic discrimination;
- Data privacy;
- Notice and explanation; and
- Human alternatives, consideration and fallback.
The Blueprint nevertheless states clearly that this framework must be applied to all automated systems that could potentially impact individuals’ or communities’ exercise of:
- Civil rights, civil liberties and privacy;
- Equal opportunities; or
- Access to critical resources or services, such as healthcare.
For a clear stipulation of how human rights should be protected in the development and deployment of AI, it is necessary to turn to the EU AI Act, the first comprehensive law on AI in the world. It has an extensive focus on the protection of human rights, democracy and the rule of law, in line with the Joint European Declaration on Digital Rights and Principles for the Digital Decade (2023). The EU AI Act is based on seven non-binding ethical AI principles that should guide the development and deployment of AI systems, namely:
- Human agency and oversight;
- Technical robustness and safety;
- Privacy and data governance;
- Transparency;
- Diversity, non-discrimination and fairness;
- Societal and environmental well-being; and
- Accountability.
While there is no doubt that the continuously expanding scope of AI development brings huge benefits to society that contribute to human well-being, the risks and potential harms related to AI are also well known. It is because of these risks, and in support of safe and trustworthy AI, that clear rules protecting human rights must be stipulated and given effect. The fundamental approach of the EU AI Act is the protection of health, safety and fundamental rights, including democracy, the rule of law and environmental protection. Specific human rights protection is provided by stipulating that a deployer of a high-risk AI system must conduct a fundamental rights impact assessment (FRIA) and, where applicable, a data protection impact assessment (DPIA) in accordance with the General Data Protection Regulation (GDPR), prior to putting the system into use. A fundamental rights impact assessment in accordance with Art. 27 of the EU AI Act must inter alia indicate:
- The categories of natural persons and groups likely to be affected by the use of the high-risk AI system in the specific context;
- The specific risks of harm likely to have an impact on the identified categories of natural persons or groups of persons;
- A description of the implementation of human oversight measures; and
- A description of the risk mitigation measures to be taken in the case of materialisation of the identified risks.
Under the data governance requirements for developers of high-risk AI systems in Art. 10 of the EU AI Act, the adoption of appropriate data governance practices must include an examination of possible biases and of any potential negative impact on fundamental rights. These specific requirements relating to high-risk AI systems are clearly aimed at the protection of fundamental human rights.
Having regard to these AI policy developments and the specific determinations in the EU AI Act, the question is whether they provide adequate protection of human rights in the context of AI. It is argued that, although the above developments are to be applauded, they do not go far enough. Some existing human rights have been given more, or clearer, content in the context of AI, for example the right to privacy. It is also possible that new digital rights could be developed and thus also warrant protection, for example a right to protect one’s online identity, a right to cybersecurity, or a right of children to protection against manipulation and abuse online. Technological developments require a fresh look at the scope and content of human rights relating to the development and deployment of AI. There should be an international initiative, comparable to the Universal Declaration of Human Rights (1948), that provides clear recognition and protection of human rights in the context of AI and that has global application.