JAAG EXPLAINER N° 3

What ethical principles should apply to AI?

Because AI is used in many ways that affect us all, it must be developed and used ethically.

In 2021 the European Union used the terms “ethics by design and ethics of use” in a guidance note for research and innovation projects, with the aim of creating an “ethically-focussed approach” to the design and deployment of AI systems. The ethical principles that the EU document puts forward to researchers as a “must have” for the development and use of AI systems are:

  • respect for human agency;

  • privacy, personal data protection and data governance;

  • fairness;

  • individual, social, and environmental well-being;

  • transparency;

  • accountability and oversight.

Organisations that represent technology professionals, such as the British Computer Society (UK), the ACM and IEEE (US) and IFIP (the International Federation for Information Processing), offer guidance specific to that sector, focussing on the responsibilities of IT professionals rather than on the technology itself. The objective is that the principles result in technologies that do not harm people and society, for example:

  • ensure that the public good is the central concern;

  • avoid harm;

  • be honest and trustworthy;

  • be fair and take action not to discriminate;

  • respect privacy;

  • honour confidentiality.

Many of these principles cover fundamental human rights. Another analysis of principles for AI in society[2] has settled on five unifying principles:

  • Beneficence (social good, including promoting wellbeing, preserving dignity and sustaining the planet).

  • Non-maleficence (not doing harm).

  • Autonomy (the power to make, or delegate, decisions).

  • Justice (promoting prosperity, preserving solidarity and avoiding unfairness).

  • Explicability (transparent processes, capabilities and purpose of AI systems being openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected).

Guidelines

Many different AI guidelines have been produced. Of particular note are the following:

The EU Ethics Guidelines for Trustworthy AI, which underpin the European regulation of AI, say that users, developers and deployers of AI should:

  1. Develop, deploy and use AI systems in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness and explicability, and acknowledge and address the potential tensions between these principles.

  2. Pay attention to vulnerable groups, persons with disabilities and others that have historically been disadvantaged or are at risk of exclusion, and to asymmetries of power or information, such as between employers and workers, or between businesses and consumers.

  3. Acknowledge that, while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have negative impacts, including impacts which may be difficult to anticipate, identify or measure, for example on democracy, the rule of law and distributive justice, or on the human mind itself. Adopt adequate measures to mitigate these risks when appropriate, and proportionately to the magnitude of the risk.

High-level guidelines such as these still need further elucidation of how they map down to the lower-level techniques that fulfil them. Nevertheless, one of the most mature sets of guidance for AI developers to date comes from the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which has produced several versions of its Ethically Aligned Design document. This encourages technologists to prioritise ethical considerations in the creation of autonomous and intelligent technologies, and considers from a ‘neoclassical’ engineering perspective how both classical Western and non-Western ethical approaches may be taken into account to benefit AI development and deployment. More than 1,000 people contributed to the resulting documents, which set out over 100 ethical issues and recommendations, including:

  • encouraging human well-being to be central to an ethical approach when developing AI systems

  • providing well-being metrics that allow the benefits from technological progress to be more comprehensively evaluated

  • embedding values into autonomous and intelligent systems, dependent upon the specific norms of the community in which they are to be deployed

  • clarifying how autonomous and intelligent systems that participate in or facilitate human society should not cause harm by either amplifying or dampening human emotional experience

  • highlighting the importance of transparency and explainability, whilst proposing and developing related mechanisms and measures.

In addition, standardisation in AI ethics is still relatively immature, although the IEEE and others have produced some standards in this area.

Sources and further reading:

https://www.acm.org/code-of-ethics
https://www.ipthree.org/wp-content/uploads/IFIP-Code-of-Ethics.pdf
https://www.bcs.org/articles-opinion-and-research/what-are-ethics-in-ai/
https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v2.pdf
Floridi, L. and Cowls, J., “A Unified Framework of Five Principles for AI in Society”, Harvard Data Science Review, Issue 1.1, Summer 2019.