JAAG EXPLAINER N° 4

How can the ethics of an AI system be assessed?

Because AI is used in many ways that affect us all, it has to be ethical. But how can this be assessed?

Ethical thinking

Organisations deploying AI need to be aware of the ethical aspects of AI and of how they have been addressed. Understanding the processes involved in creating the system can help in assessing it. Frameworks focussing on ethical AI have been developed, for example:

EU Ethics Guidelines for Trustworthy AI: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Z/Yen, Ethical Assessment: https://www.zyen.com/media/documents/ethical-assessment.pdf

The first question to consider is the broad impact of the system: who benefits, who loses, and what is the balance between the two? For example, shareholders may benefit while other people suffer, as the Cambridge Analytica scandal demonstrated.

Clarity about the purpose and objectives of the AI system is vital in formulating the requirements of the system, as well as in checking that those requirements can be met. Bear in mind that if some requirement cannot be met, that may itself have an ethical impact.

Data and Algorithms

It is the components of the system that have a ‘real-life’ effect - the data, and the algorithms that work on that data to produce an outcome. Making sure both are correct, in terms of what the data contains and what the algorithms do, is crucial.

  • Data: the data provides the foundation for an AI system. Where it comes from and whether it is correct, valid for the purpose and usable (in terms of permission to use it) are factors that need to be considered, not just in terms of system design, but also from an ethical and legal perspective.
    AI does not make ethical decisions, but the results it produces can have an ethical impact. There may be cases where data is missing. If this happens, ‘proxy’ data can sometimes be used: for example, if someone’s age is needed but not given, an ‘educated guess’ is an option, such as inferring the age of a university student who is known to be in their first year. However, this presumption does not always hold; it depends on context, and inaccuracies may be introduced. A guess about a first-year student’s age might differ depending on where they live, where the university is, and whether they are a mature student. In Scotland, for example, students typically start university a year earlier than in England, and in India students commence their studies between the ages of 17 and 19 (see the sketch after this list).

  • Algorithms: many types of algorithm are available, and some may be created specifically for a particular system. The choice of algorithm affects how the system works, so it is important to understand, in simple terms, what the algorithm is intended to do. Faulty inputs (data) and a failure to test will result in flawed algorithms.
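
As a concrete illustration of the proxy-data point above, the following minimal sketch (in Python; the starting-age values are illustrative, drawn from the examples given) shows how an ‘educated guess’ at a missing age might be made, and why it must be context-dependent:

    def estimate_age(declared_age, is_first_year, country):
        """Return the declared age if present; otherwise a context-dependent proxy."""
        if declared_age is not None:
            return declared_age
        if is_first_year:
            # Typical first-year starting ages; illustrative values only,
            # drawn from the examples in the text above.
            typical_start_age = {"England": 18, "Scotland": 17, "India": 18}
            return typical_start_age.get(country)  # None if the context is unknown
        return None  # recording 'unknown' is safer than guessing blindly

    print(estimate_age(None, True, "Scotland"))  # 17 (proxy guess)
    print(estimate_age(21, True, "England"))     # 21 (the declared age is used)
    print(estimate_age(None, True, "Brazil"))    # None (no safe proxy available)

Note that the sketch returns ‘unknown’ rather than a guess wherever the context gives no basis for one; recording that a value is a proxy, and why it was chosen, is part of the record-keeping discussed later in this explainer.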

Fit for the Context

AI systems are designed to produce results; some are designed to recommend actions - these are decision-support systems. There is a difference between a system that gives advice (a recommendation), a system that can only give a ‘yes/no’ answer, and a system that, in certain cases, decides and acts autonomously, for example autonomous cars and robots.

There are many contexts in which AI can be used, and the way that the system is designed needs to be ‘fit for the context’ (domain) that it applies to. It is important that whoever has designed the system has domain knowledge or, in the early stages of planning, has been advised by a person with domain knowledge.

Testing the system for correctness

This involves consideration of fairness/bias/unconscious bias, privacy/security, and the meeting of explainability and transparency requirements, as well as legal requirements. Specialised techniques are available for meeting these requirements. It is important to verify that these checks have taken place and that any unexpected issues have been satisfactorily addressed. As regards presenting the results, visual analytics can be applied to make sure that the visualisation of the results is honest and fully represents the outcome.
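
One simple check of this kind is to compare the rate of positive decisions across groups, sometimes called demographic parity. The following is a minimal sketch (in Python; the group names, decision data and tolerance are illustrative assumptions, not a prescribed test):

    def positive_rate(outcomes):
        """Share of positive ('yes') decisions in a list of 0/1 outcomes."""
        return sum(outcomes) / len(outcomes)

    def demographic_parity_difference(outcomes_by_group):
        """Largest gap in positive-decision rates across the groups."""
        rates = [positive_rate(o) for o in outcomes_by_group.values()]
        return max(rates) - min(rates)

    # Hypothetical decisions (1 = approved, 0 = rejected) for two groups.
    decisions = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
    }

    gap = demographic_parity_difference(decisions)
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.2:  # illustrative tolerance; the acceptable gap is context-specific
        print("Warning: decision rates differ notably between groups - investigate.")

A large gap is not proof of unfairness, but it is a signal that the system’s behaviour should be investigated and the finding recorded.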

Building Trust

Building trust in AI needs to encompass both the people and institutions behind the technology (those selling, making and using it) and the technology itself: the AI systems and solutions. See, for example, the EU Ethics Guidelines for Trustworthy AI, which underpin the subsequent European regulation of AI.

Certification

Certification against agreed standards can be a useful part of this process, as appropriate levels of checking can be carried out by trusted entities. AI increases the challenges firms face, including with regard to building trust.

Governance

Governance is an important aspect of AI, so organisations need to put in place, or extend, governance structures. These include:

  • risk assessment and mitigation;
  • policies on the legal and ethical use of AI that emphasise transparency, accountability and human-centred design throughout development and implementation;
  • employee education;
  • records of procedures and decisions relating to AI;
  • effective oversight of all of the above by senior management and ethical AI experts, together with remediation and redress.

This can be an extension of privacy and accountability management programmes that may already be deployed for data protection purposes, taking into account the issues raised above (see, for example, the AI Governance Center at iapp.org).

Further specific guidance for companies considering buying AI products has been produced by the World Economic Forum (WEF): “Adopting AI Responsibly: Guidelines for Procurement of AI Solutions by the Private Sector” (weforum.org).

When designing and running a governance system covering the development, purchase and/or deployment of AI systems, the following aspects should be addressed in order to make guidelines or ethical codes effective and to change the behaviour of professionals:

  • Education and empowerment.
    These are key aspects to address, so that employees know what they should do and feel empowered to actually do it. Relevant to this, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems’ mission is “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity”.

  • Promote loyalty to guidelines.
    Provide incentives and penalties so that the use of AI systems complies with the principles set out in the relevant ethical guidelines, and ensure that there are consequences for deviating from the code of ethics.

  • Avoid ethics washing.
    Ensure that when ethics is integrated into institutions it does not serve mainly as a marketing strategy, and that weaker, vaguer or minimal guidance is not selected; the guidance followed should reflect best practice and be actionable, monitored and enforced.

  • Make ethics a priority.
    Take measures to ensure that economic incentives do not override commitment to ethical principles and values. The purposes for which AI systems are developed and applied should accord with societal values and ethical principles such as beneficence, non-maleficence, justice and explicability.

  • Encourage ethically motivated efforts to improve AI systems.
    This includes being aware of how, in some fields, technical “fixes” can be used for specific problems such as accountability, privacy protection, anti-discrimination, safety and explainability. Provide information on appropriate choices, and record what has been used and why; information at a micro-ethics level can then be built up into a bigger picture.
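
As a minimal sketch of such record-keeping (in Python; the field names and the example entry are illustrative assumptions, not a prescribed format), each technical choice could be logged as follows:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class MitigationRecord:
        concern: str     # e.g. "privacy protection", "anti-discrimination"
        technique: str   # the technical "fix" chosen
        rationale: str   # why this technique was chosen
        decided_by: str  # person or role accountable for the choice
        decided_on: date = field(default_factory=date.today)

    ethics_log: list[MitigationRecord] = []
    ethics_log.append(MitigationRecord(
        concern="privacy protection",
        technique="k-anonymisation of training data",
        rationale="Dataset contains indirect identifiers.",
        decided_by="Data protection officer",
    ))

    for record in ethics_log:
        print(f"{record.decided_on}: {record.concern} -> {record.technique}")

Built up over time, such micro-level records provide the bigger picture that governance reviews and audits need.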

Conclusion

AI has the potential to further transform services, but this needs to be done in a way that takes into account the risks that AI innovation poses.