Ethical dilemmas posed by Artificial Intelligence

By Siani Morris

Long-term worries about the impact of AI are being raised in the media. Yuval Noah Harari warns, for example, about the creation of a completely new culture (because AI can create new cultural artefacts), fake intimacy and the destruction of democracy.[ii] Dr Geoffrey Hinton, often called the ‘godfather of AI’, left his job at Google[iii] because of concerns about bad actors and the existential risk of something more intelligent than us taking control. The head of OpenAI, the company that developed GPT, is raising the alarm more widely about the potential impacts of AI and calling for regulation.[iv]

Hype

It could be argued that most Big Tech leaders and AI developers say they’re aware of the risks associated with AI. For example, in March 2023, the Future of Life Institute published a letter[v] signed by 30,000 people, including Elon Musk[vi] [vii], Steve Wozniak (Apple co-founder), Dr Geoffrey Hinton (ex-Google AI expert) and Yuval Noah Harari (academic historian and bestselling author). It called on major AI developers to agree to a six-month pause in the training of any systems more powerful than GPT-4[viii] and to use that time to develop a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter referred to the risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.

So we might assume that Big Tech leaders and AI developers will make sure nothing goes wrong.

In fact, the situation is complex and nuanced. Many Big Tech leaders do indeed stress the danger of anthropomorphic machines taking control and destroying civilisation; but this can be seen as AI hype, designed to mask, and draw attention away from, the very real problems that people are experiencing here and now from automated systems.

Ethical shortcomings of AI systems

The (non-)ethical behaviour of AI-assisted machines has been in the news not only because AI developers are raising the alarm but also because of problematic cases that have come to light, such as bias in judicial decision-making[ix], bias in recruitment[x] and chatbots that started to use racist language[xi]. Companies like Amazon are taking control of their customers’ (and other people’s) data and using it. More fundamental questions have been asked about whether any AI could ever be trusted with decisions that impact human life.[xii] [xiii] [xiv]

So, the rapid development of AI systems brings with it hitherto unthought-of ethical dilemmas.

It might technically be possible to overcome some of these dilemmas by coding some of our ethical values into AI systems and hoping that this alters their behaviour as we would wish. We could perhaps imagine an evolutionary process in which ethical-like behaviour emerges under favourable conditions, or which favours machines that ‘want’ to continue to exist; a machine might even develop, from some kind of reward system, a new sub-goal of ensuring its own continued existence. But such goals might sometimes conflict with the best interests of humans. The ethical problem remains.
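
As a purely illustrative sketch of the first idea, a coded ethical value can be treated as a hard constraint that filters a machine's options before it optimises for reward. Every name here (Action, violates_ethical_rule, choose_action) is hypothetical rather than taken from any real system, and the tension described above persists, because the constraint and the reward can pull in different directions.

    # A toy sketch, not a real framework: an ethical value is hard-coded as a
    # constraint that removes options before the agent maximises its reward.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        expected_reward: float   # what the machine is optimising for
        harms_human: bool        # a (very simplistic) stand-in for an ethical property

    def violates_ethical_rule(action: Action) -> bool:
        # Coded value: never choose an action flagged as harming a human,
        # however much reward it promises.
        return action.harms_human

    def choose_action(candidates: list[Action]):
        permitted = [a for a in candidates if not violates_ethical_rule(a)]
        if not permitted:
            return None  # refuse to act rather than break the constraint
        return max(permitted, key=lambda a: a.expected_reward)

    if __name__ == "__main__":
        options = [
            Action("cut costs via an unsafe shortcut", expected_reward=10.0, harms_human=True),
            Action("cut costs safely", expected_reward=6.0, harms_human=False),
        ]
        # Picks the safe option despite its lower reward.
        print(choose_action(options))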

An AI ethical vacuum

AI may be deployed in ways that involve privacy and security breaches, discriminatory outcomes and an adverse impact on human autonomy. For example, training data for AI systems may be obtained via unauthorised means: not only is there a privacy issue, but creative rights can be exploited. There are currently legal challenges against OpenAI over the way it obtains and uses other people’s intellectual property. There are also issues with the truthfulness of the systems’ outputs: not only do algorithms replicate the biases of their training data, but falsehoods can be replicated and are difficult to correct.

A central problem with large language models like ChatGPT is that humans can treat these systems as some kind of oracle, mistake the system’s output for meaningful text, and act accordingly. Information given may not be true and reputations may be ruined. Relevant actors within the environment in which the AI is embedded should anticipate and take account of the social impacts of the technology after deployment, but unfortunately this is often not done in a timely way: consideration should be given to job disruption, social justice, sustainability and the effects on vulnerable groups, among other aspects. Many local councils in the UK have had to abandon the use of facial recognition systems because of ethical and legal issues.

How issues with the algorithms inside automated systems, whether AI systems or other types of automated system, are dealt with can be decisive in whether harm is avoided. For example, there is an ongoing miscarriage of justice, currently in the news, in which hundreds of sub-postmasters in the UK were wrongfully prosecuted for theft, false accounting and/or fraud. In 1996, International Computers Limited (ICL) began working on a computer accounting system, called Horizon, for the Post Office and the UK government. By 1999, ICL was part of Fujitsu and Horizon was introduced; however, it wrongly detected financial discrepancies at multiple Post Office branches. Investigations and legal cases have been held, with some compensation paid, but the matter is still not completely resolved. Meanwhile, many hundreds of sub-postmasters’ lives have been very adversely affected.

In June 2020, four working single mothers successfully defeated a court appeal by the Department for Work and Pensions, after suffering considerable hardship from loss of Universal Credit income caused by a design failure (related to pay date clashes) in the automated system used for Universal Credit, and by the refusal to fix it.

Also in 2020, the exam regulator Ofqual used an algorithm that downgraded almost 40% of the A-level grades assessed by teachers, which culminated in a government U-turn and the system being scrapped.

So, what should concern us is not so much the morality of AI as the morality of the companies and other entities that develop and control AI; it is they who need to be obliged to act ethically.

Instead of worrying about imaginary digital minds we should focus on the current exploitative practices of companies that develop AI systems which increase social inequality and centralise power.[xv]

We should be building machines that work for us, not adapting society to the wishes of those few elites currently driving the AI agenda.

Those most impacted by AI systems, who include the most vulnerable in society, should have their opinions taken into account. 

A better approach - ethical AI

It is vital that governments regulate to protect individuals’ rights and interests, and thereby shape the actions and choices of corporations.

Humans need to reach a shared understanding as far as is possible in a given context, based around social good and ethical standards.

A positive approach to ensuring that AI always operates in line with ethical standards would incorporate several elements. As a start, there would need to be a legal framework ensuring:

  • protection against exploitative working practices

  • strongly enforced ethical guidelines reflecting society’s priorities

  • transparency (including being transparent about the fact that AI is being used), so that people know what is going on and are able to make choices accordingly

  • accountability (of developers and deployers), and

  • humans in the loop, so that the meaning and impact of automated decisions can be assessed and reviewed and it is not just a case of the computer saying ‘no’, particularly where the decision would significantly or adversely affect someone (a minimal sketch of this follows the list).
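
Below is a minimal sketch of how the transparency, accountability and human-in-the-loop points might combine in practice: every automated decision is logged, and any decision flagged as significantly or adversely affecting someone is routed to a person for review rather than being applied automatically. All names and fields are hypothetical illustrations, not drawn from any real system.

    # A toy sketch of routing adverse automated decisions to a human reviewer.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        subject_id: str
        outcome: str        # e.g. "benefit_granted", "benefit_refused"
        adverse: bool       # would this significantly or adversely affect the person?
        rationale: str      # recorded so the decision can be explained and contested

    def apply_decision(decision: Decision, review_queue: list, audit_log: list) -> None:
        # Accountability: keep a record of every decision and its rationale.
        audit_log.append(f"{decision.subject_id}: {decision.outcome} ({decision.rationale})")
        if decision.adverse:
            # Human in the loop: do not let "the computer say no" on its own.
            review_queue.append(decision)
        else:
            print(f"Applied automatically: {decision.outcome} for {decision.subject_id}")

    if __name__ == "__main__":
        queue, log = [], []
        apply_decision(Decision("A123", "benefit_granted", adverse=False, rationale="criteria met"), queue, log)
        apply_decision(Decision("B456", "benefit_refused", adverse=True, rationale="income threshold"), queue, log)
        print(f"{len(queue)} decision(s) awaiting human review")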

But how could this be achieved?

Ethical guidelines

There are different approaches to ethics - rules-based versus outcomes-based - and there are cultural sensitivities to take into account.

AI ethical standards are still quite immature, but there is a plethora of AI ethical guidelines. To quote just one example: ACM[xvi] recently proposed nine principles for responsible algorithmic systems: legitimacy and competency, minimising harm, security and privacy, transparency, interpretability, maintainability, contestability, accountability and limiting environmental impacts.

Ethical ecosystem

For any standards to have effect, they would need to be overseen and enforced. The extent to which the ethical impact of an AI system is assessed, and the degree to which properties such as transparency are required, would need to be consistent with the AI system’s impact and with associated public policy.

Good AI governance should be deployed across organisations, with strong oversight.[xvii]

Other types of AI system, such as those that cause exploitation, should simply not be developed.

European legislators are finalising a new AI Act with embedded ethical protections; this will not apply to the UK, which has taken a much lighter-touch approach to regulation since Brexit. The proposed EU AI Act is designed to “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU”. It will “provide risk-based, legally binding rules for AI systems that are placed on the market or put into service in the Union”.

Ethical working practices

Ethical design should be employed at all stages of AI development and deployment[xviii]; for example, in the deployment of AI, any potentially harmful effects should be mitigated, so[xix] if jobs are lost through AI[xx], society would need to set up rewarding activities for those put out of work and protect their income.

Conclusion

There is huge potential for new AI tools that are truly good for humanity, especially in the medical field. But technology can be used for good or for ill. For example, though phone tapping exists, we probably wouldn’t want to do without the telephone; we rely on legislation based on ethical standards to protect us. The same could apply to AI.

See also the companion blog post “An ethical framework for AI and humanity?”


Notes

[i]             AI or Algorithms?

It is not just AI that is the problem here. When I recently asked ChatGPT for a definition of AI, I received the result ‘A field of computer science exploring the creation of intelligent machines capable of performing tasks that typically require human intelligence.’
But some of these issues are shared with a broader category: the use of algorithms to make decisions affecting people. In computing, an algorithm is a procedure for producing a defined result (e.g. performing a calculation), as distinct from a specific set of instructions (a program) implementing that algorithm.
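
A small example of the distinction, using Euclid's greatest-common-divisor procedure: the algorithm is the abstract recipe (repeatedly replace the larger number by the remainder until one number is zero), while the short program below is just one of many possible implementations of it.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: returns the greatest common divisor of a and b."""
        while b:
            a, b = b, a % b
        return a

    print(gcd(48, 36))  # 12 -- the defined result, however the algorithm is coded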

In JAAG we extend the term to encompass the whole system of procedures and rules followed by people as well as computers for a given purpose. We focus, in particular, on systems by which an individual is judged to merit receiving some kind of benefit, service, or privilege, or of being subject to some kind of charge or penalty. As harm can result from their deployment, even now, we should take care not to narrow down unduly the focus of consideration.

[ii]            Yuval Noah Harari argues that AI has hacked the operating system of human civilisation (economist.com)

[iii]           AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC News

[iv]            Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence - BBC News

[v]             Pause Giant AI Experiments: An Open Letter - Future of Life Institute

[vi]            Elon Musk among experts urging a halt to AI training - BBC News

[vii]           Musk co-founded OpenAI in 2015 but resigned from the board in 2018 and subsequently failed to take over the company when his bid was rejected. Now it is in partnership with Microsoft. Google announced a similar AI application (Bard), after ChatGPT was launched, fearing that ChatGPT could become an oracle threatening the need for their search engines.

[viii]          GPT-4 stands for Generative Pre-trained Transformer 4 and is a multimodal large language model (LLM) created by the startup OpenAIreleased on March 14, 2023, that is the next version of the previous (GPT-3.5 based) ChatGPT and can take images as well as text as input.

[ix]           https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

[x]            https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

[xi]           https://www.businessinsider.com/microsofts-tay-has-been-shut-down-after-it-became-racist-2016

[xii]           Krishnan, A., 2009. Killer robots: legality and ethicality of autonomous weapons. Ashgate Publishing Ltd.

[xiii]          Bjorgen, E. et al, 2018. Cake, death, and trolleys: dilemmas as benchmarks of ethical decision-making. In AAAI/ACM conference on artificial intelligence, ethics and society, pp. 23–29.

[xiv]          Misselhorn, C., 2018. Artificial morality. Concepts, issues and challenges. In Society, Vol. 55, No. 2, pp. 161–169.

[xv]           People like Timnit Gebru and Emily Bender are drawing attention to these issues. See for example their Stochastic Parrot paper On the Dangers of Stochastic Parrots | Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency and subsequent article Statement from the listed authors of Stochastic Parrots on the “AI pause” letter (dair-institute.org) and Twitter thread: @emilymbender@dair-community.social on Mastodon on Twitter: "Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with #Aihype. Here's a quick rundown. >>" / Twitter Also: A misleading open letter about sci-fi AI dangers ignores the real risks (substack.com)

[xvi]          This was based upon the Association for Computer Machinery's Code of Ethics and Professional Conduct - This is the code of ethics that has been put together in 1992 by the Association for Computer Machinery and updated in 2018. The Code is designed to inspire and guide the ethical conduct of all computing professionals, including current and aspiring practitioners, instructors, students, influencers, and anyone who uses computing technology in an impactful way. Additionally, the Code serves as a basis for remediation when violations occur. The Code includes principles formulated as statements of responsibility, based on the understanding that the public good is always the primary consideration.

[xvii]         cf. IAPP Privacy and AI Governance Report

[xviii]        See for example UK Government Data Ethics Framework (publishing.service.gov.uk)

[xix]          See for example: Understanding artificial intelligence ethics and safety - A guide for the responsible design and implementation of AI systems in the public sector by David Leslie from the Alan Turing Institute.

[xx]           Or rather when…for example, BT estimate up to a fifth of its jobs will be lost due to AI by the end of the decade: BT to cut 55,000 jobs with up to a fifth replaced by AI - BBC News
