JAAG EXPLAINER N° 1

Why should I be concerned about potential harms of AI?

AI is very different from other technologies we are used to, and brings with it risks and the potential for harm.

There is a long list of reasons why people are concerned about AI. They include breaches of privacy and security, discrimination, and negative impacts on human autonomy. For example, people are concerned about:

  • Unauthorised means of collecting, processing or disclosing personal data.
    This can harm a person’s privacy or infringe their creative rights.

  • Untruthful or inaccurate outputs from AI systems.
    Algorithms are ‘trained’ on huge sets of data. Very often that data is biased, inaccurate, or otherwise unrepresentative. Algorithms then replicate these biases or errors, which become difficult to detect and correct. With large language models (e.g. ChatGPT), humans may assume the system’s output is meaningful, truthful text and act accordingly, but the information given may not be true and reputations may be ruined.

  • Lack of explainability.
    For example, due to opaque machine-learning decision-making, or insufficient documentation of the factors a decision was based on.

  • Lack of traceability.
    i.e., inadequacies in providing a complete account of where the data came from, the processes used, and the artefacts involved in producing an AI model. Technology supply chains are becoming increasingly complex.

  • Insufficient understanding of the social impacts of technology post deployment.
    These can include job disruption, negative impacts on social justice, poor sustainability, and negative effects on vulnerable groups.

One recent example of the negative effects that AI systems can have on people’s lives is the case of Clearview AI, described here by the UK Information Commissioner’s Office (ICO):

“Clearview AI Inc. has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images. The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.”

Given the high number of UK internet and social media users, Clearview AI Inc’s database is likely to include a substantial amount of data on UK residents, gathered without their knowledge. Although Clearview AI Inc no longer offers its services to UK organisations, it has customers in other countries, so it is still using the personal data of UK residents.

AI can, of course, be very useful for good purposes; undertaking rapid assessment of medical images to speed up diagnoses is one example.

Source: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/