JAAG EXPLAINER N° 5

The EU AI Act – a model for controlling AI worldwide?

The European Union (EU) is close to adopting new legislation – the Artificial Intelligence (AI) Act – which will tighten control over the development and use of AI. It focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors, including finance.

Because the EU, along with the USA and China, is one of the big three powers in AI and other technologies, and because companies wishing to trade with the EU will have to respect the new law, the terms of the Act may well influence what happens in the rest of the world.

The Act was influenced by the European Commission’s High-Level Expert Group on Artificial Intelligence, which produced the Ethics Guidelines for Trustworthy AI. Based on ethical principles and human rights, the group defined seven requirements that an AI system should meet in order to be trustworthy:

  1. Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. Proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop and human-in-command approaches.

  2. Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible.

  3. Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.

  4. Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help, and AI systems and their decisions should be explained in a manner adapted to the stakeholders concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.

  5. Diversity, non-discrimination and fairness: unfair biases must be avoided. AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.

  6. Societal and environmental wellbeing: AI systems should benefit all human beings, including future generations. They should be sustainable and environmentally friendly, and their social and societal impact should be carefully considered. 

  7. Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. This includes auditability of algorithms, data and design processes.

The European Parliament and the Council of the European Union (comprising ministers from all Member States) have reached broad agreement in outline on the final text.

The Act will use a classification system to determine the level of risk that an AI technology could pose to someone’s health and safety or fundamental rights, as follows (a schematic sketch of the tiers appears after the list):

  • Minimal risk: The vast majority of AI systems fall into this category. Applications such as AI-enabled recommender systems or spam filters will not be subject to specific obligations, as they present only minimal or no risk to citizens' rights or safety. Companies may nevertheless voluntarily commit to additional codes of conduct for these systems.

  • High-risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.
    Examples of high-risk AI systems include certain critical infrastructure, for instance in the fields of water, gas and electricity supply; medical devices; systems that determine access to educational institutions or are used to recruit people; and certain systems used in law enforcement, border control, the administration of justice and democratic processes. Biometric identification, categorisation and emotion recognition systems are also considered high-risk.

  • Unacceptable risk: AI systems considered a clear threat to people’s fundamental rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will (such as toys that use voice assistance to encourage dangerous behaviour), systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used in the workplace, some systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).

  • Specific transparency risk: When employing AI systems such as chatbots, users should be made aware that they are interacting with a machine. Deepfakes and other AI-generated content will have to be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design their systems so that synthetic audio, video, text and image content is marked in a machine-readable format and is detectable as artificially generated or manipulated.
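
To make the tiered structure concrete, here is a minimal, purely illustrative sketch in Python; the tier names and the one-line obligation summaries paraphrase the descriptions above and are not the Act's legal taxonomy or wording.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described above (illustrative labels only)."""
    MINIMAL = "minimal"
    SPECIFIC_TRANSPARENCY = "specific transparency"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# One-line summaries paraphrased from the text above; not the Act's wording.
OBLIGATIONS = {
    RiskTier.MINIMAL: "No specific obligations; voluntary codes of conduct.",
    RiskTier.SPECIFIC_TRANSPARENCY: "Disclose AI use; label synthetic content.",
    RiskTier.HIGH: "Risk mitigation, data quality, logging, documentation, "
                   "human oversight, robustness and cybersecurity.",
    RiskTier.UNACCEPTABLE: "Prohibited outright.",
}

for tier in RiskTier:
    print(f"{tier.value}: {OBLIGATIONS[tier]}")
```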

Fines

Companies that do not comply with the rules will be fined. Fines will be up to €35 million or 7% of global annual turnover (whichever is higher) for violations of the banned AI applications, up to €15 million or 3% for violations of other obligations, and up to €7.5 million or 1.5% for supplying incorrect information. More proportionate caps are foreseen for administrative fines on SMEs and start-ups that infringe the Act.
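
As a purely illustrative sketch of how a "whichever is higher" cap works in practice (the tier figures are taken from the paragraph above; the tier labels and the helper function are hypothetical, not part of the Act):

```python
# Illustrative only: tier figures come from the paragraph above;
# the tier labels and this helper function are hypothetical.
FINE_TIERS = {
    "banned_practice": (35_000_000, 0.07),        # banned AI applications
    "other_obligation": (15_000_000, 0.03),       # other obligations
    "incorrect_information": (7_500_000, 0.015),  # supplying incorrect information
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the fine cap: the fixed amount or the percentage of
    global annual turnover, whichever is higher."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# A company with €2 billion turnover violating a banned practice:
# max(€35m, 7% of €2bn = €140m) -> €140 million.
print(max_fine("banned_practice", 2_000_000_000))
```

For a smaller company, say one with €100 million in turnover, 7% comes to €7 million, so the €35 million fixed amount would set the cap instead.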

General purpose AI

The Act introduces dedicated rules for general purpose AI models that will ensure transparency along the value chain. For very powerful models that could pose systemic risks, there will be additional binding obligations to manage risks, monitor serious incidents, evaluate models and conduct adversarial testing. These new obligations will be implemented through codes of practice developed by industry, the scientific community, civil society and other stakeholders together with the European Commission.

Governance

National market surveillance authorities will supervise the implementation of the new rules in each country, while a new European AI Office will ensure coordination at EU level. The AI Office will be the first body in the world to enforce binding rules on AI and is therefore expected to become an international reference point. For general purpose models, a scientific panel of independent experts will issue alerts on systemic risks and help to classify and test the models.

The Parliament and Council have reached provisional agreement on the text of the Act; in a series of technical meetings, government officials and lawmakers' aides are now hashing out the remaining details, such as the precise scope of the law and how it will work in practice.