The European Commission’s Proposal for the First-Ever Binding Legislation on Artificial Intelligence – Landmark Rules with Far-Reaching Effects
On 21 April 2021, the European Commission presented its long-awaited proposal for the first-ever binding legislation on artificial intelligence (AI), which will have far-reaching effects on any organisation developing, deploying or using AI technologies. By introducing this legal framework, the Commission aims to guarantee the safety and fundamental rights of people and businesses in relation to AI systems placed on the EU market, while at the same time increasing AI uptake, investment and innovation in the EU.
Why is this of relevance?
The proposed Regulation on a European Approach for AI follows a risk-based approach. It includes a ban on specific AI systems posing an “unacceptable risk”, and introduces strict requirements and obligations for a larger number of “high-risk” AI systems. Compliance will be enforced by national market surveillance authorities, supported by a new European Artificial Intelligence Board, and the framework foresees hefty fines for infringements. The draft Regulation will have a significant impact on a range of sectors, including health, energy, transport, agriculture, tourism and cyber security, in which AI systems are being developed or used. Importantly, it will also cover AI systems placed on the EU market from third countries.
On the basis of the Commission’s proposal, the European Parliament and Member States will need to adopt a final text of the Regulation. The Regulation will then become directly applicable across the EU.
Key elements of the Draft Regulation on AI
- The objective is to introduce one set of rules for the placing of AI systems on the EU market, their deployment and their use in the EU.
- The draft legislation uses a broad definition of AI systems as “software that is developed with one or more of the techniques and approaches listed in Annex I [ed.: machine learning, logic- and knowledge-based approaches, and statistical approaches] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environment they interact with”.
- The Commission distinguishes between four levels of risk posed by AI systems:
- “Unacceptable risk” – AI systems regarded as a “clear threat to the safety, livelihoods and rights of people” will be prohibited. Examples are AI systems or applications that make use of “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm” (such as toys using voice assistance that encourage dangerous behaviour in children), as well as social scoring systems operated by governments.
- “High-risk” – AI systems in this category will need to fulfil specific, strict requirements and obligations for operators, including risk assessment and mitigation systems, high-quality data sets to minimise discrimination, logging to ensure traceability of results, documentation and user information, as well as human oversight, robustness, security and accuracy. The category of “high-risk” AI systems includes AI systems used in critical infrastructure (such as transport), education, safety components of products (e.g. AI in robot-assisted surgery), employment (e.g. CV-sorting software), essential private and public services (e.g. credit scoring), law enforcement, migration, asylum and border control management, and democratic processes. Remote biometric identification also falls within this category. Any use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is prohibited in principle, with narrow exceptions such as the prevention of an imminent terrorist attack or the search for the perpetrator or suspect of a serious criminal offence.
- “Limited risk” – AI systems in this category are subject to certain transparency obligations. This concerns AI systems interacting with humans, emotion recognition systems, biometric categorisation systems, as well as AI systems that generate or manipulate image, audio or video content. When using chatbots, for instance, users must be made aware that they are interacting with a machine.
- “Minimal risk” – Under the draft Regulation, the large majority of AI systems are considered to pose only “minimal risk” to citizens’ rights or safety. This category includes AI-enabled video games and spam filters. The planned legislation does not introduce additional rules for these systems, which can continue to be used freely.
- Compliance will be supervised by the national market surveillance authorities. A European Artificial Intelligence Board will be established to support implementation and drive the development of AI standards. The draft Regulation additionally encourages the adoption of voluntary codes of conduct for non-high-risk AI.
- The Commission proposes so-called regulatory sandboxes – controlled environments in which innovative AI systems can be developed and tested under regulatory oversight – to facilitate “responsible innovation”.
- Infringements of the new rules can result in significant fines. In particular, non-compliance with the ban on AI systems considered as posing an “unacceptable risk” or with the data governance and management requirements can result in administrative fines of up to EUR 30 million or up to 6% of a company’s total worldwide annual turnover, whichever is higher.
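- By way of illustration (using hypothetical figures that do not appear in the proposal): for a company with a total worldwide annual turnover of EUR 1 billion, the 6% ceiling would equate to a maximum fine of EUR 60 million; for a company with a turnover of EUR 100 million, the 6% ceiling would amount to only EUR 6 million, so the fixed ceiling of EUR 30 million would apply as the higher amount.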
Background
The draft AI Regulation forms part of a wider package of measures, including a new Machinery Products Regulation (Proposal for a Regulation on machinery products) that aims to adapt safety rules, as well as a new Coordinated Plan with the EU Member States to implement concrete actions.
Margrethe Vestager, the Commission’s Executive Vice-President for a Europe fit for the Digital Age, underlined the EU’s ambition to become a global standard-setter for AI – as has been the case with the EU’s General Data Protection Regulation (GDPR): “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.” In order to achieve the competitiveness that the EU – in some respects – still lacks compared to the US and China as global AI powerhouses, the Commission is convinced that it is putting in place “future-proof and innovation-friendly […] rules” that “intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”
Next steps
Under the ordinary legislative procedure, the European Parliament and Council (Member States) will next discuss the Commission’s legislative drafts. Following their adoption, both the AI Regulation and the Machinery Products Regulation will be directly applicable in the EU Member States. At the same time, Member States will be implementing the actions set out in the Coordinated Plan.
Given the prominence and reach of the file, it can be assumed that the legislative procedure towards the adoption of both Regulations will take a number of months, with many interested parties seeking to influence certain provisions. Organisations developing, deploying or using AI systems (broadly defined) in the EU will be affected by the new legislation, and should engage in the process to ensure that their interests and concerns are taken into account.