On 21 April 2021, the European Commission released its draft Regulation on Artificial Intelligence (AI) (the ‘Draft Regulation’).
Most of the requirements are in line with the recommendations of the European Commission’s High-Level Expert Group on AI and the proposals of the German Data Ethics Commission in 2019. Notably, the Draft Regulation, which was leaked ahead of its official release, confirms that a risk-based approach to regulating AI systems will be introduced, and that it will affect both developers (companies that ‘put systems onto the market’) and companies that use such systems (‘put systems into service or use’).
Risk-based regulation of AI
The classification of the ‘risk’ of a system will be made with respect to its purpose, its specific context and conditions of use, and its potential to cause ‘harm’, for example through the injury or death of a person, damage to property, or adverse effects on fundamental rights (such as the right to privacy or freedom of expression).
Specific consideration will also need to be given to the intensity of the harm, its capacity to affect large groups of people, the ability of individuals to opt out of the use of the system, and the imbalance of power between the organisation deploying the AI system and the individuals subject to it. Under this approach, the potential harms of an AI system should be weighed when deciding what legal or technical controls need to be in place around its deployment.
For example, in language similar to the GDPR, the Draft Regulation would require organisations deploying AI systems to:
- Carry out an ex-ante risk assessment of the system;
- Deploy risk control management measures “based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art”; and
- Carry out ongoing monitoring of the risks associated with the system.
There are also specific transparency obligations (similar to a GDPR privacy notice) covering the purpose, effects and data used in AI systems, as well as specific regulation of the quality, accuracy and possible biases of the data used in those systems.
High-risk prediction systems
On top of this, several ‘high risk’ use cases for AI are explicitly called out, such as recruitment systems, creditworthiness assessments, and systems involved in the prevention, detection and prosecution of crime. Examples of the latter include PredPol and Palantir’s tools, which law enforcement authorities in the UK and USA have used to identify ‘crime hotspots’.
Moreover, “systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people” are also included, which on a wide reading could cover the online advertising and targeting models used by Facebook, Google and other social media platforms, particularly where they are used for political marketing (cf. Cambridge Analytica). Such ‘high risk’ systems would, under the Draft Regulation, be subject to stricter requirements, including approval from supervisory authorities, which would have to consider the “likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities”, that is, the potential for bias or discrimination, before giving their approval.
Other uses would be prohibited altogether, such as the use of AI or facial recognition systems for mass and indiscriminate surveillance (e.g. communications surveillance), AI systems carrying out social scoring (as reportedly used by the Chinese authorities), and, subject to some exceptions, the use of biometric identification systems in publicly accessible spaces for law enforcement purposes.
Future Regulation?
If passed, the Draft Regulation would be a seismic step in data ethics and would go further than any other regime in the world in constraining the use of AI systems. It would demand attention not only from developers (e.g. Palantir, Alteryx, HireVue) but also from companies using AI in applications ranging from applicant scoring to online advertising and creditworthiness assessments.
As under the GDPR, fines of up to 4% of global annual turnover (or €20M, whichever is higher) are mooted for breaches of the Draft Regulation’s provisions. And because the rules would apply to companies based both in the EU and abroad, the UK and US would also come under pressure to produce similar regulation.
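For a concrete sense of that penalty ceiling, here is a minimal sketch of the ‘whichever is higher’ calculation, mirroring the GDPR-style cap mooted in the Draft Regulation (the function name and example turnover figures are illustrative assumptions, not taken from the text):

```python
# Illustrative sketch only: the 4% / EUR 20M figures are those mooted in the
# leaked Draft Regulation; the function name and example turnovers are
# hypothetical.

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Return the fine ceiling: the higher of 4% of turnover or EUR 20M."""
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

# A company with EUR 1bn turnover: 4% is EUR 40M, which exceeds the EUR 20M floor.
print(max_fine_eur(1_000_000_000))  # 40000000.0

# A company with EUR 100M turnover: 4% is only EUR 4M, so the EUR 20M floor applies.
print(max_fine_eur(100_000_000))    # 20000000.0
```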