The European Commission could impose a temporary ban on facial recognition technologies in both the public and private sectors, according to a draft white paper on artificial intelligence.
The disclosure of the Commission’s draft follows a period of public debate on how to address the future challenges of artificial intelligence.
The use of facial recognition raised serious concerns in Europe last year, after the Swedish data protection authority fined a municipality 20,000 euros for using the technology to monitor school attendance. Meanwhile, the French data protection authority, the CNIL, found that facial recognition violated the consent rules of the General Data Protection Regulation (GDPR).
If the Commission’s plans come to fruition, several projects launched by member states would have to be halted, such as Germany’s plan to deploy automatic facial recognition at 134 train stations and 14 airports. France likewise intends to establish a legal framework allowing video surveillance systems to perform facial recognition.
The Commission’s document, which provides an overview of proposals for developing a European approach to artificial intelligence (AI), stipulates that a future regulatory framework could “include a temporary ban on the use of facial recognition techniques in public spaces.”
The text adds that “the use of facial recognition techniques by public or private actors in public spaces would be prohibited for a defined period (e.g., 3 to 5 years), during which a rigorous methodology to assess the impact of this technology and potential risk management measures could be identified and developed.”
Five Regulatory Options
Beyond the issue of facial recognition, the first draft of the white paper, whose final version is expected to be published in February by the Commission, presents five options for regulating artificial intelligence at the European level.
The five regulatory options examined in the document are: (1) voluntary labeling; (2) sectoral requirements for public administration and facial recognition; (3) mandatory risk-based requirements for high-risk applications; (4) safety and liability; and (5) governance.
A voluntary labeling mechanism could take the form of a legal instrument allowing developers to “choose to comply voluntarily with the standards of ethical and trustworthy artificial intelligence.” Developers whose compliance is verified would be awarded an “ethical or trustworthy artificial intelligence” label, whose conditions would become binding once obtained.
The second option addresses two matters of particular public concern: the use of artificial intelligence by public authorities, and the use of facial recognition technologies in general. On the former, the document indicates that the EU could adopt an approach similar to Canada’s Directive on Automated Decision-Making, which establishes minimum standards for government departments wishing to use an automated decision system.
Regarding facial recognition, the Commission’s document highlights the provisions of the GDPR, which give citizens “the right not to be subject to a decision based solely on automated processing, including profiling.”
Under the third option, legally binding instruments would apply only to “high-risk applications of artificial intelligence.” The document states that “this risk-based approach would focus on situations where the population is in danger or when an important legal interest is at stake.”
Healthcare, transportation, law enforcement, and the judiciary are all potentially high-risk sectors, according to the document. The Commission adds that for an application to be considered “high-risk,” it must meet one of the following two criteria: fall within a high-risk sector or have potential legal ramifications and pose “a risk of injury, death, or significant material damage to the individual.”
The fourth option covers safety and liability issues that could emerge as artificial intelligence develops. It suggests that “targeted changes” could be made to EU legislation on safety and liability, including the General Product Safety Directive, the Machinery Directive, the Radio Equipment Directive, and the Product Liability Directive.
According to the document, risks not covered by existing legislation include “cyber threats, risks to personal security, privacy, and the protection of personal data.” These gaps could be addressed through future amendments.
Regarding liability, “adjustments may be necessary to clarify the responsibilities of AI developers and distinguish them from those of product producers.” The scope of the legislation could also be modified to determine whether AI systems should be considered “products.”
Concerning the fifth option, which addresses governance, the Commission emphasizes that an effective enforcement mechanism is essential, which implies a strong system of public oversight involving national authorities. Cooperation among these national authorities would also need to be promoted, the document states.
According to the document, the approach most likely to be formally adopted is a combination of options 3, 4, and 5: “The Commission could consider combining a horizontal instrument defining transparency and responsibility requirements and also covering governance, with targeted changes to existing Community legislation on safety and liability.”