The European Commission on Wednesday proposed rules for regulating artificial intelligence. The policy proposal addresses, among other things, the use of facial recognition and ‘social scoring’.
In the policy proposal, use cases for artificial intelligence are divided into four risk categories, each with its own set of rules. Among these, the Commission defines a category of AI systems that would pose an ‘unacceptable’ risk to people’s fundamental rights and safety.
AI in that category would be banned outright if the proposal is passed. As an example, the EU cites ‘social scoring’ systems, with which governments can assign individual residents a score based on their behavior. The Commission also deems ‘toys that use voice assistance to encourage dangerous behavior in minors’ unacceptable.
Directly below that is a category of artificial intelligence that poses a high risk. In this risk group, the Commission mentions, among other things, AI used in ‘critical infrastructure’ such as public transport. The EU also mentions software used for employment, such as managing employees and sorting resumes in selection procedures, AI for robot-assisted surgery, and software for managing migration, asylum applications or border control.
Artificial intelligence used by authorities also falls under this category, as do all forms of biometric identification such as facial recognition. Their use in public spaces by authorities would in principle be prohibited, but there could be limited exceptions, for example to trace a missing child or to prevent a specific and imminent terrorist threat.
All aspects of AI systems classified as high risk would be tightly regulated. The datasets used to train them must be of ‘high quality’ to minimize the risk of discrimination. Among other requirements, the system’s activity must be logged to ensure the traceability of results, appropriate measures for human oversight must be in place, and the systems must be accompanied by detailed documentation so that authorities can assess whether they comply with the requirements.
Chatbots and spam filters
Third, the Commission names a group of AI systems that pose a limited risk. Here the European Commission refers to artificial intelligence subject to ‘transparency obligations’, such as chatbots. The risk of these is low, although “users should be aware that they are interacting with a machine so that they can make an informed decision to continue or end the conversation.”
The list ends with a group of AI systems that pose a minimal risk to humans, which would include ‘most AI systems’. As examples, the Commission mentions AI-enabled games and spam filters for email services. According to the policy proposal, the proposed regulations would not apply to these AI systems.
Implementation of the policy proposal
With regard to the implementation of these proposed rules, the European Commission proposes that national competent market surveillance authorities within the Member States enforce the new rules. A European Council for Artificial Intelligence should facilitate the implementation of the rules and drive the development of AI standards. In addition, voluntary codes of conduct would be proposed for high-risk AI, along with regulatory sandboxes to facilitate “responsible innovation” in AI.
“AI is a means, not an end. It has been around for decades, but has now reached new capabilities fueled by computing power,” said European Commissioner Thierry Breton in an explanation of the proposal. “Today’s proposals aim to strengthen Europe’s position as a global center of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”
It is not yet clear if and when the policy proposal will be implemented. The proposal has yet to be evaluated by the European Parliament and the Council of the European Union, so the EU will likely need a few more years to discuss and adopt the possible legislation.