
AI Act: MEPs extend ban on social scoring, reduce AI Office role

The ban on social scoring has been extended to private companies, regulatory sandboxes could be used to demonstrate compliance, and the AI Office’s role has been downsized in a whopping new set of compromise amendments to the upcoming AI Act

By: EBR - Posted: Tuesday, February 28, 2023


by Luca Bertuzzi


On 15 February, EU lawmakers met for a ‘marathon’ negotiation session on the act, a landmark legislative proposal to regulate Artificial Intelligence based on its potential risks. While no overall political agreement was reached, MEPs provided extensive feedback.

This feedback was reflected in the revision of almost all the batches of compromise amendments that the European Parliament’s co-rapporteurs Brando Benifei and Dragoș Tudorache shared last Friday (24 February), which were obtained by EURACTIV.

Prohibited practices

The AI Act prohibits specific AI uses that pose too great a risk. This politically sensitive category includes social scoring, a pervasive population control system employed in parts of China. The ban on social scoring has been extended to include private entities.

Notes in the margin of the document anticipate that the bans on biometric categorisation and subliminal techniques will be elaborated in the text’s preamble, particularly to specify, in the case of psychological manipulation, its relation to advertising.

Governance and enforcement

The role of the AI Office, an EU body proposed to centralise the regulation’s enforcement, has been significantly downsized, possibly to address concerns about a lack of resources. The AI Office will support joint investigations.

However, in the most extreme cases, where a non-compliant AI system leads to significant risks at the national level, the Commission will play a vital role, consulting with the authorities involved and issuing a binding decision in case of disagreement.

Nevertheless, if an AI application poses a severe risk despite complying with the EU’s AI rulebook, that role would fall to the AI Office. This inconsistency is likely the result of the political negotiations.

The AI Office has been tasked with collecting expertise and sharing best practices among EU countries, including by setting up a European pool of experts, an idea that originates in the data protection field and was already taken on board in the EU Council’s text.

The EU lawmakers also propose allowing representative actions to be brought for breaches of the AI regulation.

Regulatory sandboxes

New wording has been added requiring the authorities establishing the sandboxes to guide and supervise developers of high-risk AI systems so that they can be presumed to comply with the AI Act when they exit the sandbox.

High-risk obligations

The testing of high-risk AI systems is now required to consider the intended purpose and the reasonably foreseeable misuse. The categories that should be given particular consideration in assessing risk have been limited to vulnerable groups and children.

The provision requiring AI developers to verify that the datasets used to train their models were legally obtained has been removed. This provision would have affected large language models like ChatGPT, which are trained by scraping massive amounts of data from the internet.

Still, a note in the margin of the text states that the preamble will specify that these data collection processes have to be in line with trade secrets, Intellectual Property and data protection rules. The reference to IP rights seems intended to protect the creative industry, whose artwork is being used to feed generative AI.

References to the principles of data minimisation and data protection by design and default have been removed from the data governance requirements. The technical documentation to be provided has been slightly simplified.

Regarding quality control, the compromises clarify that AI developers that already have a quality management system in place in compliance with international standards can adapt their existing systems to the AI regulation’s requirements.

Scope

The compromise clarifies that open source AI models do not fall in the scope of the regulation, except if they are placed on the market or put into service as part of a larger high-risk system, a prohibited practice or a system producing deep fakes.

Fundamental rights impact assessment

The co-rapporteurs’ proposal to introduce a fundamental rights impact assessment for AI uses that fall under areas deemed high-risk has been maintained. However, the minimum elements were slightly reduced, and a carve-out for systems that manage critical infrastructure was added.

Conformity assessment recognition

AI solutions, especially those that entail a significant level of risk, will have to undergo a conformity assessment procedure. EU lawmakers want the Commission to establish mutual recognition agreements with foreign countries with comparable conformity assessment levels.

Safety component

The AI systems that act as safety components of products covered by EU legislation, like toys and industrial machinery, are deemed at high risk of causing harm. The definition of safety components was amended to limit it to preventing risks to individuals’ health and safety.

Technical standards

The leading MEPs want the Commission to issue common specifications concerning the requirements for high-risk systems related to protecting fundamental rights. These common specifications would be repealed once included in the relevant technical standards.

*first published in: Euractiv.com
