by Luca Bertuzzi and Oliver Noyan
The European Commission believes that its proposed Artificial Intelligence Act should become the global standard if it is to be fully effective. The upcoming AI treaty that is being drafted by the Council of Europe might help the EU achieve just that.
In April the European Commission launched its proposal for an Artificial Intelligence Act (AIA). Structured around a risk-based approach, the regulation introduces tighter obligations in proportion to the potential impact of AI applications.
Commissioner Thierry Breton argued that “one should not underestimate the advantage of the EU being the first mover” and emphasised that the EU is the main “pacemaker” in regulating the use of AI on a global scale.
In a similar vein, the Commission’s director-general for communications networks, content and technology, Roberto Viola, said that “equilibrium is key to have a horizontal risk-based approach in which many voices are heard to avoid extremism and create rules that last. AI standardisation needs contributions from like-minded countries. I’m sure then the AI Act will become a global standard.”
For Viola, setting the gold standard for AI rules in the world is not just part of the Commission’s ambition to establish itself as a global tech regulator. It is a requirement for the AI Act to work in practice, otherwise “many of the things which are in the regulation will not function properly.”
He argued that Europe’s balanced legislative approach enables EU law to become an international point of reference for similar legislation, drawing the parallel with GDPR, the EU’s flagship data protection law. However, three years after its entry into force, the GDPR’s enforcement is still proving problematic.
France’s state secretary for digitalisation, Cedric O, is determined not to repeat the same mistakes, arguing that in the EU’s upcoming AI framework “we must learn from the insufficiencies of the GDPR and focus on harmonisation, flexibility and balance.”
The key challenge for the EU’s AI legislation is to ensure product safety and legal remedies without clipping the wings of innovation.
However, the Commission proposal on conformity assessment for high-risk applications is considered “concerning” by Brando Benifei, the European Parliament’s rapporteur on the file. Benifei noted that all but three types of high-risk applications can be launched on the market with just an internal assessment done by the developer.
“Seeing how impactful these use cases are to people’s lives and fundamental rights, I wonder if it is right to go down this path and only find out about defective or non-compliant systems through market checks, when it is too late and the damage has been done,” the Italian lawmaker added.
While the EU proposal focuses more on consumer protection, the Council of Europe (CoE), a human rights body, is working on an international agreement that would introduce safeguards for human rights, the rule of law and democratic functioning.
The treaty is bound to introduce strict rules for AI systems that might be at odds with human rights, including the much-discussed biometric recognition technologies.
“We need to look at the work of these organisations as complementary,” said Gregor Strojin, chair of the ad hoc committee that oversees the work in the Council of Europe.
Strojin also emphasised that the CoE is relying on the work of and input from other international organisations, such as UNESCO, the OECD and especially the EU, all of which have made the regulation and standardisation of AI one of their top priorities. He insisted that such cooperation should continue also in the implementation phase.
It is key that all the organisations that advocate for a human-centric approach to AI are “talking to each other”, said Michael O’Flaherty, director of the Vienna-based EU Fundamental Rights Agency (FRA).
While Commissioner Breton said none of the approaches of the OECD, the Council of Europe and UNESCO are “as advanced as what we are doing here in Europe”, he stressed that the European and international approaches go hand in hand to ensure that AI rules “reflect our European values.”
As a result, the AI treaty might provide a vehicle for the EU to establish its human-centric approach at the international level. The treaty might be legally binding for all 47 members of the Council of Europe and is being drafted in close collaboration with the United States, Japan, Mexico and Canada, all of which have observer status.
The official negotiations on the treaty are expected to start in May 2022.
*First published on www.euractiv.com