
AI rules must help increase public trust

EU regulation on artificial intelligence must be designed to increase user trust, while unlocking innovation

By: EBR - Posted: Wednesday, April 21, 2021


by Guido Lobrano*

EU regulation on artificial intelligence must be designed to increase user trust, while unlocking innovation, writes Guido Lobrano.

The global conversation around artificial intelligence is largely dominated by opinions on how governments should eliminate all possible risks associated with this disruptive technology.

Of course, eliminating every risk is impossible, just as it is impossible to live in a zero-risk society. More importantly, this approach carries a risk of its own: that of underestimating the downsides of reducing, slowing down or complicating the adoption of AI technology that would greatly benefit European citizens.

Against this background and amid competition between the US and China, the European Union is nurturing its own global ambitions, and the European Commission will shortly set out trendsetting plans to regulate AI’s use.

In last year’s White Paper, the Commission itself recognised AI’s wide remit, including in terms of impact and risk, with a focus on high-risk AI that is widely expected to be reiterated in the upcoming regulation. This is certainly the right move – one that many legislators around the world will hopefully follow.

However, while there is wide acceptance of the criteria that in principle define high-risk AI uses, context is key. The EU’s “human-centric” approach promotes the ethical and responsible deployment of AI, but it is crucial that the new EU proposals do not translate the risk criteria into blanket classifications of entire categories of AI uses as high risk.

This would disproportionately extend obligations meant to be the exception to uses that in fact pose little risk to fundamental rights. Moreover, many uses are already subject to sectoral regulation, which varies from country to country.

The global dimension is just one of many challenges in regulating a subject as complex as AI, and it calls for a global conversation and response.

While the EU may well be the first to regulate AI, others will likely follow suit. Any new rules on AI need to be considered in a global context, as bodies as diverse as the G7 and G20, the World Economic Forum and OECD as well as UNESCO, have underlined in recent years.

After all, technology is developed globally through complex supply chains, making global convergence and alignment a strong enabler for innovation.

Benefits for all

Europe already acknowledges the benefits that AI brings: not just making our lives easier but helping to solve societal challenges such as treating chronic diseases or tackling climate change.

The coronavirus pandemic makes this role even more critical: AI is helping to provide innovative solutions to health problems, as well as ways out of the economic crisis caused by COVID-19.

This is key. If the EU were to take blanket regulatory measures to manage AI, it could hamper the benefits of innovation in fields as diverse as healthcare, agriculture, construction, personal mobility and financial services. At the same time, we have to recognise the novel challenges that emerge with a rapidly developing technology such as AI.

The crucial task for policymakers is thus to manage these potential risks while enabling steady AI development and uptake. What matters here is a nuanced, flexible but effective approach to regulation, including some scope for self-regulation via agreed benchmarks, and mechanisms for facilitating regulatory compatibility across borders, including through mutual recognition.

Strong safeguards for fundamental rights and upholding EU values must be anchored in a framework that encourages innovation and enables responsible rollout of the technology globally. That is why the EU’s stated goal of regulating only “high-risk” AI applications is welcome.

Public good

If AI is to be a tool for genuine public good, then the public must understand it. As well as promoting transparency and explainability, national governments and international bodies like the EU should be investing in skills. Not so we know the minutiae of how every AI application works but so we can trust, based on evidence, that its impact is positive.

*Vice-President for Europe of the Information Technology Industry Council
**first published in: www.euractiv.com
