by Axel Voss*
The European AI Office needs to grow to over 200 staff members to implement the rules for AI next year; otherwise, the Commission's lack of ambition will put citizens and businesses at risk.
In half a year, common EU rules will start to apply to the most advanced AI models – rules that are crucial to supporting downstream industry and protecting citizens against systemic risks.
The AI Office in the European Commission, however, is massively understaffed for the task, counting so far only around 85 staffers – of whom only around 30 are specifically tasked with implementing the AI Act.
In contrast, the UK appears to have grown its AI Safety Institute to over 150 staff focused solely on safeguarding advanced AI. A single country already has five times the EU's capacity for AI oversight – and does not even have a law to implement.
As other countries make it a leadership priority to navigate the world’s transition to ever more powerful AI, the Commission fails to live up to its promise to member states and the European Parliament.
Let us go back to December 2023, when the inter-institutional negotiations were concluded. The EU decided to react swiftly to early signs of a disruptive technology with potentially great benefits and harms by introducing rules for general-purpose AI models, the technology underlying ChatGPT & Co.
A key element of the political agreement between parliamentarians and the 27 member states was to hand the Commission the necessary enforcement powers, centralising AI expertise across the EU in a strong AI Office. While the office was established, it does not yet seem up to the task.
Unprecedented five years ahead
Europe’s new tech Commissioner, Henna Virkkunen, has recognised that the next five years will be challenging when it comes to AI, as the technology is developing very fast, and has noted that Europe is all too dependent on third countries.
Indeed, fuelled by billion-dollar investments and strategic partnerships, a handful of the wealthiest companies outside the EU are locked in a race. They are pushing the boundaries of general AI to match or exceed human abilities in most cognitive work.
Impressed by early advances, millions of end-users have adopted foreign general-purpose AI models, and Europe’s businesses are integrating them across industries.
Europe bears the cost of these dynamics. Various general-purpose AI models are not sufficiently safe. A lack of transparency puts the burden on downstream EU companies trying to responsibly build upon black-box models. A lack of guardrails poses systemic risks to the entire EU.
Some of the most renowned experts are constantly warning of severe harms, including lower barriers to biological weapons development, loss of control over autonomous AI, major accidents in critical sectors, and harmful discrimination and disinformation at scale.
In need of serious ambition
One would assume that the Commission is aware of the immense capacity needed to navigate Europe through unprecedented AI advancements. But the reality is a lack of ambition.
While the AI Office is credited with setting a global standard for safe AI innovation by detailing the rules in the first General-Purpose AI Code of Practice, only a small team is dedicated to the process.
Delays are also accumulating on other deliverables. Stakeholders have been waiting for months for a template outlining how AI training content should be publicly reported. And on AI use cases posing unacceptable risks, the Commission launched a public consultation only three months before those rules apply.
Eighty additional staff were promised in December 2023 for the AI Act’s implementation. While still insufficient, that would have been a start.
Yet, the staff count seems to be spread across the AI Office, covering not only the AI Act but also international coordination, AI for societal good, and so on – important tasks that merit their own dedicated resources.
Only last week, the AI Office published the first hiring call for legal and policy talent, following the tech specialist call from March. It remains unclear how many posts will be filled.
Commission leadership at the highest level should be serious about protecting EU citizens against severe harm and fostering a single market where we can trust what AI has to offer. The AI Office should be equipped not only with legal enforcement powers but also with all the resources needed to implement the rules effectively.
Commissioner Henna Virkkunen has promised that the Digital Services Act enforcement team will reach 200 staff members by the end of next year.
The Commission should demonstrate the same ambition for the much-praised landmark rules for AI. By the end of next year, the AI Office Units A2 and A3 should have over 200 staff.
Most capacity will be needed to govern the most advanced models. After all, keeping pace with the technology requires expertise across all systemic risks, market developments and legal dimensions.
*MEP since 2009, shadow rapporteur for the AI Act and rapporteur for the AI Liability Directive last term, he is currently a member of the Committee on Legal Affairs (JURI) and a substitute member of the Committee on Internal Market and Consumer Protection (IMCO) in the European Parliament.
**First published in Euractiv.com