by Julia Tar
The European Commission requested information on Thursday (14 March) from nine big tech platforms on their use of targeted ads and generative artificial intelligence (AI) to gauge compliance with the Digital Services Act (DSA).
The Commission asked Bing, Google Search, Facebook, Instagram, Snapchat, TikTok, YouTube and X to provide details on how they are addressing risks associated with generative AI, including risks to elections. It also asked LinkedIn to explain how it complies with the DSA's prohibition on targeting ads based on sensitive personal data.
The DSA is “now running at full speed”, Commissioner for Internal Market Thierry Breton posted on X on Thursday.
The DSA, which became fully applicable on 17 February, imposes specific content moderation obligations on platforms whose monthly users exceed 10% of the EU population, roughly 45 million people. These platforms are deemed to pose a systemic risk to society and must therefore comply with stricter rules, including transparency and risk management obligations.
The EU executive last year released a list of such very large online platforms (VLOPs) and very large search engines (VLOSEs), and has updated it since. All nine platforms under the microscope as of today are on that list.
Targeted ads on LinkedIn
As a VLOP, LinkedIn must ensure users can easily identify key details about the advertisements they see, and it is prohibited from using sensitive personal data, such as sexual orientation, political beliefs, or race, for targeted advertising.
The Commission's inquiry into LinkedIn stems in part from a complaint lodged with the Commission on 26 February by civil society organisations including EDRi, Global Witness, Gesellschaft für Freiheitsrechte, and Bits of Freedom. The organisations raised concerns that LinkedIn may be violating the ad targeting restrictions set out in the DSA.
A LinkedIn spokesperson told Euractiv that the company is compliant with the DSA, including provisions on targeted ads. “We look forward to cooperating with the Commission on this matter,” the spokesperson added.
Other platforms
The Commission is requesting details and documents from the other eight platforms and search engines on risk assessments and mitigation measures concerning generative AI’s impact on electoral processes, illegal content dissemination, fundamental rights, gender-based violence, minors’ protection, mental well-being, personal data protection, consumer protection, and intellectual property.
The inquiries pertain to both the dissemination and creation of generative AI content.
Generative AI is also highlighted as a risk in the Commission's draft guidelines on the integrity of electoral processes, which aim to provide VLOPs and VLOSEs with best practices and mitigation measures, including ones specific to generative AI-related risks.
A Commission representative said at a press briefing on Thursday that the Microsoft-led pledge to combat deceptive use of AI in 2024 elections, announced at the Munich Security Conference, "is welcome, but we in Europe [have] the privilege of having a toolbox with the new rules, in particular with the DSA, that we can fully use to ensure election integrity".
A request for information is an investigative measure and does not predetermine any subsequent actions the Commission may undertake. However, fines can be imposed for providing inaccurate, incomplete, or deceptive information in response to such requests.
The companies have until 5 April to submit data about how generative AI might affect electoral processes. Replies on other matters can be submitted by 26 April.
Separately, the Commission also launched an investigation into whether the online commerce site AliExpress violated the DSA, including through its use of recommendation algorithms.
*first published in: Euractiv.com