by Ben Wagner, Ioanna Noula and Velislava Hillman*
As education becomes increasingly reliant on technology, the need for regulation and oversight of the EdTech industry grows more urgent, and Europe should lead the efforts to protect children’s data and futures, write Velislava Hillman, Ioanna Noula, and Ben Wagner.
The education sector is being transformed and, at the same time, is becoming increasingly dependent on advanced digital products and services. The pandemic response has accelerated and exemplified this dependency, with education all around the world going online overnight.
Little-understood digital education platforms and digital services supplied by private businesses are being adopted, with minimal consideration for the long-term impact on children, education and society.
Despite the crucial role of EdTech services and the reliance on the private sector for their supply, there is little oversight or dedicated regulation to protect children’s data, which may result in significant long-term risks for society at large.
The business model of EdTech providers depends on harvesting vast quantities of granular data generated and collected every day, enabling constant ‘dataveillance’, behavioural control and, ultimately, total loss of privacy.
Advanced EdTech systems automate services ranging from curriculum design and admissions decisions to course scheduling and assessment. The impact of these processes extends to the employment market, health and social services. If left unchecked, these systems could lead to automated discrimination and human rights abuses from the cradle to the grave.
Children’s ‘right to future tense’
Within this new educational ecosystem, the majority of end-users cannot consent, and yet little attention has been paid to the regulation of the EdTech industry in recent pieces of European legislation.
This needs to change swiftly to safeguard children’s rights and wellbeing. Digital systems are being introduced, institutionalised and legitimised as frameworks for everyday decision-making across the spectrum of human activity. Who gets the job, who can take out a loan, who falls under what kind of pension scheme may all be decided by an algorithm drawing on your childhood data – over which you have little control.
These systems may enable inclusivity and automate various arduous processes, but the reliance of education on digital services will, if left unchecked, lead to “automating inequality”.
Education data collected through networked devices and applications follow students for the rest of their lives, yet the EU Digital Services Act (DSA) omits any mention of children or education.
If Europe is to maintain its global leadership in regulation, then it should frame its legislative packages so that they prioritise human choice (for instance, children’s right to opt out of data-collecting services while remaining able to take an assessment).
The EU’s Artificial Intelligence Act (AIA) is an opportunity to improve the deployment standards of EdTech. The AIA sets red lines on the usage of AI and classifies EdTech as a high-risk use of AI, but its current requirements are far more limited than those included in a leaked AIA draft from January 2021, which went much further.
The current AIA draft does not set requirements for independent audits, which would deliver accountability for EdTech. The AIA’s existing provisions focus primarily on transparency requirements for high-risk AI. Yet experience with social media platforms shows the failure of this approach.
Not only are transparency reports mostly voluntary, but there is also no consistency or rationale provided about the data being shared. The reports are usually not communicated in ways that make them comprehensible to the general public, and there is no way to verify their accuracy. External oversight and greater accountability for EdTech are urgently needed.
Defending the citizen
Of course, the responsibility for regulating EdTech does not solely lie with the EU. National governments also have a considerable role to play. In particular, considerable improvements are urgently needed in the way EdTech is procured by public authorities.
An interesting example here is the procurement rules in Scotland, where it is data controllers and other local authorities, not the central government, who set the rules. These rules do not allow student assessment data to be used to create league tables or to compare schools – that is, for a purpose different from the one for which the data were originally collected.
Accountability mechanisms will enable society to understand the values and norms driving EdTech businesses; they will help safeguard children’s best interests and demonstrate real opt-out options and choices for young people.
Appointing a national commissioner, an EU High Level Group or a similar institution to ensure that children’s rights are systematically considered in all legislative measures would be a powerful step in the right direction.
Responsibility for children’s rights cannot be siloed in, or fragmented across, individual legislative packages. A holistic approach to children’s rights in EdTech is needed at both EU and national levels, with the key challenges addressed in each legislative package.
With children being the end-users of EdTech, it is imperative to produce regulatory frameworks that will protect and empower them against the backdrop of a rapid, black-boxed transformation.
We must ensure that children’s data do not become a tradable commodity. In the end, it is society that will bear the burden of inattention today.
*Ben Wagner is Assistant Professor at the Delft University of Technology and a visiting researcher at the Human Centred Computing Group, University of Oxford; Ioanna Noula is head of research & development and co-founder of the Internet Commission; Velislava Hillman is a visiting fellow in education technologies, children and youth at the London School of Economics & Political Science.
**First published in: www.euractiv.com