by Collin Kurre, David Sullivan and Minos Bantourakis*
Digital services propel economic growth and innovation worldwide, and play a critical role in empowering individuals to enjoy their human rights. But these services also generate risks to people, communities and societies from harmful online content and conduct.
With near-daily technological advances already having an impact on rights and safety, effective risk management offers a path forward for technology companies, government regulators and other stakeholders to anticipate the ways in which people could be exposed to risks and take measures to mitigate adverse impacts. In light of this shared objective, the World Economic Forum’s Global Coalition for Digital Safety convened a diverse group of experts to distill the essential elements of a cross-jurisdictional framework for understanding and addressing digital risks.
Though digital safety risk assessment is a nascent and evolving discipline, we are not starting from zero. A range of frameworks and methodologies already exist to help organizations manage complex risks that overlap with digital safety. For example, the United Nations Guiding Principles on Business and Human Rights (UNGPs) set out a due diligence process that enables companies to avoid, prevent and mitigate impacts on internationally recognized human rights. Another example is company enterprise risk management processes. These are typically focused on risks to a company’s business interests rather than broader digital safety objectives, but they can similarly serve as a useful tool or starting point.
Until recently, risk assessments related to digital safety have been voluntary efforts by companies, industry bodies and multistakeholder initiatives. That is now changing, with a growing number of enacted or proposed regulatory regimes that include provisions around risk assessments. These can be broad, such as the systemic risk assessments under the EU’s Digital Services Act, or focus on specific rights (e.g. data protection impact assessments), technologies (e.g. the EU Artificial Intelligence Act) or vulnerable groups (e.g. the UK Age-Appropriate Design Code). Requirements around dedicated online safety risk assessments also appear in the Australian Online Safety Act, Singaporean Online Safety Bill and the proposed UK Online Safety Bill, among others.
While it is the role of policy-makers and regulators to shape specific legally binding requirements for risk assessment compliance under these regimes, there remains value in global, multistakeholder approaches to explore collective challenges and potential solutions. Our coalition has benefitted from the diverse perspectives and expert insights of contributing members, learning from existing efforts and reconciling operational challenges to develop a high-level framework for understanding and assessing digital safety risks.
The resulting framework proposes a holistic approach that conceptually links risks – the potential for adverse impacts – and realized harms in a cyclical process. It can be used by organizations of different sizes and maturity levels, offering diverse services and facing all kinds of online harms. The aim is to enable companies and stakeholders to adopt a more consistent approach to digital risk assessment while encouraging them to assess and address safety risks in the round: encompassing potential harm to both users and non-users, and impacts across different human rights, including safety, access to information, freedom of expression and privacy.
We have also gathered case studies that reflect lessons learned from a wide range of existing practices, along with overviews of how existing frameworks have been designed and implemented. Examples include principles for due diligence when dealing with government restrictions on freedom of expression and privacy, and assessment tools that help start-ups incorporate Safety by Design in online products. Some of the case studies look at frameworks, like the Digital Trust & Safety Partnership’s best practices, while others apply a risk assessment approach to a specific use case, such as child safety in immersive gaming or algorithmic impact assessment for a search engine.
They show how risk assessment can be applied in practice to specific technologies, services and harms. They are intended to provide helpful examples for all stakeholders engaged in online safety, including online service providers, safety tech and risk intelligence players, content moderation and service providers, the public sector (governments, regulators, international organizations), civil society (NGOs, educators, youth) and investors (VCs, start-ups, founders).
We hope that this framework and accompanying case studies will be a meaningful contribution to global discussions around effective risk management and help raise the bar for digital safety around the world.
*Technology Policy Principal, UK Office of Communications (Ofcom); Executive Director, Digital Trust & Safety Partnership; and Head of Media, Entertainment and Sport Industry, World Economic Forum Geneva
**First published on Weforum.org