by Brian Green and Daniel Lim*
Despite their best intentions, businesses often fail to follow through on their desire to act ethically.
We call this the intention-action gap. Closing this gap is key to ensuring technology is developed in a more responsible, inclusive manner.
To help close the intention-action gap, the World Economic Forum recently embarked on a project to unearth tools, processes, and lessons from organizations that have made progress in operationalizing ethics in technology. This project, studying Microsoft’s journey to operationalize its responsible innovation practices among its more than 145,000 employees, considered the evolution of the company’s culture, the tools and processes it created, and the efficacy of these efforts.
Insights from this research were collected in a report titled Responsible Use of Technology: The Microsoft Case Study, jointly authored by the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University. The top four lessons from this research are summarized below.
1. Responsible innovation begins with culture change.
When Satya Nadella became CEO of Microsoft in 2014, he brought the "Growth Mindset" approach with him. Developed by researcher Carol Dweck, this approach holds that abilities are not fixed traits but can be developed through effort and learning. With this mindset came an environment for introspection, innovation, and learning that allowed the company’s culture to more deeply consider the impacts of its technology on society.
This approach was put into practice after a technical and public-relations crisis. In 2016, Microsoft released an AI-powered chatbot on Twitter called "Tay," which malicious users manipulated into posting inappropriate and denigrating comments. While another company might have assigned blame or deflected responsibility for the incident, Microsoft issued a public apology. The organization and its employees saw an opportunity to grow from the experience and to operationalize AI ethics at many levels of the company.
This effort led to the development of Microsoft’s Responsible AI Principles, a set of foundational guideposts established in 2018. Microsoft operationalized these principles across the company by creating a governance structure based on the hub-and-spoke model that had already proven successful at integrating privacy, security, and accessibility into its products.
The hub consists of three internal teams:
- The AI, Ethics, and Effects in Engineering and Research (AETHER) Committee, which provides scientific and engineering advice and expertise on the enactment of the responsible AI principles;
- The Office of Responsible AI (ORA), which is tasked with policy governance, sensitive use, and education functions; and
- The Responsible AI Strategy in Engineering (RAISE) group, which enables engineers to implement responsible AI tools and systems.
Serving as spokes in this governance model are the Responsible AI Champs. These Champs, embedded throughout the company, are domain experts who advise, assist, and raise awareness of Microsoft’s responsible AI approach.
Microsoft also created the Responsible AI Standard, which outlines a set of steps that teams at the company must follow to support the design and development of responsible AI systems. These efforts exemplify the thoughtfully designed governance model that must be in place to support culture change and to operationalize ethics in technology.
2. Tools and techniques ease implementation.
In any change-management initiative, tools make it easier for individuals to modify their behaviour. This holds true for operationalizing ethics in technology as well.
As professions such as data science and machine learning (ML) are still in their infancy, tools that help practitioners explain a model’s performance, improve the fairness of algorithms, and identify ethical issues are critical to operationalizing responsible AI.
Microsoft’s technical tools, such as InterpretML, Fairlearn, and Error Analysis, represent significant contributions in this area. InterpretML makes it easier for practitioners to interpret and explain the behaviour of an ML model. Fairlearn helps data scientists assess and improve the fairness of machine learning models. Error Analysis helps data scientists identify cohorts with higher error rates than the benchmark and improve their models’ accuracy. Each of these tools provides users with dashboards that make it easier to visualize model performance.
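To make this concrete, here is a minimal sketch of the kind of disaggregated evaluation Fairlearn supports, using its MetricFrame class. The toy data, column names, and model below are hypothetical placeholders for illustration, not drawn from any Microsoft product.

    # A minimal fairness check with Fairlearn's MetricFrame.
    # The data, column names, and model are hypothetical placeholders.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, selection_rate

    # Toy dataset: two features, a binary label, and a sensitive
    # attribute ("group") to disaggregate the metrics by.
    df = pd.DataFrame({
        "feature_a": [0.2, 0.4, 0.6, 0.8, 0.1, 0.9, 0.3, 0.7],
        "feature_b": [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0],
        "group":     ["x", "x", "x", "x", "y", "y", "y", "y"],
        "label":     [0, 0, 1, 1, 0, 1, 0, 1],
    })
    X, y = df[["feature_a", "feature_b"]], df["label"]
    y_pred = LogisticRegression().fit(X, y).predict(X)

    # MetricFrame computes each metric overall and per group, surfacing
    # disparities that a single aggregate number would hide.
    mf = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y,
        y_pred=y_pred,
        sensitive_features=df["group"],
    )
    print(mf.overall)       # metrics over the whole dataset
    print(mf.by_group)      # the same metrics broken out by group
    print(mf.difference())  # largest between-group gap for each metric

The value of the per-group view is that a model with strong overall accuracy can still underperform for a particular subgroup; surfacing that gap is the starting point for the fairness work described above.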
However, our research found that tools that help teams reason about the ethical dimensions of their work are also key to changing behaviours. At Microsoft, these take the form of Impact Assessments, the Envision AI Workshop, the Judgment Call game, and Community Jury, which help teams consider their products’ potential consequences through checklists, workshops, role-playing exercises, and stakeholder engagement, respectively. With these tools in place, product teams can build greater empathy for stakeholders who are affected by their technology but are often underrepresented in the technology industry.
3. Alignment and measuring impact create accountability.
Many companies evaluate employee performance in part by how well staff uphold the company’s values. A company looking to operationalize ethics in technology must extend this practice, evaluating and measuring employee performance against the firm’s ethical technology principles.
Aligning Microsoft’s thousands of employees on its responsible AI principles began with the Cognition team, a group that works on products enabled by augmented reality/virtual reality, computer vision, and artificial intelligence technologies.
Cognition team members met with their managers during their twice-yearly goal-setting and performance-evaluation exercise to create a shared goal: implementing responsible AI in a way personalized to each member’s role in the organization. This system, now being rolled out to other teams, enables Microsoft to create alignment and accountability around its responsible AI efforts while measuring each employee’s impact.
4. Responsible products are better products.
Responsible innovation yields ‘better’ products: products of higher quality that are more inclusive and better poised to help the world.
To create these more responsible products, Microsoft’s AI principles set top-level expectations, while the company’s supporting standard and toolkit chart the path to implementation and to the development of the products themselves. The toolkit also empowers engineers to get into the details of responsible product development through ML tools, workshops, and conversation prompts.
The impacts of these efforts are already visible in the features of some Microsoft products, such as spatial analysis, speech consent, and Custom Neural Voice, which emphasise privacy, meaningful consent, and transparency, respectively. Without these value commitments embedded into concrete design processes – crossing the intention-action gap – these products might have been designed in a less responsible way.
These products are ultimately rooted in practical ethical thinking and action. In our research, when we compared Microsoft’s implementation of responsible AI to the Best Ethical Practices in Technology published in 2018 by the Markkula Center for Applied Ethics, we discovered that Microsoft has implemented or has begun to implement most of these practices. For example, Microsoft’s AI Principles directly connect to six of the collected best practices.
Looking ahead
These lessons show how one company operationalized responsible innovation. Its principles, governance model, and tools can be adapted and applied to other technologies and organizations.
To be sure, change won’t happen overnight. A sea change is needed in corporate culture towards responsible innovation and ethical action. But we need this sea change not only in corporate culture, but in all of culture and society, as we learn how to use technologies for their best purposes. Recognizing that these problems exist is the first step to exploring how they can be solved.
As more sectors of society digitally transform, more organizations will need to undertake their own journeys to operationalize technology ethics. If corporations do not make these efforts to improve their cultures, processes, and products, then we will not only fail to achieve the promise of technology to make a brighter future for all, but will also suffer effects from technology that damage society and harm individuals.
The efforts described above helped Microsoft actualize its intentions and close the intention-action gap. Microsoft is now sharing these lessons in the hope that others will find them useful and will share lessons from their own experiences in turn. If more companies are willing to do so, it will not only allow for mutual benefit, but also help raise expectations of ethical thinking across more sectors, hopefully leading to greater benefits across the globe.
*Brian Green is Director, Technology Ethics, at Santa Clara University; Daniel Lim works on Artificial Intelligence and Machine Learning at the World Economic Forum.
**First published at www.weforum.org