by Karen Silverman and Navrina Singh*
Our pandemic response accelerated the incorporation of powerful AI tools and technologies that support new ways of working, hiring, managing, analysing and sourcing.
The rush to deploy these technologies creates a corresponding need for new ways to govern them for quality and impact. In practice, however, this work on controls and oversight is lagging.
Delaying this governance work only delays the inevitable; it means missed strategic opportunities and exposes customers, employees and society to harms associated with unintended bias, opaque decision-making and errors.
Many reasons have been put forward for deferring a focus on oversight, but none of them really holds up. AI is in use today, and there are solid, future-proofed steps – good investments – that we can take right now to mitigate risks and unlock robust strategies.
Simply put, we can take steps today that set us on the right course even as the road ahead is still emerging.
The six most-cited reasons for not tackling AI governance
1. ‘We don’t have to’
This is not entirely true. Plenty of existing rules and legislation already apply, including in the areas of credit reporting, privacy and restrictions on discrimination in employment and commerce. Broad AI regulation, such as the European Commission’s recent proposal for a European approach to AI, and technology-specific legislative proposals are on the horizon, as are industry-use and procurement standards.
2. ‘It’s too soon’
It will always feel too soon. AI capabilities will keep evolving rapidly, so agility will be part of design and oversight forever. The technology’s risks arise precisely because of that pace and because of how it generates predictions.
Even as the technology improves and people get better at using it, these risks remain inherent and certain. Though standards are still shifting, parties are already drafting contracts to contemplate and allocate these risks, and many proposed policies and frameworks offer a good glimpse of the important issues.
According to the OECD, there are already over 300 AI policy initiatives from 60 countries, territories and the EU.
3. ‘No one else is doing it’
Actually, awareness of and attention to AI governance are becoming mainstream. Large organisations such as Microsoft, Salesforce, Mastercard and Cisco, along with the US National Security Commission on Artificial Intelligence and the US Department of Defense, have published principles and policies outlining their approaches.
Others are quickly coming online too, including Workday and other HR tech firms.
4. ‘It’s someone else’s job’
AI governance is for all of us. AI systems include the human teams that develop, instruct, use and are impacted by them. How we build and buy these tools requires us all to get smarter about their qualities and their risks, just as with any other product we bring into our work, homes and bodies.
All this holds from the most senior executive, who sets the tone and standards for quality and trustworthiness, to the consumer who demands the same.
5. ‘We can’t afford it’
It’s more likely that you can’t afford not to. AI governance is a strategic exercise that can unlock new markets, build more robust customer and employee relationships, deliver products more efficiently and expand engagement – all at scale and speed. The same is true of its risks.
AI governance is therefore also a risk-mitigation exercise, as adverse impacts show up as risks to brand, customer trust and legal liability. Most companies do not have the staying power to voluntarily pull their products off the market in order to respond to concerns (as Microsoft, Amazon and IBM did, for instance, in connection with the use of facial recognition technologies by law enforcement).
6. ‘We don’t know how’
We know more than we think we do. Everyone can bring what they know about their industries and organisations and apply it in new, specific AI-use contexts. The emergence of tools that help technical stakeholders design, develop and deploy AI systems with more accuracy and urgency is encouraging, but not enough.
AI systems need diverse stakeholders doing what they already know how to do: examining and overseeing technology systems from strategic, policy, compliance, risk, audit and other perspectives.
What we can do today
A route does exist for leadership to get started on this road, and it is straightforward, future-proofed, affordable and worth investing in now:
1. Know where AI is being used, developed or planned for implementation within your organisation.
2. Reinforce the strategic, long-term value of AI systems with a thoughtful approach to goals, limits, risks and impacts, assessed upfront, as part of design, and on an ongoing basis.
3. Establish senior-level structures for accountability and input. C-level leaders should oversee and manage AI governance and implementation, in order to set the tone, taking input from across the organisation.
4. Identify and prioritise discrete use cases so that multi-stakeholder teams are not overwhelmed by issues and can instead focus on the technology’s specific capabilities and impact, deploying an outcomes-based assessment where it matters most.
5. Design – and sufficiently resource – a multi-stakeholder, cross-functional team based on expertise, insight and impact, pulling from different skillsets, generations and cultures. Within this team, build competencies relevant to the use of AI in your organisation, selecting a fit-for-purpose AI policy framework that defines and governs the process for releasing responsible, reliable technology.
Such policies should address both the human and technology meanings of high/low consequence, transparency, attention to unintended bias, explainability, interpretability and accountability, as these concepts apply in context.
6. Finally, for each selected priority use case, apply the ABC&D tool:
a) Articulate: ask about and assess the purposes, impacts, fitness and boundaries of AI use cases and tools;
b) Behaviourally align the team’s incentives to maximise desired governance outcomes;
c) Continuously monitor performance against articulated standards;
d) Document good processes, guidance and thoughtful training.
AI governance is a strategic opportunity whose time has come. The technology itself is not perfect, nor are the humans who, for better or worse, are largely responsible for how its insights and powers are used.
The sooner we tackle governance the better: it’s not too soon, it is everyone’s job and we do know where to start.
*Global AI Council Member, CEO-Founder, The Cantellus Group, and Founder-Chief Executive Officer, Credo AI
**First published on www.weforum.org