by Michael Schmidt*
If there’s one thing we know for certain when looking at the year ahead, it’s that the organizations prepared to take on uncertainty – from market conditions to geopolitical unrest and everything in between – will be the ones best positioned to serve their customers, employees, and shareholders.
The artificial intelligence (AI) field has seen incredible growth in the last five years because it provides new capabilities to mitigate uncertainty, leveraging data to respond to changing environments as quickly as new data comes in.
The technology and its benefits are no longer a great unknown to the majority; instead, many have seen firsthand AI’s ability to work quickly and efficiently on many of society’s most pressing challenges. We’ve seen it play a role in the record speed at which COVID-19 vaccines were delivered, help hospitals identify and treat their most at-risk patients, and, more broadly, vastly reduce the number of human errors in data.
As we look to the year ahead, we think heightened societal awareness of AI, increased regulatory pressure, growing momentum in AI investment, and AI’s continued boost to employee productivity may come to a head. Practical and applied AI concerns will become paramount to sustaining value from AI’s growth.
1. Heightened awareness and ethical concerns
Algorithmic bias has been a growing subject of discussion and debate in the use of AI. It is a difficult topic to navigate, both because of the potential complexity of mathematically identifying, analyzing, and mitigating bias in the data, and because of the social implications of determining what it means to be “fair” in decision-making.
Fairness is situationally dependent, in addition to being a reflection of values, ethics, and legal regulations. That said, there are clear ways to approach questions of AI fairness by using data and models with guardrails in place, as well as suggested steps organizations can take to mitigate issues of uncovered bias.
The largest source of bias in an AI system is the data it was trained on. That data might have historical patterns of bias encoded in its outcomes. Ultimately, machine learning gains knowledge from data, but that data comes from us – our decisions and systems.
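As a minimal illustration of what “mathematically identifying” bias can look like in practice (a sketch, not DataRobot’s method; all data and names here are hypothetical), one common starting point is a demographic-parity check, which compares the rate of positive model outcomes across groups:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly equal rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring-model outputs: 1 = recommended for interview.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 vs 0.25 -> gap of 0.5
```

A large gap does not by itself prove unfairness – fairness is situational, as noted above – but it flags where historical patterns in the training data deserve scrutiny.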
Because of the expanding use of the technology and society’s heightened awareness of AI, you can expect to see organizations auditing their systems and local governments working to ensure AI bias does not negatively impact their residents. In New York City, for example, a new law going into effect in 2023 will penalize organizations whose hiring tools exhibit AI bias.
2. Increased regulatory pressure
In the year ahead, I expect companies to face increased regulatory pressure around their AI models. Regulatory changes are likely to include requirements for both explanations of individual predictions and detailed records tracking the history and lineage of how models were trained.
Increased AI regulation will ultimately be welcomed by the industry, as evidenced by 81% of tech leaders saying they would like to see increased government regulation in a recent DataRobot survey. However, the recent Blueprint for an AI Bill of Rights, which provides a set of five principles and associated practices to protect the rights of the American public in the age of AI, has spurred companies into action. More companies now recognize that voluntary guidelines may become binding regulations in regulated industries, and that reactively achieving compliance on a short timeline can be costly.
Because of this, I predict most companies will need to invest in systems with model governance in place. By investing in systems that have the appropriate guardrails, companies can continue to focus on technological innovation with the peace of mind that their systems comply with legal and regulatory obligations.
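To make the idea of lineage tracking concrete, here is a minimal sketch of what a model governance audit record might capture (the schema and field names are my own illustrative assumptions, not a standard or a DataRobot feature): a versioned entry linking a model to a fingerprint of the exact data it was trained on.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelLineageRecord:
    """One audit-trail entry describing how a model version was produced.
    All field names are illustrative, not a standard schema."""
    model_name: str
    version: str
    training_data_hash: str  # fingerprint of the exact training set
    hyperparameters: dict
    trained_at: str          # ISO-8601 timestamp

def fingerprint(rows):
    """Deterministic hash of the training data, so an auditor can later
    verify that a recorded model was trained on exactly this dataset."""
    blob = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

record = ModelLineageRecord(
    model_name="credit_risk",       # hypothetical model
    version="1.4.0",
    training_data_hash=fingerprint([{"income": 50000, "default": 0}]),
    hyperparameters={"max_depth": 6, "n_estimators": 200},
    trained_at="2023-01-15T09:30:00+00:00",
)
audit_line = json.dumps(asdict(record))  # append to an immutable audit log
```

Records like this are what would let a company demonstrate, after the fact, how a deployed prediction came to be – the kind of guardrail the regulatory requirements above point toward.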
3. Further investments in the space
In 2023, I expect to see continued momentum in AI investments, particularly among businesses most directly impacted by economic and supply chain disruptions, as well as mature industries generally able to scale AI adoption the most, such as financial services, retail, healthcare, and manufacturing. However, I also predict that, while some investments will progress, some AI technology trends will continue to be experimental.
Looking at financial services, for example, I expect that use cases will turn to AI systems that can improve the accuracy of fraud detection and speed up laborious reporting processes. With rising expectations and an onslaught of security breaches, financial services firms need to secure a competitive advantage with AI technologies that can help mitigate these detrimental issues. Additionally, AI will help improve job satisfaction and free up employees to focus on adding customer value.
Looking at technology trends, generative AI is receiving tremendous interest based on newly developed deep learning models (from OpenAI and others). However, I predict these models are still too new to be practical for most enterprises, for a few reasons. The first is that it is difficult to guarantee their behavior on critical issues like bias and fairness; despite providers’ efforts, current versions can be easy to break. This means businesses will need to place real trust in the providers of these models, since few will be able to build their own.
Adapting these models to desired use cases is also difficult for most to get right. While I expect companies to continue working with generative AI, I believe applications will remain experimental for many enterprises in the coming year until the business cases and their expected return on investment are better understood.
Overall, however, businesses that build an AI mentality across the organization by continuing to invest in the space and fully integrating AI into their operations (including assessing new developments) will be better suited to handle market uncertainty and drive long-term success.
*Chief Technology Officer, DataRobot
**first published in: Weforum.org