by N. Craig Smith and Leena Lankoski*
When a virus brought the world to its knees, the digital economy got a shot of adrenaline. Remote working, e-commerce and distance learning boomed. Organisations scrambled to acquire the technologies that enable virtual meetings and collaboration among employees, clients and other stakeholders. Governments and scientists tapped into artificial intelligence and shared data to come up with responses and solutions.
In short, Covid-19 deepened our collective dependency on digital technologies. We are now deep in the throes of the Fourth Industrial Revolution – what worked or held true in the analogue economy may no longer apply. In fact, the basic foundations and assumptions of certain fields may require re-examination, and one of them is corporate responsibility (CR).
While scholars have delved into specific ethical issues in the digital economy, research so far has not addressed the question of how corporate responsibility as a field, including its basic foundations and assumptions, may be affected by digitalisation. We have undertaken a systematic examination of CR and the digital economy from this big-picture view.
Our analysis, detailed in the paper Corporate Responsibility Meets the Digital Economy, shines a light on how the digitalising economy shapes corporate responsibility and suggests shifts in managerial thinking.
Creepy targeted ads and other phenomena
We highlight five aspects of the digital economy that are particularly relevant to corporate responsibility: digital marketing, algorithmic management, autonomous processes in products and services, the sharing economy, and enhanced transparency and stakeholder governance.
Digital marketing
By now, most of us are accustomed to ads popping up on our computer or mobile phone screens for products or services related to our latest Google search. This is the realm of digital marketing, in which firms use big data about consumers’ online activities, location and even mood (gauged through facial coding) to promote their products, taking invasion of privacy to a whole new level.
Thanks to algorithms, digital marketing can target individuals through tweaking prices as well as the timing, content and form of pitches. Clearly, big data creates an information asymmetry – unthinkable in the old economy – that enables firms to potentially entice consumers into buying products and services they don’t need or want, often at higher prices.
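To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of such targeting. The consumer attributes, thresholds and price adjustments are hypothetical and not drawn from any real platform; the point is simply how individual-level data can drive the price, timing and content of a pitch.

```python
# Illustrative sketch only: a toy rule-based targeting engine of the kind
# described above. All field names, thresholds and price adjustments are
# hypothetical, not taken from any real platform.
from dataclasses import dataclass

@dataclass
class ConsumerProfile:
    recent_searches: list[str]   # e.g. terms from browsing history
    inferred_mood: str           # e.g. "stressed", "happy" (via facial coding)
    price_sensitivity: float     # 0.0 (insensitive) to 1.0 (very sensitive)
    local_hour: int              # hour of day at the consumer's location

def personalise_offer(profile: ConsumerProfile, base_price: float,
                      product_keyword: str) -> dict:
    """Return a tailored pitch: whether to show it, at what price, and when."""
    relevant = any(product_keyword in s for s in profile.recent_searches)
    # Price is nudged up for less price-sensitive consumers -- the information
    # asymmetry referred to above.
    price = base_price * (1.2 - 0.3 * profile.price_sensitivity)
    # Timing is tweaked to the individual's inferred state.
    send_hour = 21 if profile.inferred_mood == "stressed" else profile.local_hour
    return {"show_ad": relevant, "price": round(price, 2), "send_hour": send_hour}

profile = ConsumerProfile(["running shoes deals"], "stressed", 0.2, 14)
print(personalise_offer(profile, base_price=100.0, product_keyword="running shoes"))
```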
On the flip side, digital marketing could foster a better understanding of consumer needs and lower prices for some consumers. If companies proactively inform customers about the data they collect and seek consent for these practices, they could at least alleviate some misgivings about privacy and autonomy.
Algorithmic management
Within and among firms, algorithms and data have also brought about sweeping changes. Employee behaviour and performance, as well as supply chains, can be tracked and monitored, enabling firms to coordinate work tasks and optimise organisational structures. Takeaway delivery company Deliveroo, for example, systematically monitors its couriers (time taken to accept orders, travel time to the restaurant and to the customer, time spent at the customer’s location, late orders, unassigned orders, etc.) and compares their performance indicators against a benchmark. This has been described as “Taylorism on steroids”.
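As a purely illustrative sketch of how such monitoring can work (not Deliveroo’s actual system; the metric names, numbers and tolerance are hypothetical), per-courier metrics can be compared against a fleet-wide benchmark and deviations flagged automatically:

```python
# Hypothetical sketch of algorithmic performance monitoring: courier metrics
# are compared against a fleet-average benchmark and shortfalls are flagged.
from statistics import mean

# Hypothetical per-courier metrics (minutes or counts) collected by a platform.
couriers = {
    "courier_a": {"accept_time": 0.4, "travel_time": 18.0, "late_orders": 1},
    "courier_b": {"accept_time": 1.6, "travel_time": 26.0, "late_orders": 4},
}

# Benchmark = fleet average for each tracked metric.
benchmark = {
    metric: mean(stats[metric] for stats in couriers.values())
    for metric in ["accept_time", "travel_time", "late_orders"]
}

def flag_underperformance(stats: dict, benchmark: dict, tolerance: float = 1.1) -> list:
    """Flag any metric more than 10% worse than the fleet benchmark."""
    return [m for m, value in stats.items() if value > benchmark[m] * tolerance]

for name, stats in couriers.items():
    print(name, flag_underperformance(stats, benchmark) or "within benchmark")
```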
Algorithmic management, if it can avoid bias, may ensure that employees are appraised fairly and without favour. But algorithms may not be able to assuage ethical concerns over freedom, privacy and respect. Deliveroo’s recent IPO flopped in part because of investors’ unease about the working conditions of the company’s couriers, as well as potential regulatory changes that could affect how gig economy companies treat their workers. For managers, the challenge lies in balancing efficiency and control with employee rights and morale.
AI and autonomous processes
While humans are constrained, to a greater or lesser extent, by ethical and social considerations, machines are not similarly bound unless they are programmed or taught to incorporate human ethical reasoning. Even that has its limits, as machines do not reason and then choose how to behave. They do not have intentionality of their own making. In short, they lack the essential requirements of moral agency.
Another concern about AI relates to the possibility that machines might be too similar to humans: if algorithms learn by imitating human behaviour, they will perpetuate the same biases as well as the same unethical habits and behaviours. What’s equally unsettling is that as AI develops, its internal workings and the resulting outcomes may become unfathomable even to experts. Apple’s “sexist” credit card, which reportedly assigned women lower credit limits than men, highlights why human oversight remains essential.
Much of the concern regarding AI centres on humans losing jobs to machines. As robots become increasingly sophisticated, we are left to contemplate the science-fiction scenario of AI surpassing human intelligence and taking control of itself and mankind – what is sometimes described as the technological singularity.
Sharing economy
Whether it’s getting a ride, hiring a dog-sitter or renting a holiday home, the so-called sharing economy has got you covered. Uber (transportation), Airbnb (accommodation), and Amazon Mechanical Turk (tasks) are but a few of the iconic digital platform companies connecting product or service providers with consumers and facilitating peer-to-peer transactions.
While the sharing of resources may benefit the environment and create employment, the sharing economy has created its own ethical conundrums, notably over accountability and the assignment of responsibility. Should platform providers be held accountable for the actions of individuals such as drivers and landlords, as well as of the people who use their services and offerings? Should they be held to the same regulations as traditional players, such as taxi companies and hotels? Equally contentious: should gig workers be considered employees, with the attendant benefits and protections, or self-employed independent contractors?
Still another concern is that consumer-sourced rating systems may discriminate against certain groups. While companies are prevented by law from engaging in workplace discrimination, individuals are not held to the same standards.
Transparency and stakeholder governance
Big data, including real-time data about complex supply chains, and technologies like blockchain enable firms to communicate information to stakeholders with greater transparency and traceability. On the other hand, digital media empowers stakeholders such as activist networks or the general public to organise and communicate more effectively in their efforts to influence and pressure organisations about corporate responsibility.
All this potentially contributes to stronger engagement with stakeholders by firms and stronger governance of firms by stakeholders. Case in point: Wall Street hedge funds, often reviled for ruining struggling companies by artificially pushing down their share prices, suffered heavy losses this year after their bet against ailing video games retailer GameStop was met with a coordinated fightback by users of online forum Reddit.
Impacts on the CR field
These five phenomena can affect the foundations of CR in three ways. First, they can change the answer to the question: Responsible for what? Digitalisation can make existing CR issues manifest in novel ways (e.g. consumer privacy in digital environments; job loss to robots; working conditions in a gig economy; transparency of AI internal workings).
It can also intensify existing issues, as we have seen with digital marketing and consumer autonomy, or with algorithmic management and employee treatment. But digitalisation can also help to resolve existing issues when it improves transparency, or when new solutions to grand challenges in domains such as health or energy can be found, thanks to AI and the power of big data.
Second, digitalisation can impinge on the question: Responsible to whom? It can affect stakeholder salience, that is, the priority managers accord to different stakeholders. Consider, for example, which stakeholders take precedence in the sharing economy. Beyond its obligations to consumers and landlords, what does Airbnb owe to the landlord’s neighbours, to competing hotels, and to the local government? More speculatively, digitalisation could even give rise to new stakeholders, if bots and robots were to be granted some kind of stakeholder status in the future.
Third, digitalisation has also brought new urgency to the question: Who’s responsible? With blurring boundaries and roles between market actors, and with non-human actors taking decisions with moral consequences, the answer to this question becomes less evident.
Digital transformation is disrupting business in fundamental ways. Managers need to be digitally literate and understand, more specifically, how digitalisation reshapes the CR landscape. This is important, for example, in negotiating appropriate contracts with suppliers and clients as well as in addressing their potential ethical concerns and even resentment. More broadly, digitalisation has profound implications for firms in meeting their multiple obligations to many different stakeholders. The framework that our paper offers can help managers to better navigate the world of CR in a digital economy.
*N. Craig Smith is the INSEAD Chaired Professor of Ethics and Social Responsibility. Leena Lankoski is Senior University Lecturer at Aalto University School of Business and an INSEAD Visiting Scholar.
**First published on knowledge.insead.edu