by Clara Bauer-Babef
Artificial Intelligence (AI) holds the potential to transform healthcare by enhancing various aspects of patient care, diagnostics, and administrative tasks. However, addressing ethics is crucial to ensure it benefits society without causing harm.
During an event in Brussels in January, the European Commission launched the European Cancer Imaging Initiative, one of the flagship projects of Europe's Beating Cancer Plan (EBCP). The initiative aims to unleash the potential of AI and High-Performance Computing (HPC) to combat cancer.
“Digital technologies and artificial intelligence are key in our battle against cancer,” Thierry Breton, the commissioner for the internal market, said in a press release following the launch.
The initiative works towards a digital infrastructure linking resources and databases of cancer-imaging data across the EU, including hospital networks and research repositories. The aim is to foster innovation and the deployment of digital technologies in cancer treatment and care.
However, avoiding potential harm hinges on the responsible development, deployment and use of artificial intelligence. The Commission's initiative, just one example of how AI is entering the health field, is said to adhere to high ethical standards.
But according to the World Health Organisation (WHO), European countries are not ready to address AI’s risks.
“We need to work much harder to ensure that AI is fit for purpose so patients and health workers need to be far more involved in the design and the development of AI”, Natasha Azzopardi-Muscat, director for country health policies and systems at WHO Europe, told Euractiv at the European Health Forum in Gastein (EHFG) in September.
The WHO is currently working on a policy brief on the regulation of AI in health care. “We don’t want to stop the development of AI, but we need to make sure that it is safe,” said Azzopardi-Muscat.
Ethics in AI
One of the main concerns is the discrimination AI can generate. AI can be biased if the training data used to develop AI algorithms contains imbalances, reflects healthcare disparities, or mirrors existing biases in diagnoses and treatment decisions.
For example, if data predominantly represents a certain demographic group or underreports specific health conditions in others, the AI may provide inaccurate recommendations or diagnoses, leading to disparities in healthcare outcomes.
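This mechanism can be illustrated with a deliberately simplified, hypothetical sketch (all names and numbers here are invented for illustration, not drawn from any real clinical system): a toy diagnostic rule is fitted to training data in which one patient group makes up 90% of the records, and the condition presents differently in the underrepresented group. The fitted rule ends up tuned to the majority group.

```python
import random

random.seed(0)

# Toy world: a symptom score above 5 indicates the condition in group A,
# but above 3 in group B (the condition presents differently in group B).
def make_patient(group):
    score = random.uniform(0, 10)
    threshold = 5 if group == "A" else 3
    return {"group": group, "score": score, "has_condition": score > threshold}

# Imbalanced training data: 90% group A, 10% group B.
train = [make_patient("A") for _ in range(900)] + \
        [make_patient("B") for _ in range(100)]

# A naive "model": pick the single cutoff that maximises accuracy on the
# (A-dominated) training set. It converges near group A's threshold.
best_cutoff = max(
    (c / 10 for c in range(101)),
    key=lambda c: sum((p["score"] > c) == p["has_condition"] for p in train),
)

def accuracy(patients, cutoff):
    correct = sum((p["score"] > cutoff) == p["has_condition"] for p in patients)
    return correct / len(patients)

test_a = [make_patient("A") for _ in range(1000)]
test_b = [make_patient("B") for _ in range(1000)]
print(f"learned cutoff: {best_cutoff:.1f}")
print(f"accuracy for group A: {accuracy(test_a, best_cutoff):.2f}")
print(f"accuracy for group B: {accuracy(test_b, best_cutoff):.2f}")
```

Because group B contributes only a tenth of the training records, the rule that minimises overall training error sits near group A's threshold, so patients from group B with scores between the two thresholds are systematically misdiagnosed — a crude analogue of the disparities described above.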
“In certain cases, AI has not been respecting equity and diversity. We have had, for example, algorithms that have not worked so well for women or for people from different races,” Azzopardi-Muscat pointed out.
In a study released in June 2022, the European Parliament shared the same concern.
“Lack of transparency is widely regarded as an important issue in the development and use of current AI tools in healthcare,” the study found.
On Thursday (28 September), Amnesty International pointed out that AI systems “use colossal volumes of often sensitive data, which are fed into automated systems to determine whether a particular individual is eligible for housing, social benefits, healthcare and education”.
“Yet instead of solving societal problems, many AI-based systems blatantly amplify racism and inequality, and perpetuate human rights abuses and discrimination”, Amnesty added, saying that the AI Act should go further to protect EU citizens.
Concerns about ethics and AI are nothing new. In 2021, UNESCO, the UN Educational, Scientific and Cultural Organisation, established a list of recommendations for the ethical use of artificial intelligence.
One of the recommendations was: “AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.”
Another common concern comes from the fact that some doctors fear they could be replaced by AI-driven devices. However, AI’s goal is not to replace doctors with machines but to train doctors to work with machines, Terje Peetso, chief medical innovation officer at the North Estonia Medical Centre, pointed out at the EHFG.
Azzopardi-Muscat stressed that “AI has the opportunity to really change the way we optimize health service delivery. The possibilities are infinite”.
EU AI Act
In April 2021, the European Commission presented the first EU regulatory framework for AI and two years later, the European Parliament adopted its negotiating position on the AI Act.
“The rules aim to promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects”, the statement from the Parliament said.
The AI Act classifies as high-risk those AI systems used in products falling under the EU's product safety legislation, including toys, aviation, cars, medical devices and lifts.
Parliament also expanded the list to include bans on intrusive and discriminatory uses of AI, such as biometric categorisation systems using sensitive characteristics like gender or race.
In the health sector, this new rule could help fight against the discrimination Amnesty International and the WHO pointed out.
“We are confident and optimistic that AI will continue to improve itself. It has to be part of the solution, but we need to find the right way to regulate it so that patients and health workers can trust it,” Azzopardi-Muscat concluded.
Co-rapporteur Dragos Tudorache (Renew, Romania) said on the day of the vote that “the AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology […] evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law”.
With talks between Parliament and EU member states now set to begin in the Council, an agreement on the AI Act is expected by the end of 2023.
*first published in: Euractiv.com