Ethical Considerations for AI in Clinical Decision-Making

Speakers at the European Respiratory Society Congress 2024 highlighted the potential of artificial intelligence (AI) in transforming respiratory health care, while also raising important ethical concerns related to autonomy, equity, transparency, and sustainability.

The European Respiratory Society (ERS) Congress 2024 kicked off its meeting Saturday with a highly anticipated session on the ethical implications of artificial intelligence (AI) in clinical decision-making, reflecting the event’s theme, "Humans and Machines: Getting the Balance Right."1

AI is already transforming health care through applications such as diagnostic imaging, predictive analytics, and personalized treatment plans, and it is used in several FDA-approved medical devices to help tailor therapy. However, its rapid uptake also brings ethical challenges that require careful consideration.

So, what should health care providers know about integrating AI into their practice?

4 (or 5) Key Principles to Consider

A central topic of discussion was AI’s impact on key ethical principles in health care, starting with autonomy. According to panelist Joshua Hatherley, PhD, postdoctoral fellow at Aarhus University in Denmark, AI could strengthen the informed consent process by giving patients better access to personalized information on which to base their decisions. AI’s ability to predict the preferences of patients who cannot express their wishes, such as those with cognitive impairments, also holds promise. However, concerns arise when the technology’s embedded ethical values do not align with the patient’s priorities for clinical decision-making. For instance, an AI system could prioritize specific medical treatments and measurable outcomes over quality of life, undermining autonomy for patients whose goals differ.

“There are some patients who care more about the quality of their life, and so by providing and prioritizing the recommendations in this way, there is potential for the recommendations of these systems to be misaligned with the values and preferences of the patient service,” Hatherley said.

In terms of beneficence—the moral obligation to do good for the patient—AI is poised to improve health outcomes and optimize health care resources. Predictive analytics, clinical decision support, and resource management systems can streamline care delivery. Yet AI faces serious limitations in generalizability. Because models are trained on specific data sets, their performance may degrade over time or when applied to different populations. Hatherley attributed this decline to distribution shift, which occurs when a machine learning tool encounters new data whose patterns differ from those it was trained on, causing its predictions to become less accurate over time.
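
To make the idea concrete, here is a minimal, hypothetical sketch (not from the session) of how distribution shift can erode a model's accuracy: a classifier is trained on data from one period, and its performance drops once the relationship between patient features and outcomes drifts. All data and weights are simulated for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_cohort(n, weights):
    """Simulate a patient cohort; `weights` encode how features drive the outcome."""
    X = rng.normal(size=(n, 3))
    y = (X @ weights + rng.normal(0, 0.5, size=n) > 0).astype(int)
    return X, y

# Training-era data: the outcome follows one pattern of feature weights.
X_train, y_train = make_cohort(5000, np.array([1.5, -1.0, 0.2]))
model = LogisticRegression().fit(X_train, y_train)

# Later data: the underlying pattern has drifted (distribution shift).
X_new, y_new = make_cohort(5000, np.array([0.2, -1.0, 1.5]))

print(f"accuracy on training-era data: {model.score(X_train, y_train):.2f}")
print(f"accuracy after drift:          {model.score(X_new, y_new):.2f}")
```

The same model that performs well on its original cohort loses accuracy on the drifted one, which is one reason deployed clinical models require ongoing monitoring and revalidation.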

Reproducibility also remains a challenge, and regulatory frameworks for evaluating AI’s clinical utility are still evolving. Hatherley noted that some US regulatory agencies have suggested allowing these systems to learn continuously over time so they become more personalized to each patient. However, he reiterated the risks of continuous learning when it is not done mindfully, especially when trying to replicate personalized AI systems across different clinical sites.

“I think in order to ensure that these systems actually contribute to personalized medicine, they need to be generalizable,” Hatherley said.

The principle of nonmaleficence, or "do no harm," was also a critical point of discussion. According to Hatherley, AI could reduce medical errors and alleviate practitioner burnout by automating routine tasks, such as diagnosis or administrative work. However, the risk of automation bias—where clinicians rely too heavily on AI recommendations—and algorithmic conversion—which simplifies complex clinical decisions into AI-generated outputs—raises concerns that AI could cause harm if used inappropriately. AI should be treated as a tool that clinicians work alongside, not one whose outputs replace their judgment without question.

Equity in health care was another significant topic, aligning with the principle of justice. AI can help expand access for underserved populations, improving care in rural and low-resource regions, and could therefore play a pivotal role in addressing disparities. However, algorithmic bias remains a significant threat: AI systems trained on incomplete or biased data sets can reinforce existing inequalities, particularly for minority and underserved populations. Hatherley also raised concerns that AI could inflate health care costs, making care less accessible for those who need it most.

There is also a growing ethical focus on a fifth principle: explainability, or the ability to understand how AI systems make decisions. While AI offers incredible computational power, many machine learning models, particularly deep learning systems, operate as "black boxes," making it difficult for clinicians to interpret their reasoning. Explainable AI methods are being developed, but they are far from perfect. The opacity of these systems poses challenges, particularly when clinicians must justify their decisions to patients or in legal contexts. Despite these concerns, the debate continues on whether a lack of transparency truly impedes clinical outcomes.
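
As a rough illustration of what one post-hoc explainability method looks like in practice, the sketch below uses permutation importance, which probes a black-box model by shuffling each feature and measuring how much held-out accuracy drops. The clinical feature names and the data are simulated and hypothetical; the session did not endorse this or any specific method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "fev1", "smoking_years", "bmi"]  # hypothetical features

X = rng.normal(size=(2000, 4))
# Simulated outcome driven mainly by the first two features.
y = (1.2 * X[:, 0] - 0.9 * X[:, 1] + rng.normal(0, 0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Rank features by how much shuffling each one hurts held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>14}: {score:.3f}")
```

Explanations like these approximate, rather than reveal, a model's internal reasoning, which is part of why the debate over transparency continues.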

AI Potential and Challenges

Doctor holding tablet with AI graphics | Image credit: LALAKA – stock.adobe.com

Another key theme was the gap between AI's potential and the practical challenges of its integration into health care. According to panelist Joseph Alderman, a clinical researcher and PhD student specializing in anesthesia and intensive care medicine at University Hospitals Birmingham NHS Foundation Trust, AI has enabled innovations in respiratory medicine such as thoracic imaging for nodule detection and AI-assisted home spirometry, but there are concerns about its reliability.

For instance, Alderman cited a study showing that a widely implemented sepsis prediction model flagged nearly 20% of patients yet missed 67% of actual sepsis cases, highlighting the need for improved accuracy.2 Similarly, other models demonstrated biases, performing worse at predicting acute kidney injury in women than in men and underdiagnosing racial minority groups, further limiting health care access for already-underserved populations.3,4 Such disparities underscore the need for more representative and comprehensive data in training AI systems. According to Alderman, “none of this should be a surprise.”
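
A back-of-the-envelope calculation shows why those two figures are so concerning in combination. The cohort size and sepsis prevalence below are hypothetical round numbers, chosen only to illustrate the arithmetic implied by "flagged nearly 20%" and "missed 67%":

```python
total_patients = 10_000
sepsis_cases = 700                      # hypothetical ~7% prevalence
flagged = 2_000                         # ~20% of patients flagged
missed = round(sepsis_cases * 0.67)     # cases the model failed to flag
caught = sepsis_cases - missed          # true positives

sensitivity = caught / sepsis_cases     # share of cases the model detects
false_alarm_rate = (flagged - caught) / flagged

print(f"sensitivity: {sensitivity:.0%}")                       # ~33%
print(f"flags that are false alarms: {false_alarm_rate:.0%}")  # ~88%
```

Under these illustrative assumptions, roughly 9 in 10 alerts would be false alarms even as 2 in 3 true cases go undetected, a combination that invites both alarm fatigue and missed diagnoses.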

“Algorithms are like mirrors,” he proposed. “We provide them data, that data comes from the health care systems and societies, and the algorithms basically reuse those data and generate models which reflect the real world.”

The environmental cost of AI also emerged as an important consideration. Data centers, AI, and cryptocurrencies accounted for nearly 2% of global electricity demand in 2022, and this figure is projected to double by 2026, rivaling the total electricity consumption of a nation the size of Japan.5 According to Alderman, the sustainability of AI—both in terms of energy use and health care affordability—is now a pressing concern that cannot be overlooked.

Moving forward, safe and effective AI models must be developed through rigorous clinical trials and with robust regulatory oversight. Ensuring that AI is equitable requires guidelines on who is represented in AI training data, how decisions are made, and how systems are evaluated. Additionally, AI needs to be cost-effective and sustainable, aligning its development with the real-world needs of clinicians and patients.

Ultimately, AI’s role in clinical decision-making will be determined by how well the health care community can balance technological innovation with ethical responsibility. While AI holds the promise of revolutionizing health care, its integration must be carefully managed to avoid reinforcing existing disparities or introducing new ethical dilemmas.

References

  1. Prosch H, Hui I, De Wever W, Hatherley J, Alderman J. Getting the balance right: the ethics of artificial intelligence in clinical decision-making. ERS Congress 2024 webinar. Presented September 7, 2024. https://live.ersnet.org/programme/session/93159
  2. Wong A, Otles E, Donnelly JP, et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med. 2021;181(8):1065-1070. doi:10.1001/jamainternmed.2021.2626
  3. Cao J, Zhang X, Shahinian V, et al. Generalizability of an acute kidney injury prediction model across health systems. Nat Mach Intell. 2022;4(12):1121-1129. doi:10.1038/s42256-022-00563-8
  4. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342
  5. Electricity 2024 analysis and forecast to 2026. IEA. January 24, 2024. Accessed September 7, 2024. https://www.iea.org/reports/electricity-2024