The American Journal of Managed Care

Special Issue: Health IT
Volume 30, Issue SP 6, Pages SP425-SP427

Understanding the Complexities of Equity Within the Emergence and Utilization of AI in Academic Medical Centers

This editorial discusses positions for academic medical centers to consider when designing and implementing artificial intelligence (AI) tools.

Am J Manag Care. 2024;30(Spec Issue No. 6):SP425-SP427. https://doi.org/10.37765/ajmc.2024.89547

_____

Takeaway Points

  • Artificial intelligence (AI) developers and researchers must regularly evaluate data collection practices and the quality of data collected to ensure accurate representation of underrepresented populations.
  • AI developers and researchers must create inclusive strategies to improve the accuracy and implementation of current and future AI technology.
  • AI developers and researchers must prioritize equitable dissemination so that AI is truly patient centered and inclusive.

_____

Artificial intelligence (AI) is a complex discipline that uses computers and technology to mimic intelligent human behavior and critical thinking.1 The term was coined in 1956 by John McCarthy, PhD, who described it as the science and development of making intelligent machines.2 Initially conceptualized as a series of simple “if-then” rules, AI has evolved over many decades to include more intricate algorithms that behave similarly to the human brain2 and has been widely applied across numerous fields, playing a significant role in technological advances in robotics, finance, education, and health care.3

In health care, AI has played an important role in disease diagnosis, treatment planning, patient care, and clinical research.4 The aim of this editorial is to offer positions for academic medical centers to consider when designing and implementing AI tools and algorithms for widespread adoption and impact. As we examine the implications of AI integration in an academic medical center setting, we encourage clinicians, developers, and stakeholders to reflect on the following ethical and equitable implications of such integration.

Currently, AI can be approached in 4 ways: machine learning (ML), cognitive computing (CC), deep learning (DL),5 and natural language processing (NLP).2 ML is the application of specific traits to identify patterns that can be used to evaluate an individual situation.2 ML evolved into what is now DL, which uses algorithms to develop an artificial neural network that can learn and make its own decisions.2 CC is the practice of a computer gaining information and comprehension from a series of images or videos.2 NLP allows computers to obtain data from human language and use those data to make decisions.2

The ethical considerations for academic medical centers surrounding the integration of AI are vast. This commentary will highlight the following considerations: explainability, responsible AI, intended utilization, clinical decision-making, representation, and health care setting integration. Understanding and addressing these considerations is paramount to taking advantage of AI’s full potential in health care.6 A 2019 study identified emerging themes of fairness, justice, privacy, transparency, and responsibility, and a 2020 study identified accountability, transparency, privacy, and the fostering of human values as being of the utmost importance when developing or implementing AI.7 Due to the sensitive nature of health information, it is imperative that informed consent be obtained before an individual’s health information is incorporated into an algorithm. Along with consent, patients should be given a clear explanation of how AI is used in their individual diagnosis or treatment. When a clinician cannot adequately explain how a patient’s medical information has been used in an algorithm that either aids or completely guides clinical decision-making, the result can be patient mistrust in the clinician and the health care system.8 The success of integration, as well as the reputation of the academic medical center and its health care professionals, depends on AI meeting these explainability needs.8

Promoting health equity is a necessary objective of any algorithm used in health care.9 The assurance of using responsible and unbiased AI is a tremendous consideration that has multiple moving parts. AI algorithms should be developed to reduce health care disparities and promote fairness.9 Responsible AI “is about human responsibility for the development of intelligent systems along with fundamental human principles and values, to ensure human flourishing and well-being in a sustainable world,” according to Dignum.10,11 This challenge has widespread health care implications and heavily depends on the completeness of health care data used in the AI algorithm. Studies by Cutillo and colleagues12 as well as Laato and colleagues13 call for use of transparent AI models, highlighting the belief that those models are critical to ensuring fairness and prevention of bias.14

Extending from the previous ethical considerations is the intended utilization of the AI being integrated within the health care setting. The ability to mimic human behavior and reasoning is at the heart of AI’s benefit, but it must be understood where the foundation for such reasoning comes from: large quantities of existing data that have been used to identify patterns.2,15 The reasoning and patterns that emerge from the data can be used in disease diagnosis, medical imaging diagnosis, drug detection and analysis, smart health records, remote patient monitoring, epidemic prediction, and clinical trials and other research.15 The regulatory challenges that academic medical centers face lie in AI’s ability to, often unexplainably, auto-update, improve, and evolve its decision-making processes over time.8 Patterns that emerge from training data may be appropriate for one algorithm yet still worsen health outcomes when those data do not generalize. This becomes particularly problematic when the algorithm cannot distinguish between the data used to train it and real-world differences among patients based on characteristics outside the medical condition.16,17

The impact that AI has on the clinical decision-making process is another important consideration for academic medical centers. AI has the potential to completely transform clinical decision-making with the ability to process large amounts of health data, including biomarkers, genomic data, and phenotypic data collected across a health care system.16 Technology proponents have argued that poor clinical decision-making results from the limits of human cognition, but the advent of AI-assisted and AI-driven clinical decision-making could dramatically improve diagnostic accuracy by identifying the optimal decision while minimizing uncertainty.18 With the emergence and encouragement of evidence-based and personalized medicine, AI can allow the clinician to create completely unique treatment plans and other therapies for patients, which may also positively impact patient quality of care.19 With AI technology having access to large amounts of patient data, the ethical concern for academic medical centers is inappropriate application of the data used to make clinical support decisions. Negative consequences resulting from inappropriate use of the data include a decline in the quality of care delivered, an increase in patient safety concerns, and other ethical issues such as unintended harm from flawed biases in the developed technology.17,19

Addressing the lack of diverse representation among the developers and researchers who are engaged in the development and implementation of AI technology presents a unique dilemma. Ensuring the diversity of AI development teams is one important safeguard against bias.20 Not only does there need to be diversity in the development of the technology, but there also needs to be diversity in the data used to train the AI. Constantly feeding in homogeneous patient data, with the same demographics and characteristics, will severely limit the generalizability of the algorithm’s results, often yielding biased AI solutions.21 Development and research team diversity can address these issues of nongeneralizability and bias by averaging out developer subgroup prediction errors as well as expanding the questions or phenomena that AI is supposed to address.20 AI advancement alone cannot drive health care toward a state of equity; it must be coupled with the inclusion of underrepresented groups both in the development of the technology and in the types of data that train it.21

The last consideration for academic medical centers is understanding how the introduction of AI into health care settings can exacerbate health care disparities. Academic medical centers must anticipate and address the potential concerns about health care disparities that the introduction of AI-based health care solutions can present.22 As academic medical centers rely more heavily on AI-driven clinical decision support tools, it is imperative that the solutions supported by these tools do not worsen health care disparities. Just as with other digital technologies that have transformed health care and clinical research, academic medical centers need to ensure not only an improvement in access to health care for underrepresented populations but also an improvement in access to the new and novel technologies that are fueling health care delivery.23 With many academic medical centers focusing on social determinants of health, the digital divide should also be included as a contributing factor to unmet health care needs. In 2020 remarks on technological change, UN Secretary-General António Guterres noted that the digital divide “is threatening to become the new face of inequality, reinforcing the social and economic disadvantages suffered by women and girls, people with disabilities, and minorities of all kinds.”24 The COVID-19 pandemic placed a spotlight on the digital divide, as underrepresented populations showed considerably lower utilization of telemedicine, resulting in limited access to health care.25

In an effort to use the lessons learned from the COVID-19 pandemic, clinicians, AI developers, and researchers must (1) regularly assess and immediately address data collection practices and the quality of data collected to ensure accurate representation of underrepresented populations; (2) create inclusive strategies for implementing AI-driven solutions and improving the accuracy of current and future AI technology; and (3) prioritize equitable dissemination of this technology to ensure that the AI is truly patient centered and inclusive.

Acknowledgments

The authors would like to thank the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery and the Office for Health Equity and Inclusion for their support in the development of this commentary. Dr Bonner takes full responsibility for the work as a whole.

Author Affiliations: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic (TJB, PA, RCB), Rochester, MN; Department of Anesthesiology and Perioperative Medicine, Mayo Clinic (AJM), Phoenix, AZ.

Source of Funding: None.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (TJB, PA, AJM, RCB); drafting of the manuscript (TJB, PA, AJM, RCB); and critical revision of the manuscript for important intellectual content (TJB, PA, AJM, RCB).

Address Correspondence to: Timethia J. Bonner, DPM, PhD, Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, 200 First St SW, Rochester, MN 55905. Email: bonner.timethia@mayo.edu.

REFERENCES

1. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care. 2019;8(7):2328-2331. doi:10.4103/jfmpc.jfmpc_440_19

2. Kaul V, Enslin S, Gross SA. History of artificial intelligence in medicine. Gastrointest Endosc. 2020;92(4):807-812. doi:10.1016/j.gie.2020.06.040

3. Al-Medfa MK, Al-Ansari AMS, Darwish AH, Qreeballa TA, Jahrami H. Physicians’ attitudes and knowledge toward artificial intelligence in medicine: benefits and drawbacks. Heliyon. 2023;9(4):e14744. doi:10.1016/j.heliyon.2023.e14744

4. Elasan S, Ateş Y. Artificial intelligence (AI) and ethics in medicine at a global level: benefits and risks. In: Karaman E, Önder GÖ, eds. Current Researches in Health Sciences-II. Özgür Publications; 2023:51.

5. Le Nguyen T, Do TT. Artificial intelligence in healthcare: a new technology benefit for both patients and doctors. In: 2019 Portland International Conference on Management of Engineering and Technology (PICMET). IEEE; 2019:1-15. doi:10.23919/PICMET.2019.8893884

6. Elendu C, Amaechi DC, Elendu TC, et al. Ethical implications of AI and robotics in healthcare: a review. Medicine (Baltimore). 2023;102(50):e36671. doi:10.1097/MD.0000000000036671

7. Roche C, Wall PJ, Lewis D. Ethics and diversity in artificial intelligence policies, strategies and initiatives. AI Ethics. 2023;3(4):1095-1115. doi:10.1007/s43681-022-00218-9

8. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc. 2020;27(3):491-497. doi:10.1093/jamia/ocz192

9. Chin MH, Afsar-Manesh N, Bierman AS, et al. Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Netw Open. 2023;6(12):e2345050. doi:10.1001/jamanetworkopen.2023.45050

10. Dignum V. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer; 2019.

11. Gupta S, Kamboj S, Bag S. Role of risks in the development of responsible artificial intelligence in the digital healthcare domain. Inf Syst Front. 2023;25:2257-2275. doi:10.1007/s10796-021-10174-0

12. Cutillo CM, Sharma KR, Foschini L, Kundu S, Mackintosh M, Mandl KD; MI in Healthcare Workshop Working Group. Machine intelligence in healthcare-perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digit Med. 2020;3:47. doi:10.1038/s41746-020-0254-2

13. Laato S, Tiainen M, Najmul Islam AKM, Mäntymäki M. How to explain AI systems to end users: a systematic literature review and research agenda. Internet Res. 2022;32(7):1-31.

14. Upadhyay U, Gradisek A, Iqbal U, Dhar E, Li YC, Syed-Abdul S. Call for the responsible artificial intelligence in the healthcare. BMJ Health Care Inform. 2023;30(1):e100920. doi:10.1136/bmjhci-2023-100920

15. Siddique S, Chow JCL. Machine learning in healthcare communication. Encyclopedia (Basel). 2021;1(1):220-239. doi:10.3390/encyclopedia1010021

16. Magrabi F, Ammenwerth E, McNair JB, et al. Artificial intelligence in clinical decision support: challenges for evaluating AI and practical implications. Yearb Med Inform. 2019;28(1):128-134. doi:10.1055/s-0039-1677903

17. O’Connor MI. Equity360: gender, race, and ethnicity-the power of AI to improve or worsen health disparities. Clin Orthop Relat Res. 2024;482(4):591-594. doi:10.1097/CORR.0000000000002986

18. Harish V, Morgado F, Stern AD, Das S. Artificial intelligence and clinical decision making: the new nature of medical uncertainty. Acad Med. 2021;96(1):31-36. doi:10.1097/ACM.0000000000003707

19. Bajgain B, Lorenzetti D, Lee J, Sauro K. Determinants of implementing artificial intelligence-based clinical decision support tools in healthcare: a scoping review protocol. BMJ Open. 2023;13(2):e068373. doi:10.1136/bmjopen-2022-068373

20. de Hond AAH, van Buchem MM, Hernandez-Boussard T. Picture a data scientist: a call to action for increasing diversity, equity, and inclusion in the age of AI. J Am Med Inform Assoc. 2022;29(12):2178-2181. doi:10.1093/jamia/ocac156

21. Celi LA, Cellini J, Charpignon ML, et al; MIT Critical Data. Sources of bias in artificial intelligence that perpetuate healthcare disparities-a global review. PLOS Digit Health. 2022;1(3):e0000022. doi:10.1371/journal.pdig.0000022

22. Dankwa-Mullan I, Scheufele EL, Matheny ME, et al. A proposed framework on integrating health equity and racial justice into the artificial intelligence development lifecycle. J Health Care Poor Underserved. 2021;32(2):300-317. doi:10.1353/hpu.2021.0065

23. Douthit BJ, McCoy AB, Nelson SD. The impact of clinical decision support on health disparities and the digital divide. Yearb Med Inform. 2023;32(1):169-178. doi:10.1055/s-0043-1768722

24. Digital divide ‘a matter of life and death’ amid COVID-19 crisis, secretary‑general warns virtual meeting, stressing universal connectivity key for health, development. News release. United Nations. June 11, 2020. Accessed April 1, 2024. https://press.un.org/en/2020/sgsm20118.doc.htm

25. Adedinsewo D, Eberly L, Sokumbi O, Rodriguez JA, Patten CA, Brewer LC. Health disparities, clinical trials, and the digital divide thematic reviews on forward thinking on clinical trials in clinical practice. Mayo Clin Proc. 2023;98(12):1875-1887. doi:10.1016/j.mayocp.2023.05.003
