The American Journal of Managed Care

September 2024
Volume 30, Issue 9, Pages e258-e265

Knowledge, Attitude, and Practices Regarding ChatGPT Among Health Care Professionals

This web-based cross-sectional study indicated that health care professionals in China had poor knowledge of, positive attitudes toward, and proactive practices regarding ChatGPT.

ABSTRACT

Objective: To explore the knowledge, attitudes, and practices (KAP) regarding ChatGPT among health care professionals (HCPs).

Study Design: Cross-sectional study.

Methods: This web-based cross-sectional study included HCPs working at the First Affiliated Hospital of Anhui Medical University in China between August 2023 and September 2023. Participants unwilling to use ChatGPT were excluded. Correlations between KAP scores were evaluated by Pearson correlation analysis and structural equation modeling (SEM).

Results: A total of 543 valid questionnaires were collected; of these, 231 questionnaires (42.54%) were completed by male HCPs. Mean (SD) knowledge, attitude, and practice scores were 6.71 (3.24) (range, 0-12), 21.27 (2.73) (range, 6-30), and 47.91 (8.17) (range, 12-60), respectively, indicating poor knowledge (55.92%), positive attitudes (70.90%), and proactive practices (79.85%). The knowledge scores were positively correlated with attitude (Pearson r = 0.216; P < .001) and practice (Pearson r = 0.283; P < .001) scores, and the attitude scores were positively correlated with practice scores (Pearson r = 0.479; P < .001). SEM showed that knowledge influenced attitude positively (β = 0.498; P < .001) but negatively influenced practice part 1 (improving work efficiency and patient experience) (β = –0.301; P < .001), practice part 2 (helping advance medical research) (β = –0.436; P < .001), practice part 3 (assisting HCPs) (β = –0.338; P < .001), and practice part 4 (the possibilities) (β = –0.242; P < .001). Attitude positively influenced practice part 1 (β = 1.430; P < .001), practice part 2 (β = 1.581; P < .001), practice part 3 (β = 1.513; P < .001), and practice part 4 (β = 1.387; P < .001).

Conclusion: HCPs willing to use ChatGPT in China showed poor knowledge, positive attitudes, and proactive practices regarding ChatGPT.

Am J Manag Care. 2024;30(9):e258-e265. https://doi.org/10.37765/ajmc.2024.89604

_____

Takeaway Points

  • This study demonstrated poor knowledge, positive attitudes, and proactive practices regarding ChatGPT among health care professionals.
  • The findings fill gaps left by previous studies and may help in the design of appropriate educational interventions for the proper use of ChatGPT among health care professionals.
  • Moreover, physician knowledge about artificial intelligence in general must be improved to avoid its potential dangers.

_____

On November 30, 2022, OpenAI launched ChatGPT (Chat Generative Pre-trained Transformer), a language model–based chatbot trained with supervised learning and reinforcement learning.1 ChatGPT allows users to refine and steer conversations toward the requested length, format, style, amount of detail, and language level.2 Users can ask ChatGPT to write text on a given subject of a selected length and with a language level suitable for the target audience.3,4 It also introduces variability for the same prompt, decreasing the likelihood of duplication. Of note, in academic research, ChatGPT can write introductions and abstracts for research papers.5-7 Some researchers have even listed ChatGPT as a coauthor.8 Journals such as Nature and the JAMA Network publications require authors to disclose the use of such tools, whereas Science has banned it.9 ChatGPT has several limitations that can hinder its use in academia, including algorithmic bias and offensiveness,10 plausible-sounding but incorrect answers (also called hallucinations),11,12 and limited knowledge of events and facts that occurred after September 2021.13-15 Besides writing text, ChatGPT has been used for clinical decision-making16-18; however, its value remains uncertain until its potential biases and limitations have been formally assessed.

Despite the increasing popularity and high performance of ChatGPT, its use in writing text can lead to the spread of misinformation.19 When using ChatGPT for academic or health care–related text, users must adequately evaluate its pros and cons to prevent misinformation.4,20 In addition, ChatGPT should not be used for clinical decision-making until formal validation is performed. Therefore, proper knowledge, attitudes, and practices (KAP) regarding ChatGPT are essential among health care professionals (HCPs) for its appropriate use. KAP studies provide quantitative and qualitative data on knowledge gaps, misconceptions, and misunderstandings that can hinder the proper, adequate, and optimal performance of a specific subject in a given population.21,22 Because ChatGPT is a relatively new tool that launched less than a year before this study was performed, limited KAP data are available. An ongoing study is examining ChatGPT KAP among pharmacists.23

Therefore, this study aimed to examine KAP levels among HCPs willing to use ChatGPT in hospitals in China.

METHODS

Study Design and Participants

This cross-sectional study enrolled HCPs working at the First Affiliated Hospital of Anhui Medical University in Hefei between August 2023 and September 2023. The ethics committee of the First Affiliated Hospital of Anhui Medical University approved this study. Written informed consent was obtained from the participants before they completed the questionnaire.

The inclusion criteria were (1) was an HCP, including but not limited to doctors, nurses, pharmacists, hospital administrators, and medical technicians; (2) held an active, valid medical professional license and had received systematic medical education; (3) was working full time or part time in a public or private health care organization in a medical or health care role; (4) had an understanding of artificial intelligence (AI) technology and its application possibilities; (5) was familiar with or had heard of machine learning models such as ChatGPT; (6) had used or desired to use AI technologies (including ChatGPT) to improve outcomes in health care work; (7) had self-educated or participated in lectures, seminars, and other activities on the application of AI technology in the medical industry; and (8) voluntarily participated in this study and had the time, willingness, and ability to complete the questionnaire as accurately and completely as possible.

The exclusion criteria were (1) did not hold a valid medical professional license or had not received a systematic medical education; (2) was not working in a health care organization or held a non–health care position in a health care organization; (3) lacked knowledge of AI technology, especially machine learning models such as ChatGPT; (4) was unwilling to accept or try to use AI technology in health care work; (5) had never explored the application of AI in the health care industry or participated in related learning and seminar activities; (6) was unable or unwilling to provide valid, authentic, and detailed personal opinions and work or practice data; or (7) was unable to understand the purpose and content of this study or to answer the questionnaire.

Questionnaire Design

A self-designed questionnaire containing 4 dimensions was developed and then modified to address the comments of 4 experts in medicine, medical informatics, and AI: some similar or repeated questions were deleted, and some unclear questions were adjusted and refined. Expert 1 was the CEO of an AI health care company with more than 20 years of experience in the application of AI in clinical scenarios. Expert 2 was president of the Chinese Medical Association Neurology Branch, with rich experience in the integration of AI and precision medicine. Expert 3 was a member of the Chinese Medical Association Neurology Committee and a senior medical consultant of an AI health care company. Expert 4 was an executive committee member of the Chinese Medical Association Neurology Branch and clinically proficient in the minimally invasive diagnosis and treatment of neurosurgical diseases. The experts were carefully selected based on their extensive experience in both medicine and AI. A pilot study was conducted among 30 participants, with a resulting Cronbach α value of 0.936, suggesting good internal consistency.

The final questionnaire consisted of inquiries about (1) demographic characteristics, including age, sex, occupation, education, department, hospital type, and experience with scientific research; (2) the ChatGPT knowledge dimension, which included 6 questions with answer choices of “very knowledgeable,” “heard of,” and “unclear” (2 points for very knowledgeable, 1 point for heard of, and 0 points for unclear); (3) the ChatGPT attitude dimension, which included 6 questions evaluated on a 5-point Likert scale ranging from strongly agree (5 points) to strongly disagree (1 point); and (4) the ChatGPT practice dimension, which consisted of 12 questions evaluated on a 5-point Likert scale ranging from always (5 points) to never (1 point). The practice dimension was subdivided into 4 aspects: P1 concerned ChatGPT as an auxiliary tool to improve work efficiency and patients’ experience with medical consultations, P2 concerned ChatGPT as an innovative tool to help advance medical research, P3 concerned ChatGPT assisting HCPs in improving work efficiency in certain situations and steps, and P4 concerned the possibilities of ChatGPT. Higher scores indicated more adequate knowledge, more positive attitudes, and more proactive practices.
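
For illustration only, this scoring scheme can be expressed as a short Python sketch. The knowledge response labels follow the questionnaire as described above; the intermediate Likert labels (e.g., "neutral") and the function names are our assumptions, because the article specifies only the scale endpoints.

```python
# Illustrative sketch of the questionnaire scoring described above.
# Intermediate Likert labels are assumptions; the article gives only
# the endpoints of each 5-point scale.

KNOWLEDGE_POINTS = {"very knowledgeable": 2, "heard of": 1, "unclear": 0}
ATTITUDE_POINTS = {"strongly agree": 5, "agree": 4, "neutral": 3,
                   "disagree": 2, "strongly disagree": 1}
PRACTICE_POINTS = {"always": 5, "often": 4, "sometimes": 3,
                   "rarely": 2, "never": 1}

def score_knowledge(answers):  # 6 items -> range 0-12
    return sum(KNOWLEDGE_POINTS[a] for a in answers)

def score_attitude(answers):   # 6 items -> range 6-30
    return sum(ATTITUDE_POINTS[a] for a in answers)

def score_practice(answers):   # 12 items -> range 12-60
    return sum(PRACTICE_POINTS[a] for a in answers)
```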

The online questionnaire was constructed using the WeChat-based Questionnaire Star app. A QR code was generated to collect data via WeChat, and participants were asked to log in and complete the questionnaire by scanning the QR code sent via WeChat. Research institutes or relevant departments of 10 target hospitals (covering Central, Southern, and Eastern China) were consulted and contacted by telephone; 2 of these hospitals were unable to participate. The final distribution and collection of questionnaires relied mainly on social media platforms such as WeChat and included the following steps: (1) creating the questionnaire: the questionnaire was made using the app, ensuring that the questions were appropriate for the target participants in form and content; (2) sharing the questionnaire: the generated link or QR code was shared in WeChat groups, on public accounts, and in Moments (WeChat is a social networking platform commonly used in China) or sent directly to specific WeChat contacts; (3) filling in the questionnaire: participants completed the questionnaire by clicking the link or scanning the QR code; and (4) collecting responses: after participants completed the questionnaire, the app automatically collected and organized their responses.

The study used the Bloom cutoff24: scores below 60% of the total score were considered poor, and scores of 60% or higher were considered adequate (knowledge), positive (attitude), or proactive (practice). This study evaluated the overall KAP levels of the participants rather than the KAP of any given individual.
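
As a worked example (a sketch; the helper function is ours, not the study's code), applying the 60% Bloom cutoff to the mean scores reported in the Results reproduces the classifications of poor knowledge, positive attitudes, and proactive practices:

```python
def bloom_level(mean_score, max_score, cutoff=0.60):
    """Classify a dimension's mean score against the Bloom 60% cutoff."""
    pct = mean_score / max_score
    return pct, ("poor" if pct < cutoff else "adequate/positive/proactive")

print(bloom_level(6.71, 12))   # ~(0.5592, 'poor') -> knowledge, 55.92%
print(bloom_level(21.27, 30))  # ~(0.7090, 'adequate/...') -> attitude, 70.90%
print(bloom_level(47.91, 60))  # ~(0.7985, 'adequate/...') -> practice, 79.85%
```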

Quality Control

All items were mandatory for submission to ensure the quality and completeness of the questionnaire results. The research team members checked all questionnaires for completeness, internal consistency, and reasonableness.

Two research assistants participated in the distribution and collection of the questionnaires after training, which included the following: (1) clarification of research objectives to ensure the research assistants understood the goals and objectives of the study and the importance of the questionnaire to make appropriate decisions during implementation; (2) questioning techniques, including how to ask questions accurately and clearly and how to avoid ambiguity or misinterpretation during data collection; (3) rules for questionnaire distribution, which encompassed detailed information on how to distribute the questionnaires, including the definition of the target population and how to achieve a sample balance for statistical significance; (4) how to properly collect and record data to ensure data completeness and accuracy; (5) ethical guidance on the importance of adhering to research ethics, such as obtaining explicit consent from respondents and ensuring the confidentiality of respondents’ information; and (6) simulation practice on the questionnaire distribution process to ensure that the survey would be conducted effectively and professionally.

Questionnaires were excluded if they showed poor data quality (the participant chose answers randomly or gave conflicting answers to multiple similar questions), abnormal answers (answers that were extreme outliers or differed markedly from those of the majority), or implausibly short completion times, or if the respondent withdrew after responding to the questionnaire.

Statistical Analysis

Stata 17.0 (StataCorp LLC) was used for statistical analysis. Continuous variables were described as mean (SD) and analyzed using t tests or analysis of variance. Categorical variables were described as n (%). Pearson correlation analysis was used to assess the correlations between KAP scores. Structural equation modeling (SEM) was used to test the following hypotheses regarding the interrelationships between the KAP dimensions: (1) knowledge had a direct effect on attitude, (2) knowledge had a direct effect on the 4 parts of practice, and (3) attitude had a direct effect on the 4 parts of practice. Practice was analyzed in 4 parts so that the different concepts were analyzed separately. A 2-sided P value less than .05 indicated statistical significance.
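
The study used Stata; purely as an illustration of this kind of analysis, the following Python sketch runs a Pearson correlation and an SEM with the same prespecified path structure using scipy and the third-party semopy package. The file name and column names are hypothetical assumptions, not the study's data.

```python
import pandas as pd
from scipy.stats import pearsonr
import semopy  # third-party SEM package; pip install semopy

# Hypothetical data: one row per respondent with total dimension scores
# named knowledge, attitude, and practice1..practice4.
df = pd.read_csv("kap_scores.csv")

r, p = pearsonr(df["knowledge"], df["attitude"])  # correlation between dimensions

# Structural model mirroring the hypotheses above:
# knowledge -> attitude, and knowledge + attitude -> each practice part.
model_desc = """
attitude ~ knowledge
practice1 ~ knowledge + attitude
practice2 ~ knowledge + attitude
practice3 ~ knowledge + attitude
practice4 ~ knowledge + attitude
"""
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())           # path coefficients (beta) and P values
print(semopy.calc_stats(model))  # fit indices such as CFI and TLI
```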

RESULTS

Among the 551 questionnaires handed out, 8 were excluded because the HCPs declined participation, resulting in 543 valid questionnaires included in this analysis (Table 1). The participants were mostly female HCPs (n = 312; 57.46%), doctors (n = 427; 78.64%), and staff working at public tertiary hospitals (n = 352; 64.83%). In addition, 260 (47.88%) had no published research, and 381 (70.17%) had no experience with scientific projects. Mean (SD) knowledge, attitude, and practice scores were 6.71 (3.24) (range, 0-12), 21.27 (2.73) (range, 6-30), and 47.91 (8.17) (range, 12-60), respectively, indicating poor knowledge (55.92%), positive attitudes (70.90%), and proactive practices (79.85%). Male participants, other medical practitioners, and participants from departments other than clinical or medical technology departments tended to have higher knowledge scores. Male participants also tended to have higher practice scores than female participants.

The knowledge item with the lowest score was K5 (70.72% chose “very knowledgeable” or “heard of”; “Are you familiar with the tips for using ChatGPT?”), and the item with the highest score was K1 (92.63% chose “very knowledgeable” or “heard of”; “ChatGPT is a natural language processing system based on artificial intelligence technology and developed by OpenAI; it can understand and generate human language by processing a large amount of language data, thus achieving intelligent conversation interaction.”) (eAppendix Figure 1 [eAppendix available at ajmc.com]). The attitude item with the lowest score was A1 (34.44% chose “strongly agree” or “agree”; “Despite ChatGPT’s continuous learning and improvement to enhance accuracy, I still believe it cannot be fully trusted.”), whereas the attitude item with the highest score was A3.2 (92.27% chose “strongly agree” or “agree”; “It may improve daily work efficiency.”) (eAppendix Figure 2). The practice item with the lowest score was P1.1 (66.66% chose “always” or “often”; “As a tool for communication with patients”), and the item with the highest score was P3.2 (82.69% chose “always” or “often”; “Assisting health care staff in simple administrative work”). In addition, a large proportion of participants (82.14% chose “always” or “often”) supported ChatGPT in assisting researchers with quickly obtaining necessary medical literature (eAppendix Figure 3) (Table 2 [parts A and B]).

Pearson correlation analysis showed that the knowledge scores were positively correlated with attitude (Pearson r = 0.216; P < .001) and practice (Pearson r = 0.283; P < .001) scores and that the attitude scores were positively correlated with the practice scores (Pearson r = 0.479; P < .001) (Table 3). SEM (Table 4 and Figure) showed that knowledge directly positively influenced attitude (β = 0.498; P < .001) but directly negatively influenced P1 (β = –0.301; P < .001), P2 (β = –0.436; P < .001), P3 (β = –0.338; P < .001), and P4 (β = –0.242; P < .001). Attitude directly positively influenced P1 (β = 1.430; P < .001), P2 (β = 1.581; P < .001), P3 (β = 1.513; P < .001), and P4 (β = 1.387; P < .001). The incremental fit index (0.863 > 0.8), Tucker-Lewis index (0.844 > 0.8), and comparative fit index (0.862 > 0.8) showed good model fit.

DISCUSSION

This study explored KAP regarding ChatGPT among HCPs in China willing to use the chatbot, and our results suggest that these HCPs had poor knowledge, positive attitudes, and proactive practices regarding ChatGPT. These findings might provide valuable evidence for the application of ChatGPT in clinical practice.

ChatGPT is a potentially powerful AI tool that could help with clinical decision-making16-18 and manuscript writing3,4; however, it has several biases and potential dangers. ChatGPT is subject to hallucinations (the propensity to provide answers that seem plausible but are incorrect)11,12 and has several biases that could affect its answers.10,13-15 Therefore, positive attitudes and proactive practices must be addressed with care.

The Pearson correlation analysis showed that the KAP dimensions were positively correlated, but the SEM analysis indicated that although knowledge had a positive impact on attitude, it had a negative direct impact on practice, which is compatible with the positive overall correlation because of the stronger positive indirect effect through attitude. Interestingly, our study found that greater knowledge of ChatGPT was negatively associated with its use among HCPs. This suggests that a better understanding of the possibilities, limitations, and potential dangers of AI may lead to more cautious behavior and less proactive use of ChatGPT. As HCPs become more aware of its deficiencies, they may develop more realistic expectations about its role in medicine and hospitals, resulting in a more restrained approach to its adoption. Male participants, medical technicians, and participants from medical technology departments scored higher on practice. These results are consistent with those of previous studies reporting higher use of AI by male vs female individuals.25

ChatGPT was launched in November 2022, and the present manuscript was written in August 2023; besides 1 ongoing study, no data were available on ChatGPT KAP among HCPs during this period.23 However, findings from a study in Pakistan showed that most medical students and physicians had poor knowledge of AI but favorable attitudes and practices.26 This discrepancy could be because our study only included participants who used or planned to use ChatGPT. Consistent with our results, Temsah et al found that HCPs were reluctant to use ChatGPT and other chatbots for diagnosis or treatment, stating that such tools should not be used without expert supervision until their trustworthiness has been proved.27 Howard et al found that issues with situational awareness, inference, and consistency were major factors hindering the use of chatbots in clinical settings.28 Furthermore, there are issues related to credibility, information sources, medicolegal considerations, resistance to use, patient confidentiality, and personalized care.11,27,28 Findings from previous studies indicate that many HCPs use ChatGPT to generate educational materials for patients and communities quickly29-31; however, inaccuracies could result in misinformation, which is why such materials should be carefully reviewed.32

In this study, the practice dimension was subdivided into “improving work efficiency and patient experience,” “ChatGPT as an innovative tool to help advance medical research,” “assisting health care professionals in improving work efficiency in certain situations and steps,” and the “possibilities of ChatGPT.” Our results show that HCPs were more willing to use ChatGPT to answer common patient questions and manage appointments than as a communication tool. In addition, they mostly identified ChatGPT as a tool used to quickly obtain medical literature, which should be done with caution given the propensity of ChatGPT to hallucinate11,12—potentially resulting in the use of false or inaccurate information to treat patients. ChatGPT is a promising tool that could improve efficiency, and many HCPs in our study were willing to incorporate it into their practice.

AI in medicine has several interesting uses, including summarizing knowledge on a specific subject, automated decision-making, and patient triage. ChatGPT is a conversational tool that can mainly be used to summarize knowledge and write text. Although ChatGPT currently has limited application in medicine, it is only the tip of the iceberg in relation to the forthcoming novel applications of AI in medicine. Still, evaluating KAP for more elaborate applications of AI in medicine is currently not possible because such AI applications are still mostly experimental and unavailable to the public. Importantly, our study findings indicate that physician knowledge about AI must be improved to avoid potential dangers. AI systems are not perfect, so caution must be taken when using them.

Limitations

Although this was a multicenter study, the sample size was relatively small, limiting the generalizability of our results. In addition, HCPs who had never used ChatGPT or were not planning to use it were excluded, which biased the results. This decision was made because we observed in the pilot survey that those who had never used ChatGPT or were reluctant to use it were unable to answer or understand some questions, possibly because those who designed and evaluated the survey had experience with ChatGPT or a willingness to use it. Also, this was a cross-sectional study, preventing the analysis of causality. Further, although SEM analysis was used to explore the relationships between the KAP dimensions and other variables, the results must be considered with caution because the SEM analysis was based on prespecified hypotheses and the results were statistically inferred.33,34 Additionally, all KAP studies are subject to social desirability bias.

CONCLUSIONS

HCPs in China willing to use ChatGPT appear to have poor knowledge, positive attitudes, and proactive practices regarding this AI tool. These results might help in the design of appropriate educational interventions for the proper and ethical use of ChatGPT among HCPs. 

Author Affiliations: School of Information and Computer Science, Anhui Agricultural University (YL), Hefei, China; Department of Neurosurgery, First Affiliated Hospital of Anhui Medical University (ZL), Hefei, China.

Source of Funding: None.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (YL); acquisition of data (YL); analysis and interpretation of data (YL); drafting of the manuscript (YL); critical revision of the manuscript for important intellectual content (ZL); statistical analysis (YL); provision of patients or study materials (YL); administrative, technical, or logistic support (ZL); and supervision (ZL).

Address Correspondence to: Zhongying Li, MS, Department of Neurosurgery, First Affiliated Hospital of Anhui Medical University, Jixi 218, Hefei 230022, China. Email: llzkvermouth@163.com.

REFERENCES

1. Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med Educ. 2023;9:e46885. doi:10.2196/46885

2. Lock S. What is AI chatbot phenomenon ChatGPT and could it replace humans? Guardian. December 5, 2022. Accessed January 16, 2023. https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans

3. Heilweil R. AI is finally good at stuff, and that’s a problem. Vox. December 7, 2022. Accessed January 16, 2023. https://www.vox.com/recode/2022/12/7/23498694/ai-artificial-intelligence-chat-gpt-openai

4. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313. doi:10.1126/science.adg7879

5. Bushard B. Fake scientific abstracts written by ChatGPT fooled scientists, study finds. Forbes. January 10, 2023. Accessed February 3, 2023. https://www.forbes.com/sites/brianbushard/2023/01/10/fake-scientific-abstracts-written-by-chatgpt-fooled-scientists-study-finds/

6. Biswas S. ChatGPT and the future of medical writing. Radiology. 2023;307(2):e223312. doi:10.1148/radiol.223312

7. Else H. Abstracts written by ChatGPT fool scientists. Nature. 2023;613(7944):423. doi:10.1038/d41586-023-00056-7

8. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613(7945):620-621. doi:10.1038/d41586-023-00107-z

9. Brainard J. As scientists explore AI-written text, journals hammer out policies. Science. February 22, 2023. Accessed February 22, 2023. https://www.science.org/content/article/scientists-explore-ai-written-text-journals-hammer-policies

10. Hosseini M, Horbach SPJM. Fighting reviewer fatigue or amplifying bias? considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Res Integr Peer Rev. 2023;8(1):4. doi:10.1186/s41073-023-00133-5

11. Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus. 2023;15(2):e35179. doi:10.7759/cureus.35179

12. Athaluri SA, Manthena SV, Kesapragada VSR, Yarlagadda V, Dave T, Duddumpudi RTS. Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus. 2023;15(4):e37432. doi:10.7759/cureus.37432

13. Das D, Kumar N, Longjam LA, et al. Assessing the capability of ChatGPT in answering first- and second-order knowledge questions on microbiology as per competency-based medical education curriculum. Cureus. 2023;15(3):e36034. doi:10.7759/cureus.36034

14. Hamed E, Sharif A, Eid A, Alfehaidi A, Alberry M. Advancing artificial intelligence for clinical knowledge retrieval: a case study using ChatGPT-4 and link retrieval plug-in to analyze diabetic ketoacidosis guidelines. Cureus. 2023;15(7):e41916. doi:10.7759/cureus.41916

15. Shay D, Kumar B, Bellamy D, et al. Assessment of ChatGPT success with specialty medical knowledge using anaesthesiology board examination practice questions. Br J Anaesth. 2023;131(2):e31-e34. doi:10.1016/j.bja.2023.04.017

16. Chiesa-Estomba CM, Lechien JR, Vaira LA, et al. Exploring the potential of Chat-GPT as a supportive tool for sialendoscopy clinical decision making and patient information support. Eur Arch Otorhinolaryngol. 2024;281(4):2081-2086. doi:10.1007/s00405-023-08104-8

17. Haemmerli J, Sveikata L, Nouri A, et al. ChatGPT in glioma adjuvant therapy decision making: ready to assume the role of a doctor in the tumour board? BMJ Health Care Inform. 2023;30(1):e100775. doi:10.1136/bmjhci-2023-100775

18. Lukac S, Dayan D, Fink V, et al. Evaluating ChatGPT as an adjunct for the multidisciplinary tumor board decision-making in primary breast cancer cases. Arch Gynecol Obstet. 2023;308(6):1831-1844. doi:10.1007/s00404-023-07130-5

19. Monteith S, Glenn T, Geddes JR, Whybrow PC, Achtyes E, Bauer M. Artificial intelligence and increasing misinformation. Br J Psychiatry. 2024;224(2):33-35. doi:10.1192/bjp.2023.136

20. The Lancet Digital Health. ChatGPT: friend or foe? Lancet Digit Health. 2023;5(3):e102. doi:10.1016/S2589-7500(23)00023-7

21. Andrade C, Menon V, Ameen S, Praharaj SK. Designing and conducting knowledge, attitude, and practice surveys in psychiatry: practical guidance. Indian J Psychol Med. 2020;42(5):478-481. doi:10.1177/0253717620946111

22. Advocacy, Communication and Social Mobilization for TB Control: A Guide to Developing Knowledge, Attitude and Practice Surveys. World Health Organization; 2008. Accessed November 22, 2022. https://iris.who.int/bitstream/handle/10665/43790/9789241596176_eng.pdf

23. Mohammed M, Kumar N, Zawiah M, et al. Psychometric properties and assessment of knowledge, attitude, and practice towards ChatGPT in pharmacy practice and education: a study protocol. J Racial Ethn Health Disparities. 2024;11(4):2284-2293. doi:10.1007/s40615-023-01696-1

24. Bloom BS. Data from: Learning for mastery: instruction and curriculum. Regional Education Laboratory for the Carolinas and Virginia, topical papers and reprints, number 1. Evaluation Comment. 1968;1(2). Education Resources Information Center. Accessed February 3, 2023. https://eric.ed.gov/?id=ED053419

25. Daraz L, Chang BS, Bouseh S. Inferior: the challenges of gender parity in the artificial intelligence ecosystem-a case for Canada. Front Artif Intell. 2022;5:931182. doi:10.3389/frai.2022.931182

26. Ahmed Z, Bhinder KK, Tariq A, et al. Knowledge, attitude, and practice of artificial intelligence among doctors and medical students in Pakistan: a cross-sectional online survey. Ann Med Surg (Lond). 2022;76:103493. doi:10.1016/j.amsu.2022.103493

27. Temsah MH, Aljamaan F, Malki KH, et al. ChatGPT and the future of digital health: a study on healthcare workers’ perceptions and expectations. Healthcare (Basel). 2023;11(13):1812. doi:10.3390/healthcare11131812

28. Howard A, Hope W, Gerada A. ChatGPT and antimicrobial advice: the end of the consulting infection doctor? Lancet Infect Dis. 2023;23(4):405-406. doi:10.1016/S1473-3099(23)00113-5

29. Alhasan K, Al-Tawfiq J, Aljamaan F, Jamal A, Al-Eyadhy A, Temsah MH. Mitigating the burden of severe pediatric respiratory viruses in the post-COVID-19 era: ChatGPT insights and recommendations. Cureus. 2023;15(3):e36263. doi:10.7759/cureus.36263

30. Alhasan K, Raina R, Jamal A, Temsah MH. Combining human and AI could predict nephrologies future, but should be handled with care. Acta Paediatr. 2023;112(9):1844-1848. doi:10.1111/apa.16867

31. Temsah MH, Jamal A, Al-Tawfiq JA. Reflection with ChatGPT about the excess death after the COVID-19 pandemic. New Microbes New Infect. 2023;52:101103. doi:10.1016/j.nmni.2023.101103

32. Goodman RS, Patrinely JR Jr, Osterman T, Wheless L, Johnson DB. On the cusp: considering the impact of artificial intelligence language models in healthcare. Med. 2023;4(3):139-140. doi:10.1016/j.medj.2023.02.008

33. Beran TN, Violato C. Structural equation modeling in medical research: a primer. BMC Res Notes. 2010;3:267. doi:10.1186/1756-0500-3-267

34. Fan Y, Chen J, Shirkey G. Applications of structural equation modeling (SEM) in ecological studies: an updated review. Ecol Process. 2016;5:19. doi:10.1186/s13717-016-0063-3
