How Medical Journals Are Grappling With AI in Peer Review

A new study revealed divergent policies on artificial intelligence (AI) use in peer review among leading medical journals, with confidentiality concerns driving prohibitions.

The rise of artificial intelligence (AI) has introduced both opportunities and challenges to the world of medical publishing, especially in the peer review process.1

According to new research published in JAMA Network Open, 78% of the top 100 medical journals now provide guidance on AI-assisted peer review. However, these policies vary widely, with most focused on safeguarding confidentiality. The study found that 46 of the top 100 journals explicitly prohibit the use of AI in peer review, while 32 permit limited use under strict conditions, such as requiring reviewers to disclose AI involvement and to respect confidentiality and authorship rights.

46 of the 100 journals explicitly prohibit the use of AI in peer review. | Image credit: LALAKA – stock.adobe.com

“Despite GenAI’s potential benefits to enhance review efficiency, concerns remain about its inherent problems, which could lead to biases and confidentiality breaches,” the authors wrote.

Among the journals that offer AI guidance, 91% prohibit uploading manuscript-related content to AI tools, reflecting fears of data leaks and privacy breaches. Specific AI technologies were named as well: chatbots and large language models—such as ChatGPT—were mentioned in 47% and 27% of the guidance documents, respectively.

Why Do Journals’ AI Policies Vary So Much?

AI policies differed across publishers and regions. Publishers like Wiley and Springer Nature allowed limited AI use, while Elsevier and Cell Press maintained stricter prohibitions.

“Internationally based medical journals are more likely to permit limited use than journals’ editorial located in the US or Europe, and mixed publishers had the highest proportion of prohibition on AI use,” the authors said.

Interestingly, 22% of journals linked to external statements from organizations such as the International Committee of Medical Journal Editors or the World Association of Medical Editors that permit limited AI use. However, 5 journals' policies contradicted the very statements they linked to, highlighting the lack of consensus in the field.

“This divergence in policy may be the ultimate reason for the observed variations in guidance,” the authors noted.

While 32% of journals permit limited AI use, standards for disclosing AI involvement varied, and important areas such as innovation, reproducibility, and reference management remain underexplored. Additionally, scattered AI-related guidance creates challenges for reviewers, potentially leading to misuse or confidentiality breaches. Confidentiality was the leading reason for prohibiting AI use, cited by 96% of journals with such policies. Experts suggest that clearer editorials and better adherence to AI usage policies could address these issues.2

While AI is unlikely to replace human peer review, its role is expected to expand as the technology advances.1

“Used safely and ethically, AI can increase productivity and innovation,” the authors said. “Thus, continuous monitoring and regular assessment of AI’s impact are essential for updating guidance, thereby maintaining high-quality peer review.”

As AI continues to evolve, medical journals face the challenge of balancing its benefits with potential risks, ensuring that the peer review process remains both rigorous and ethical. This study offers a snapshot of the current landscape, signaling the need for greater collaboration and standardization in crafting AI-related policies.

References

1. Li ZQ, Xu HL, Cao HJ, Liu ZL, Fei YT, Liu JP. Use of artificial intelligence in peer review among top 100 medical journals. JAMA Netw Open. 2024;7(12):e2448609. doi:10.1001/jamanetworkopen.2024.48609
2. Flanagin A, Kendall-Taylor J, Bibbins-Domingo K. Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA. 2023;330(8):702-703. doi:10.1001/jama.2023.12500