Keywords
artificial intelligence, AI, natural language processing, large language models, AI-generated text
Abstract
This editorial provides an overview of large language models (LLMs), the risks associated with their use, and the challenges involved in using artificial intelligence (AI) to determine the extent to which LLMs have been used to write text, including articles for publication in medical journals. As narratives generated by LLMs become increasingly difficult to distinguish from human writing, concerns have emerged about their impact on scholarly communication, particularly in health and medicine. The medical community is becoming more aware of various tools that can detect AI-generated text; however, adopting these tools comes with unique challenges. The purpose of this article is to provide readers with an understanding of how AI text detectors work, the limitations of these tools, and recommendations for what medical editors, reviewers, and readers can do to navigate these challenges, along with future directions to help safeguard the integrity of scholarly work.
Recommended Citation
Mudipalli H, Milburn M, Saha S, Fairclough J. Charting truth, trust, and transformers: a critical look at AI text detection and recommendations for medical journals. J Patient Cent Res Rev. 2025;12:208-12.
April 28th, 2025
July 2nd, 2025