Editorial

Use of AI in Family Medicine Publications: A Joint Editorial From Journal Editors

Sarina Schrager, Dean A. Seehusen, Sumi Sexton, Caroline R. Richardson, Jon Neher, Nicholas Pimlott, Marjorie A. Bowman, José Rodríguez, Christopher P. Morley, Li Li and James Dom Dera
The Annals of Family Medicine January 2025, 23 (1) 1-4; DOI: https://doi.org/10.1370/afm.240575
Sarina Schrager, MD, MS, Editor in Chief, Department of Family Medicine and Community Health, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
Dean A. Seehusen, MD, MPH, Deputy Editor, Department of Family and Community Medicine, Medical College of Georgia, Augusta University, Augusta, Georgia
Sumi Sexton, MD, Editor in Chief, Georgetown University School of Medicine, Washington, DC
Caroline R. Richardson, MD, Editor in Chief, Warren Alpert Medical School, Brown University, Providence, Rhode Island
Jon Neher, MD, Editor in Chief, Valley Family Medicine Residency Program, Renton, Washington
Nicholas Pimlott, MD, Scientific Editor, Department of Family and Community Medicine, University of Toronto, Toronto, Ontario, Canada
Marjorie A. Bowman, MD, Editor in Chief, Veterans Health Administration, Washington, DC
José Rodríguez, MD, Deputy Editor, Department of Family and Preventive Medicine, Spencer Fox Eccles School of Medicine, University of Utah Health, Salt Lake City, Utah
Christopher P. Morley, PhD, Editor in Chief, Department of Public Health and Preventive Medicine and Family Medicine, SUNY Upstate Medical University, Syracuse, New York
Li Li, MD, PhD, MPH, Editor in Chief, Department of Family Medicine, University of Virginia, Charlottesville, Virginia
James Dom Dera, MD, Medical Editor, Pioneer Physicians Network, Fairlawn, Ohio
Key words:
  • artificial intelligence
  • ChatGPT
  • family medicine
  • guidelines
  • large language models
  • unified framework

There are multiple guidelines from publishers and organizations on the use of artificial intelligence (AI) in publishing.1-5 However, none are specific to family medicine. Most journals have some basic AI use recommendations for authors, but more explicit direction is needed, as not all AI tools are the same.

As family medicine journal editors, we want to provide a unified statement about AI in academic publishing for authors, editors, publishers, and peer reviewers based on our current understanding of the field. The technology is advancing rapidly. While text generated from early large language models (LLMs) was relatively easy to identify, text generated from newer versions is getting progressively better at imitating human language and more challenging to detect. Our goal is to develop a unified framework for managing AI in family medicine journals. As this is a rapidly evolving environment, we acknowledge that any such framework will need to continue to evolve. However, we also feel it is important to provide some guidance for where we are today.

Definitions: Artificial intelligence is a broad field in which computers perform tasks historically thought to require human intelligence. LLMs are a recent breakthrough in AI that allow computers to generate text that reads as if it were written by a human. LLMs deal with language generation, while the broader term generative AI also covers AI-generated images and figures. ChatGPT is one of the earliest and most widely used LLMs, but other companies have developed similar products. LLMs “learn” by performing a multifaceted analysis of word sequences in a massive text training database, then generate new sequences of words using a complex probability model. The model has a random component, so responses to the exact same prompt submitted multiple times will not be identical. LLMs can generate text that looks like a medical journal article in response to a prompt, but the article’s content may or may not be accurate. LLMs may “confabulate,” generating convincing text that includes false information.6-8 LLMs do not search the internet for answers to questions, although they have been paired with search engines in increasingly sophisticated ways. For the rest of this editorial, we use the broad term AI synonymously with LLMs.
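The probability model and its random component can be illustrated with a minimal sketch. The vocabulary and probabilities below are invented for illustration only (a real LLM computes a distribution over tens of thousands of tokens with a neural network); the sampling step is the same idea, and it is why the same prompt can produce different text on each submission.

```python
import random

# Hypothetical next-word probabilities, invented for illustration;
# not drawn from any real model.
next_word_probs = {
    "diabetes": 0.40,
    "hypertension": 0.25,
    "asthma": 0.20,
    "influenza": 0.15,
}

def sample_next_word(probs, rng):
    """Draw one word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# An unseeded generator behaves like repeated prompts: each run can differ.
rng = random.Random()
samples = [sample_next_word(next_word_probs, rng) for _ in range(5)]
print(samples)  # 5 words drawn from the distribution above
```

Because the draw is weighted rather than deterministic, "diabetes" is merely the most likely continuation, not a guaranteed one; only a fixed random seed makes the output reproducible.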

ROLE OF LARGE LANGUAGE MODELS IN ACADEMIC WRITING AND RESEARCH

As LLM tools are updated and authors and researchers become familiar with them, they will undoubtedly become more useful in the research and writing process by improving efficiency and consistency. However, research on the best use of these tools in publication is still lacking. A systematic review exploring the role of ChatGPT in literature searches found that most articles on the topic are commentaries, blog posts, and editorials, with little peer-reviewed research.9 Some studies have demonstrated benefit in narrowing the scope of a literature review when AI tools were applied to large sets of studies and prompted to evaluate them for inclusion based on title and abstract. One paper reported that AI identified relevant studies with 70% accuracy compared with human researchers and suggested that it may reduce time and provide a less subjective approach to literature review.10-12 In another study, background sections written by LLMs were rated as good as or better than those written by human researchers, but the citations were consistently false.13 When generating citations, LLMs frequently fail to provide real papers or to match authors to their own work; they therefore risk creating fictitious citations that appear convincing despite incorrect information, including digital object identifier (DOI) numbers.6,14

Studies evaluating perceptions of AI use in academic journals, and the strengths and weaknesses of the tools, revealed no agreement on how to report the use of AI tools.15 There are many tools: some improve grammar, for example, while others generate content, yet parameters distinguishing substantive from non-substantive use are lacking. Furthermore, current AI detection tools cannot adequately distinguish use types.15 Reported benefits include reduced workload and the ability to summarize data efficiently, whereas weaknesses include variable accuracy, plagiarism, and deficient application of evidence-based medicine standards.7,16

Guidelines on appropriate AI use exist, such as the “Living Guidelines on the Responsible Use of Generative AI in Research” produced by the European Commission.17 These guidelines include steps for researchers, organizations, and funders. The fundamental principles for researchers are to maintain ultimate responsibility for content; apply AI tools transparently; carefully evaluate privacy, intellectual property, and applicable legislation; continuously learn how best to use AI tools; and refrain from using the tools in activities that directly affect other researchers and groups.17 While these are helpful starting points, family medicine publishers can collaborate on best practices for using AI tools and help define substantive, reportable use, while acknowledging the current limitations of various tools and understanding that they will continue to evolve. Family medicine journals do not have unique AI needs compared with other journals, but a joint statement of principles from all of the editors is a unique model.

Guiding Principles for Using AI in Family Medicine Research and Publishing

For authors

Disclose any use of AI or LLMs in the research or writing process and describe how it was used (eg, “I used ChatGPT to reduce the word count of my paper from 2,700 to 2,450”). Standard disclosure statements may be helpful; the JAMA Network guidance (Reporting Use of AI in Research and Scholarly Publication, https://jamanetwork.com/journals/jama/fullarticle/2816213) is an example.

Be accountable for ensuring the work is original and accurate. When using LLMs to generate text, for example, authors can unwittingly plagiarize existing work; authors remain ultimately responsible for the originality of their work.

Understand the limitations of LLMs (eg, erroneous citations).

Be aware of the potential for AI or LLMs to perpetuate bias.33

For journals and editorial teams

Explore ways AI can streamline the publication process at various stages.

Develop clear, transparent guidelines for authors and reviewers before using LLMs in publishing.

Do not allow LLMs to be cited as authors on manuscripts.

Develop methods to accurately evaluate the use of LLMs in the writing process (eg, detect plagiarism, assess the validity of references, and fact-check statements).

  • AI = artificial intelligence; ChatGPT = chat generative pre-trained transformer; JAMA = Journal of the American Medical Association; LLM = large language model

GUIDANCE FOR USE OF LLMS/AI IN FAMILY MEDICINE PUBLICATIONS

The core principles of scientific publishing will remain essentially unchanged by AI. For example, the criteria for authorship will remain the same: authors will still be required to be active participants in conceptualizing and producing scientific work, and writers and editors of manuscripts will be held accountable for the product.

Authors must still cite others’ work appropriately in their own scientific writing. How works are cited will likely change over time as AI use in publishing matures. It is impossible to accurately list all sources used to train a given AI product, but it is possible to cite where a fact came from or who originated a particular idea. Similarly, authors will still need to ensure that their final draft is sufficiently original and that they have not inadvertently plagiarized others’ work.1,18 Authors must remain well versed in the existing literature of their field.

IMPACT ON DEI EFFORTS

Since LLMs model text generation on a training data set, there is an inherent concern that they will learn biased arguments and then repeat them, thereby compounding bias.19 Because LLMs mimic human-created content, and a preponderance of biased, sexist, racist, and otherwise discriminatory content exists on the internet, this is a significant risk.20 Some companies now work in the LLM/AI space to eliminate biases from these models, but these efforts are in their infancy. Equality AI, for example, is developing “responsible AI to solve health care’s most challenging problems: inequity, bias and unfairness.”21 More investment is necessary to further remove bias from LLM/AI models. While some authors have touted AI and LLMs as bias-elimination tools, the fact that the results of such tools are not consistently reproducible has scholars questioning their utility. Successful deployment of an unbiased LLM/AI tool will depend on carefully examining and revising existing algorithms and the data used to train them.22 Excellent, unbiased algorithms have not yet been developed but might be in the future.23 On the positive side, AI tools can serve as a de facto editorial assistant, which may help globalize the publication process by helping non-native English speakers publish in English-language journals.

FUTURE DIRECTIONS

The use of LLMs and broader AI tools is expanding rapidly. There are opportunities at all levels of research, writing, and publishing to use AI to enhance our work. A key goal for all family medicine journals is to require authors to identify their use of LLMs and to ensure that the LLMs used provide highly accurate information and mitigate the frequency of confabulation. Research is ongoing to develop methods to determine the accuracy of LLM output.24 Editors and publishers must continue to advocate for accurate tools to validate the work of LLMs. Researchers should assess the performance of tools used in the writing process: for example, they should study the extent to which LLMs plagiarize, provide false citations, or generate false statements, and they should study the tools that detect these events.

AI tools are already being used by some publishers and editors to do initial screens of manuscripts and to match potential reviewers with submitted papers. The complex interplay between AI tools and humans is evolving.25 While AI will likely not replace human researchers, authors, reviewers, or editors, it continues to contribute to the publication process in myriad ways. We want to know more: “How can LLMs contribute to the publication process?” “Can authors ask LLMs to do literature searches or draft a paper?” “Can we train AI to contribute to the revision or review of a paper?” Probably yes, but we must scrutinize any AI-generated references, and we likely cannot train AI to evaluate conclusions or determine the impact of a specific paper in the field. Family medicine journals are publishing important papers on AI, not only about its use in research and publishing but also about its use in clinical practice,26-32 and this editorial is a call for more scholarship in this area.

Acknowledgments

We would like to acknowledge Dan Parente, Steven Lin, Winston Liaw, Renee Crichlow, Octavia Amaechi, Brandi White, and Sam Grammer for their helpful suggestions.

Footnotes

  • Annals Early Access article

  • Conflicts of interest: authors report none.


  • Received for publication November 19, 2024.
  • Accepted for publication November 19, 2024.
  • © 2025 Annals of Family Medicine, Inc.

References

  1. Zielinski C, Winker MA, Aggarwal R, et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. World Association of Medical Editors. Published Jan 20, 2023. Revised May 31, 2023. Accessed Oct 3, 2024. https://wame.org/page3.php?id=106
  2. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. International Committee of Medical Journal Editors. Updated Jan 2024. Accessed Oct 3, 2024. https://www.icmje.org/icmje-recommendations.pdf
  3. Adams L, Fontaine E, Lin S, Crowell T, Chung VCH, Gonzalez AA, eds. Artificial intelligence in health, health care, and biomedical science: an AI code of conduct principles and commitments discussion draft. National Academy of Medicine. Published April 8, 2024. doi: 10.31478/202403a
  4. COPE position statement: authorship and AI tools. Committee on Publication Ethics. Published Feb 13, 2023. Accessed Oct 3, 2024. https://publicationethics.org/cope-position-statements/ai-author
  5. Author instructions. New England Journal of Medicine editorial policies. Accessed Oct 3, 2024. https://www.nejm.org/about-nejm/editorial-policies
  6. Haider J, Söderström K, Ekström B, Rödl M. GPT-fabricated scientific papers on Google Scholar: key features, spread, and implications for preempting evidence manipulation. Harvard Kennedy School Misinformation Review. 2024;5(5). doi: 10.37016/mr-2020-156
  7. Ramoni D, Sgura C, Liberale L, Montecucco F, Ioannidis JPA, Carbone F. Artificial intelligence in scientific medical writing: legitimate and deceptive uses and ethical concerns. Eur J Intern Med. 2024;127:31-35. doi: 10.1016/j.ejim.2024.07.012
  8. Brender TD. Chatbot confabulations are not hallucinations-reply. JAMA Intern Med. 2023;183(10):1177-1178. doi: 10.1001/jamainternmed.2023.3875
  9. Parisi V, Sutton A. The role of ChatGPT in developing systematic literature searches: an evidence summary. J Eur Assoc Health Inf Libr. 2024;20(2):30-34. doi: 10.32384/jeahil20623
  10. Zimmermann R, Staab M, Nasseri M, Brandtner P. Leveraging large language models for literature review tasks: a case study using ChatGPT. In: Guarda T, Portela F, Diaz-Nafria JM, eds. Advanced Research in Technologies, Information, Innovation and Sustainability. ARTIIS 2023. Communications in Computer and Information Science. Vol 1935. Springer; 2024. doi: 10.1007/978-3-031-48858-0_25
  11. Dennstädt F, Zink J, Putora PM, Hastings J, Cihoric N. Title and abstract screening for literature reviews using large language models: an exploratory study in the biomedical domain. Syst Rev. 2024;13(1):158. doi: 10.1186/s13643-024-02575-4
  12. Guo E, Gupta M, Deng J, Park YJ, Paget M, Naugler C. Automated paper screening for clinical reviews using large language models: data analysis study. J Med Internet Res. 2024;26:e48996. doi: 10.2196/48996
  13. Huespe IA, Echeverri J, Khalid A, et al. Clinical research with large language models generated writing: Clinical Research with AI-Assisted Writing (CRAW) study. Crit Care Explor. 2023;5(10):e0975. doi: 10.1097/CCE.0000000000000975
  14. Byun C, Vasicek P, Seppi K. This reference does not exist: an exploration of LLM citation accuracy and relevance. In: Proceedings of the Third Workshop on Bridging Human–Computer Interaction and Natural Language Processing. Association for Computational Linguistics; 2024:28-39.
  15. Chemaya N, Martin D. Perceptions and detection of AI use in manuscript preparation for academic journals. PLoS One. 2024;19(7):e0304807. doi: 10.1371/journal.pone.0304807
  16. Gödde D, Nöhl S, Wolf C, et al. A SWOT (strengths, weaknesses, opportunities, and threats) analysis of ChatGPT in the medical literature: concise review. J Med Internet Res. 2023;25:e49368. doi: 10.2196/49368
  17. Directorate-General for Research and Innovation. Living guidelines on the responsible use of generative AI in research. European Commission; March 2024. Accessed Oct 3, 2024. https://research-and-innovation.ec.europa.eu/document/download/2b6cf7e5-36ac-41cb-aab50d32050143dc_en?filename=ec_rtd_ai-guidelines.pdf
  18. Kaebnick GE, Magnus DC, Kao A, et al. Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. Med Health Care Philos. 2023;26(4):499-503. doi: 10.1007/s11019-023-10176-6
  19. Saguil A. Chatbots and large language models in family medicine. Am Fam Physician. 2024;109(6):501-502.
  20. Andrew A. Potential applications and implications of large language models in primary care. Fam Med Community Health. 2024;12(Suppl 1):e002602. doi: 10.1136/fmch-2023-002602
  21. Our mission is to work with companies, policy makers and experts to reduce bias in our AI. EqualAI. Accessed Oct 3, 2024. https://www.equalai.org/about-us/mission/
  22. Manyika J, Silberg J, Presten B. What do we do about the biases in AI? Harvard Business Review; 2019. Accessed Oct 3, 2024. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
  23. About. Algorithmic Justice League. Accessed Oct 3, 2024. https://www.ajl.org/about
  24. Elkhatat AM, Elsaid K, Almeer S. Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. Int J Educ Integr. 2023;19(1):17. doi: 10.1007/s40979-023-00140-5
  25. Ranjbari D, Abbasgholizadeh Rahimi S. Implications of conscious AI in primary healthcare. Fam Med Community Health. 2024;12(Suppl 1):e002625. doi: 10.1136/fmch-2023-002625
  26. Parente DJ. Generative artificial intelligence and large language models in primary care medical education. Fam Med. 2024;56(9):534-540. doi: 10.22454/FamMed.2024.775525
  27. Waldren SE. The promise and pitfalls of AI in primary care. Fam Pract Manag. 2024;31(2):27-31.
  28. Hanna K, Chartash D, Liaw W, et al. Family medicine must prepare for artificial intelligence. J Am Board Fam Med. 2024;37(4):520-524. doi: 10.3122/jabfm.2023.230360R1
  29. Hanna K. Exploring the applications of ChatGPT in family medicine education: five innovative ways for faculty integration. PRiMER. 2023;7:26. doi: 10.22454/PRiMER.2023.985351
  30. Kueper JK, Terry AL, Zwarenstein M, Lizotte DJ. Artificial intelligence and primary care research: a scoping review. Ann Fam Med. 2020;18(3):250-258. doi: 10.1370/afm.2518
  31. Hake J, Crowley M, Coy A, et al. Quality, accuracy, and bias in ChatGPT-based summarization of medical abstracts. Ann Fam Med. 2024;22(2):113-120. doi: 10.1370/afm.3075
  32. Kueper JK, Emu M, Banbury M, et al. Artificial intelligence for family medicine research in Canada: current state and future directions: report of the CFPC AI Working Group. Can Fam Physician. 2024;70(3):161-168. doi: 10.46747/cfp.7003161
  33. Sheng E, Chang K-W, Natarajan P, Peng N. The woman worked as a babysitter: on biases in language generation. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics; 2019:3407-3412. doi: 10.18653/v1/D19-1339
