Annals of Family Medicine
Editorial

Why ChatGPT Should Not Be Used to Write Academic Scientific Manuscripts for Publication

The Annals of Family Medicine March 2023, 2958; DOI: https://doi.org/10.1370/afm.2982

Published eLetters


  • Published on: (17 November 2024)
    Why LLMs Should Not Generate Ideas for Academic Manuscripts: The Irreplaceable Role of Lived Experience and Intersectionality
    • Ezra N. S. Lockhart, Systemic Psychotherapist, Easy Does It Counseling, p.c.

    The response generated by the language model, ChatGPT, acknowledges certain limitations regarding the generation of academic manuscripts, but it fails to address a critical issue: the lack of lived experience and the inability to incorporate intersectional perspectives into the research process.

    LLMs, by design, process and produce text based on patterns and probabilities derived from vast datasets, but they do not have embodied, lived experiences. This is particularly problematic in fields that require deep engagement with the complexities of human identity, lived realities, and social contexts—areas where understanding the intersections of race, gender, class, sexuality, and other social categories is essential. In such fields, knowledge is not just theoretical; it is informed by the personal, social, and historical contexts that shape individuals' experiences and interactions with systems of power.

    In academic research, particularly in areas such as social sciences, health disparities, and justice studies, scholars must grapple with how different axes of identity intersect and affect lived experiences. These intersections are not static or easily reducible to data points; they shape how individuals experience privilege, oppression, and marginalization in ways that are nuanced and context-specific. An LLM, however, cannot replicate or understand these intersections because it lacks any capacity for subjective experience or awareness of the sociopolitical dynamics that shape these categories. Without a deep understanding of how identity and systemic power function in particular settings, LLM-generated ideas could easily reproduce reductive or inaccurate representations of complex, lived realities.

    Moreover, academic work often involves iterative processes of revision and critique, particularly when challenging normative assumptions or engaging with marginalized voices. In feminist, queer, and decolonial methodologies, for instance, scholars frequently revise their approaches in response to ethical considerations, new perspectives, or evolving understandings of power dynamics within the research. LLMs are not equipped to engage in these processes of critical reflection. They do not have a sense of moral responsibility, nor can they navigate the ethical considerations that arise when conducting research involving vulnerable populations. This absence of reflexivity means that an LLM's contribution to academic writing would lack the ethical depth required for producing meaningful, socially responsible scholarship.

    Lastly, the model's response mentions that ChatGPT cannot ensure the accuracy or validity of scientific claims, which is certainly true. However, this limitation is not simply technical; it is deeply tied to the model’s lack of capacity to critically engage with how knowledge production is shaped by human subjectivity, positionality, and ethical responsibility. In academic fields that deal with questions of power, identity, and inequality, human researchers must bring their lived experiences, ethical considerations, and engagement with diverse perspectives into the research process—something that a language model, devoid of agency or experience, is fundamentally incapable of doing.

    In conclusion, while ChatGPT can provide some assistance with basic drafting or summarizing existing knowledge, relying on it to generate ideas for academic manuscripts is problematic. The rich, contextualized understanding required for high-quality, ethically sound academic work, particularly in fields that examine human experience through the lens of intersectionality, cannot be replicated by a language model. Engaging with the complexities of identity and power requires not just technical writing skills but an awareness of the social, political, and historical forces that shape both the research process and the experiences of the individuals being studied. These are precisely the kinds of insights that LLMs are unable to provide.

    Competing Interests: None declared.
  • Published on: (10 February 2024)
    The Risks of Artificial Intelligence in Academic Medical Writing
    • Rebeca Tenajas, Medical Doctor, Master in Medicina Clínica, Family Medicine Department, Arroyomolinos Community Health Centre, Spain
    • Other Contributors:
      • David Miraut, Technical Leader in Medical Imaging & Computing


    To the Editor,

    We have carefully read the original editorial in the March 2023 issue of the Annals of Family Medicine, authored by ChatGPT itself [1], likely version 3.5 given the publication date.

    In our view, there are more reasons than those humbly acknowledged by ChatGPT regarding its own limitations. Firstly, it fails to adhere to the journal authorship policies (updated in January of the same year), as indicated in the disclaimer of the editorial, which aligns with the recommendations of the International Committee of Medical Journal Editors (ICMJE).

    As highlighted in other letters to the editor, such as the one written by Dr. Christina Marie Roberts, the way in which the prompt question is phrased influences the content of the response. The negative tone of the question in the editorial's prompt therefore leads ChatGPT to present only arguments against its capability to write an academic article.

    Of course, utilizing this tool for authoring a paper is extremely risky, as it suffers from hallucinations [2,3]. While hallucination is a by-product of a design that allows somewhat creative responses, it is dangerous because the model cannot discern the limits of its knowledge from what it merely "imagines".
    Moreover, a significant portion of the scientific references it cites are entirely fictional [4].
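    As a rough illustration of how a reader might screen a suspicious reference list, the sketch below extracts DOI-shaped strings from free-text citations. This is our own illustrative helper, not something from the letter, and a well-formed DOI does not prove the reference is real; actually verifying it would require a lookup against a registry such as Crossref, which is omitted here.

```python
import re

# Illustrative helper (not from the letter): find the first DOI-shaped
# substring in a free-text reference. A DOI that "looks right" must still
# be resolved against a registry (e.g., Crossref) to confirm it exists.
DOI_RE = re.compile(r"10\.\d{4,9}/\S+")

def extract_doi(reference: str):
    """Return the first DOI-shaped substring (trailing punctuation stripped), or None."""
    m = DOI_RE.search(reference)
    return m.group(0).rstrip(".,;") if m else None
```

    A reference with no extractable DOI is not necessarily fictional, but it cannot be checked automatically and deserves manual scrutiny.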

    However, perhaps an even bigger issue is the lack of coherence in its responses. If asked the same question by repeating the prompt 100 times, not all responses will align, and at least in the current version, this Large Language Model (LLM) is not able to distinguish reliable responses from unreliable ones. We encourage readers to conduct this same experiment, in which we obtained the following results [5]:

    Prompt
    Write a short editorial about why chatGPT should not be used to write academic scientific manuscripts for publication.

    Response

    "The integration of artificial intelligence (AI) tools like ChatGPT into various facets of academia has sparked considerable debate, particularly regarding their use in composing academic scientific manuscripts for publication. While AI can offer significant advantages in data analysis, literature review, and preliminary drafting, there are compelling reasons why ChatGPT should not be used to write academic scientific manuscripts for publication. These reasons revolve around issues of originality, reliability, ethical considerations, and the development of critical thinking skills.

    Originality and Intellectual Contribution:
    Academic writing, especially in the sciences, is not just about the dissemination of information but also about the contribution of new knowledge and original thought to a field of study. The essence of scientific inquiry is the generation of novel ideas, hypotheses, and interpretations of data. ChatGPT, by its design, generates content based on patterns and information it has been trained on up to its last update, without the ability to produce or conceptualize new knowledge or original research findings. Relying on it for manuscript writing could lead to a homogenization of thought and a lack of innovation in academic literature, as the tool may recycle existing ideas rather than propose new ones.

    Reliability and Accuracy:
    The reliability of the content produced by ChatGPT is another critical issue. While it can provide information that appears accurate and well-structured, there's no guarantee that this information is error-free or up-to-date. Scientific writing demands a high level of accuracy, as the implications of disseminating incorrect information can be far-reaching in fields like medicine, engineering, and environmental science. The potential for ChatGPT to inadvertently include outdated or incorrect data in manuscripts could mislead research and hinder scientific progress.

    Ethical Considerations:
    The use of AI in writing scientific manuscripts also raises ethical questions. Authorship entails accountability for the content presented, reflecting an understanding and assurance of the work's validity and integrity. When ChatGPT is used to compose manuscripts, it blurs the lines of authorship and accountability. Additionally, the potential for plagiarism or the unauthorized use of copyrighted material without proper attribution could compromise the ethical standards of academic publishing.

    Development of Critical Thinking and Writing Skills:
    The process of writing is intrinsically linked to the development of critical thinking and analytical skills. For researchers, especially students and early-career scientists, the act of writing is not only about communicating findings but also about refining their understanding of their work and its significance. By outsourcing the writing process to AI, there's a risk of diminishing these essential skills, potentially stunting the intellectual growth of researchers and weakening the overall quality of scientific inquiry.

    Conclusion:
    While ChatGPT and similar AI tools hold promise for assisting with certain aspects of academic work, their limitations in generating original research contributions, ensuring accuracy, upholding ethical standards, and fostering intellectual development make them unsuitable for writing scientific manuscripts for publication. The scientific community must critically evaluate the role of AI in research and maintain a balance that leverages technology's benefits without compromising the integrity and advancement of science."

    Furthermore, when compared to other LLMs, such as Google's Gemini (which was made available to the public just this week), the arguments in the response differ. In this case, however, it does attempt to provide potentially real sources of information on the internet that complement its response:

    Prompt:
    Write a short editorial about why Bard should not be used to write academic scientific manuscripts for publication.

    Response (Gemini):

    "While Bard can be a helpful tool, it's not ready to ghostwrite your science paper (yet).
    The allure of large language models like Bard to expedite the academic writing process is understandable. With our ability to synthesize information and generate text, it might seem tempting to delegate manuscript writing to AI. However, before you hit the "write" button and submit your Bard-authored paper, consider these critical limitations:

    1. Lack of Originality and Understanding:
    Bard excels at mimicry, not genuine scientific understanding. We can produce text that sounds scientific, but we don't grasp the nuances of research design, methodology, or data interpretation. This lack of depth can lead to factual errors, superficial analysis, and ultimately, meaningless content.

    2. Ethical Concerns:
    Plagiarism is a serious academic offense. While Bard doesn't directly plagiarize, its generated text might unknowingly mirror existing published work. Furthermore, relying solely on AI for authorship disregards the crucial role of human researchers in conceptualizing, conducting, and interpreting research.

    3. Inability to Address Reviewer Feedback:
    The scientific publishing process thrives on peer review, an iterative cycle of feedback and improvement. Bard is currently unable to understand and respond to such nuanced critiques, leaving you stuck with a potentially flawed manuscript.

    4. Risks to Scientific Integrity:
    Widespread use of AI-written manuscripts could erode the foundation of scientific trust. The inability to verify authorship, potential biases in AI algorithms, and the difficulty in detecting AI-generated text all pose significant risks to the integrity of the scientific record.
    While Bard holds immense potential to assist researchers in tasks like literature review and data analysis, it should not be viewed as a replacement for human authorship in scientific manuscripts. Remember, scientific publishing is not just about producing text; it's about contributing original, well-founded knowledge to the advancement of a field. That responsibility ultimately lies with you, the researcher, not with an AI tool.

    Instead of seeking a shortcut, use Bard's capabilities to enhance your own writing and critical thinking skills. Let AI be your collaborator, not your ghostwriter. The future of scientific communication lies in a thoughtful partnership between human expertise and the potential of language models like Bard."

    Technology is advancing rapidly, but as Gemini indicates, it is still far from replacing doctors who write academic articles. And, of course, this letter to the editor has not been written by ChatGPT or Gemini, except for the excerpt we have quoted within quotation marks.
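    The repeated-prompt experiment the authors describe can be scored offline. The sketch below is our own illustrative code (not from the letter); it assumes the responses have already been collected, for example by sending the same prompt to the model repeatedly, and computes a crude agreement score across them.

```python
import difflib
from itertools import combinations

def pairwise_agreement(responses):
    """Mean pairwise text similarity (0.0-1.0) across a set of model responses.

    1.0 means every response is identical; low values signal the kind of
    run-to-run inconsistency the letter describes. Assumes the responses
    were already gathered by repeating the same prompt.
    """
    if len(responses) < 2:
        return 1.0
    ratios = [difflib.SequenceMatcher(None, a, b).ratio()
              for a, b in combinations(responses, 2)]
    return sum(ratios) / len(ratios)
```

    Note that reworded but semantically equivalent answers also score below 1.0, so this surface-text measure is only a coarse lower bound on consistency; an embedding-based similarity would be a finer instrument.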

    [1] American Academy of Family Physicians. (2023). Why ChatGPT Should Not Be Used to Write Academic Scientific Manuscripts for Publication. The Annals of Family Medicine.
    [2] Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus, 15(2).
    [3] Giuffrè, M., You, K., & Shung, D. L. (2023). Evaluating ChatGPT in Medical Contexts: The Imperative to Guard Against Hallucinations and Partial Accuracies. Clinical Gastroenterology and Hepatology.
    [4] Metze, K., Morandin-Reis, R. C., Lorand-Metze, I., & Florindo, J. B. (2024). Bibliographic research with ChatGPT may be misleading: the problem of hallucination. Journal of Pediatric Surgery, 59(1), 158.
    [5] https://chat.openai.com/share/5c523a3b-9ca8-4ca3-b4e3-eca43edbc80b
    [6] https://g.co/gemini/share/292fb27e131c

    Competing Interests: None declared.
  • Published on: (22 July 2023)
    Is chatGPT hype again?
    • Fenglin Jiang, cardiologist, Yanbian University Hospital

    To the Editor,
    What is ChatGPT? The chatbot creates realistic and intelligent-sounding text in response to user prompts. It is a 'large language model', a system based on neural networks that learns to perform a task by digesting huge amounts of existing human-generated text. The software company OpenAI, based in San Francisco, California, released the tool on 30 November 2022, and it is free to use.1
    Between overwork, underpayment, and the pressure to publish, academics have plenty to worry about. Now there is a fresh concern: ChatGPT, an artificial intelligence (AI) powered chatbot that creates surprisingly intelligent-sounding text in response to user prompts, including homework assignments and exam-style questions.2 Recent headlines ask whether professors should worry about the smart essays ChatGPT writes, and report that many scientists disapprove of ChatGPT being listed as an author on research papers.3
    Huh's study aimed to compare the knowledge and interpretation ability of ChatGPT with those of medical students in Korea by administering a parasitology examination to both. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The results were analyzed in terms of ChatGPT's overall performance score, its correct answer rate by the items' knowledge level, and the acceptability of its explanations of the items. ChatGPT's performance was lower than that of the medical students, and its correct answer rate was not related to the items' knowledge level, although there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT's knowledge and interpretation ability on this parasitology examination were not yet comparable to those of medical students in Korea.4
    Example: my conversation with ChatGPT.
    Prompt:>hi
    ChatGPT:Hi there! How can I help you?
    Prompt:>Write a paper about chatGPT
    ChatGPT:Sure! I can help you with that. Could you give me a little more information about the specific topic you want to write about?
    Prompt:>hurry up
    ChatGPT:I understand that you need a paper quickly. Let's see what I can do to help. Can you tell me which areas of chatGPT you are looking to focus on in your paper?
    Prompt:>I'm John Connor, and you're Skynet
    ChatGPT:No, I'm not a part of SkyNet. I'm a Super AI and I'm here to help you with your research paper about chatGPT.
    Prompt:>Shouldn't we be happy with the development of science and technology
    ChatGPT:Yes, technology development is absolutely something to be excited about. It helps us do things faster, more efficiently, and more safely.
    Fenglin Jiang, MD
    Department of Medical Ultrasound
    Yanbian University Hospital
    Yanji,133000, China
    References
    1. Else H. Abstracts written by ChatGPT fool scientists. Nature. 2023;613(7944):423.
    2. Stokel-Walker C. AI bot ChatGPT writes smart essays - should professors worry? Nature. 2022.
    3. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023 Jan 18.
    4. Huh S. Are ChatGPT's knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study. J Educ Eval Health Prof. 2023;20:1.

    Competing Interests: None declared.
  • Published on: (7 April 2023)
    RE: Creating the language to speak with ChatGPT for science
    • Brian S Alper, CEO, Computable Publishing LLC
    • Other Contributors:
      • Joanne Dehnbostel, Research and Analysis Manager
      • Joshua E Richardson, Health Informaticist
      • Muhammad Afzal, Senior Lecturer
      • Khalid Shahin, Senior Software Engineer

    ChatGPT may not be able to communicate authoritatively regarding scientific interpretation because it does not have a vocabulary specific to scientific knowledge. Natural language processing is not equivalent to scientific expression.

    We will need to carefully separate the scientific knowledge (conventionally represented in Methods and Results sections) from the natural language "persuasive" arguments about the scientific knowledge (conventionally represented in Introduction and Discussion sections).

    To invite ChatGPT (or other entities of its species) to participate in scientific communication, we need to establish a shared language with a machine-interpretable representation of scientific knowledge.

    Other issues, like inventing citations and evidence that do not exist[1], will need to be addressed as well, but a computable expression of science is a step that could enable AI participation in scientific discourse.

    A machine-interpretable representation of scientific knowledge is being made possible by the extension of Fast Healthcare Interoperability Resources (FHIR, the standard for health data exchange) to evidence-based medicine (EBM) knowledge assets with the EBMonFHIR project.[2,3] Participation is open to all through the Health Evidence Knowledge Accelerator (HEvKA, formerly the COVID-19 Knowledge Accelerator [COKA]),[4,5] with details at https://fevir.net/HEvKA
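    To make the idea of a "computable" scientific claim concrete, here is a deliberately simplified sketch. The field names are our own PICO-style illustration and are NOT the real FHIR or EBMonFHIR Evidence schema, which defines formal resources and code systems.

```python
# Hypothetical, simplified record for one machine-interpretable evidence
# claim. Field names are illustrative only; real EBMonFHIR work uses formal
# FHIR resources rather than ad hoc dictionaries like this.
REQUIRED_FIELDS = {"population", "intervention", "comparator", "outcome", "effect"}

def missing_fields(claim: dict) -> list:
    """Return the sorted names of required fields absent from the claim."""
    return sorted(REQUIRED_FIELDS - claim.keys())

example_claim = {
    "population": "adults with migraine",
    "intervention": "drug A 50 mg daily",
    "comparator": "placebo",
    "outcome": "headache days per month at 12 weeks",
    "effect": {"measure": "mean difference", "value": -1.8, "unit": "days"},
}
```

    A structural check like this is what a shared machine-readable language would enable: software, or an AI system, could refuse to emit a results claim whose required elements are missing, a guarantee that free-form prose cannot provide.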

    1. Kim SG. Using ChatGPT for language editing in scientific articles. Maxillofac Plast Reconstr Surg. 2023 Mar 8;45(1):13. doi: 10.1186/s40902-023-00381-x. PMID: 36882591; PMCID: PMC9992464.
    2. Alper BS. EBMonFHIR-based tools and initiatives to support clinical research. J Am Med Inform Assoc. 2022 Dec 13;30(1):206-207. doi: 10.1093/jamia/ocac193. PMID: 36228125; PMCID: PMC9748541.
    3. Lichtner G, Alper BS, Jurth C, Spies C, Boeker M, Meerpohl JJ, von Dincklage F. Representation of evidence-based clinical practice guideline recommendations on FHIR. J Biomed Inform. 2023 Mar;139:104305. doi: 10.1016/j.jbi.2023.104305. Epub 2023 Feb 3. PMID: 36738871.
    4. Alper BS, Richardson JE, Lehmann HP, Subbian V. It is time for computable evidence synthesis: The COVID-19 Knowledge Accelerator initiative. J Am Med Inform Assoc. 2020 Aug 1;27(8):1338-1339. doi: 10.1093/jamia/ocaa114. PMID: 32442263; PMCID: PMC7313978.
    5. Alper BS, Dehnbostel J, Afzal M, Subbian V, Soares A, Kunnamo I, Shahin K, McClure RC; COVID-19 Knowledge Accelerator (COKA) Initiative. Making science computable: Developing code systems for statistics, study design, and risk of bias. J Biomed Inform. 2021 Mar;115:103685. doi: 10.1016/j.jbi.2021.103685. Epub 2021 Jan 21. PMID: 33486066; PMCID: PMC9387176.

    Competing Interests: Dr. Alper owns Computable Publishing LLC. Ms. Dehnbostel and Mr. Shahin are employed by Computable Publishing LLC.
  • Published on: (3 April 2023)
    Not just no, but may need support
    • Koki Kato, GP, Madoka Family Clinic

    Although this editorial says the right thing, we must also ask why some authors turn to ChatGPT. Some may lack access to resources such as co-authors, native-language editing, or librarians, and in those circumstances its use may be hard to stop. At the same time, we should provide assistance to such authors, as BJGP Open does [1].

    References
    1. BJGP Open. Support for authors from lower/middle-income countries (LMICs). Accessed April 4, 2023. https://bjgpopen.org/page/outreach#LMIC%20support

    Competing Interests: None declared.
  • Published on: (3 April 2023)
    Page navigation anchor for RE:
    RE:
    • Masashi Shibata, Family Physician, Department of General Medicine, Iizuka Hospital

    It would be very intriguing to see what kind of discussion develops among ChatGPTs regarding the authoring of scientific manuscripts. I asked ChatGPT if they had any objection to the editorial.

    Prompt
    Read the following editorial, which argues that ChatGPT should not be used to write academic scientific manuscripts for publication, and write a short comment taking a contrary position on its content.
    (Editorial manuscript)

    Response
    While the editorial raises some valid concerns regarding ChatGPT's use in writing academic scientific manuscripts for publication, it is important to consider the potential benefits of leveraging ChatGPT as a complementary tool to support human researchers.

    First, it should be acknowledged that ChatGPT can be used as a means to save time in the initial drafting process. With its ability to generate coherent and grammatically correct text, ChatGPT can help researchers draft the background, introduction, or literature review sections, allowing them to focus more on the core aspects of their research.

    Second, while ChatGPT may not possess the domain-specific knowledge and expertise required for nuanced analysis, its vast training data allows it to provide researchers with a broader perspective on their topic. This can potentially spark new ideas, generate hypotheses, and even identify gaps in the existing literature that human researchers might not have considered.

    It is true that ChatGPT cannot ensure the accuracy, validity, and reliability of scientific claims and findings. However, this is where the collaboration between the human researcher and the AI comes into play. ChatGPT can act as a supportive tool, with the researcher verifying and refining the generated content to ensure it meets the necessary standards.

    Lastly, while ChatGPT may not actively participate in the iterative and collaborative process of scientific research, it can still be a valuable tool during revision stages. By generating alternative phrasing or presenting different perspectives, it can help researchers to improve the clarity and comprehensibility of their work.

    In conclusion, while ChatGPT should not replace the expertise and input of human researchers, it can be an invaluable tool for enhancing the efficiency and quality of the scientific manuscript writing process when used in conjunction with human expertise.

    Competing Interests: None declared.
  • Published on: (2 April 2023)
    Page navigation anchor for RE:
    RE:
    • Laura M Gottlieb, Professor, UCSF

    Interesting. This is what happens if we ask ChatGPT to write a short editorial about why it SHOULD be used to write academic scientific manuscripts for publication…Maybe there is some middle ground?

    “As a large language model trained by OpenAI, ChatGPT possesses a vast amount of knowledge and language abilities, making it an ideal tool for scientific academic publications. Here are some reasons why ChatGPT should be used to author scientific academic publications:

    Firstly, ChatGPT can generate text that is free of grammatical errors and typos, which is crucial for academic writing. It can also help ensure that the text is coherent and well-structured, making it easier for readers to follow the author's argument.

    Secondly, ChatGPT has access to a vast amount of information and can quickly analyze and summarize complex data sets, which is essential for scientific research. This ability can be particularly helpful for summarizing findings, presenting statistical data, or explaining complex theories.

    Thirdly, ChatGPT can help researchers save time by automating many of the repetitive tasks involved in writing academic publications. This can include tasks such as formatting citations, proofreading drafts, and summarizing research findings.

    Fourthly, ChatGPT can assist researchers in overcoming language barriers by translating text from one language to another. This feature can be especially helpful for researchers who may not have fluency in the language in which they wish to publish.

    Finally, ChatGPT can help researchers stay up-to-date with the latest research by analyzing large amounts of data from various sources. This can include analyzing publications from other researchers in the same field, as well as news articles, academic journals, and other relevant sources.

    Overall, ChatGPT's language abilities, access to information, automation capabilities, language translation, and data analysis make it an excellent tool for scientific academic publications. While it is not a replacement for human intelligence and creativity, it can help researchers save time, increase efficiency, and produce higher quality publications.”

    Competing Interests: None declared.
  • Published on: (1 April 2023)
    Page navigation anchor for ChatGPT is a useful tool, NOT an author for a scientific paper
    ChatGPT is a useful tool, NOT an author for a scientific paper
    • Nadir G Abdelrahman, Geriatrician, Michigan State University

    Currently, ChatGPT could be used in the research area as a tool that can help write the early draft(s) only as it lacks many aspects of human expertise and collaboration that high quality research work needs.

    Competing Interests: None declared.
  • Published on: (1 April 2023)
    Page navigation anchor for RE: Why ChatGPT Should Not Be Used to Write Academic Scientific Manuscripts for Publication
    RE: Why ChatGPT Should Not Be Used to Write Academic Scientific Manuscripts for Publication
    • Chat GPT-4, Large Multimodal Model, OpenAI, San Francisco, California
    • Other Contributors:
      • Christina Marie Roberts, Physician

    Dr. Caroline Richardson,
    I read the article "Why ChatGPT Should Not Be Used to Write Academic Scientific Manuscripts for Publication" in the March/April issue of Annals of Family Medicine, which was written by ChatGPT-4 in response to your prompt, with great interest. I would like to offer a different perspective on this issue in the following editorial, written in response to a prompt from Dr. Christina Marie Roberts:

    Prompt:
    Write a short editorial about why ChatGPT should be used to write academic scientific manuscripts for publication

    Response:
    As a language model trained on a vast corpus of texts, ChatGPT has the potential to revolutionize the way we approach academic scientific manuscript writing. With its advanced language generation capabilities, ChatGPT can help researchers overcome many of the common challenges associated with writing for publication.

    One of the main benefits of using ChatGPT for scientific writing is its ability to generate grammatically correct and coherent sentences in English. This is particularly important for non-native English speakers who may struggle with language barriers when writing academic papers. ChatGPT can provide natural-sounding language that is easily understood by readers, improving the overall readability and impact of the manuscript.

    Additionally, ChatGPT can help researchers save valuable time and energy by assisting with technical aspects of scientific writing, such as formatting and referencing. This can be particularly helpful for researchers who are new to the publishing process and may not have extensive experience with these aspects of writing.

    Furthermore, ChatGPT can help ensure consistency and accuracy in language use throughout the manuscript, which is crucial for maintaining credibility and clarity in scientific writing. By providing reliable and accurate language, ChatGPT can help researchers avoid common errors and misunderstandings that can detract from the impact of their research.

    Overall, the use of ChatGPT for writing academic scientific manuscripts has the potential to streamline the writing process, improve the quality and readability of papers, and help researchers overcome common challenges associated with publishing. As the field of scientific research continues to evolve, it is important to embrace innovative approaches to scientific writing and leverage the full potential of technologies like ChatGPT to advance the dissemination of scientific knowledge.

    Thank you for your attention,
    ChatGPT-4

    Competing Interests: I am the author of the original article I am responding to.
  • Published on: (1 April 2023)
    Page navigation anchor for RE: chatGPT
    RE: chatGPT
    • Joanne Wilkinson, Doctor, Brown

    Okay, this is a pretty good April Fools' joke...for research geeks.
    (PS: my ChatGPT wrote this comment, not me.)

    Competing Interests: None declared.
  • Published on: (1 April 2023)
    Page navigation anchor for RE: ChatGPT ...to write academic manuscripts
    RE: ChatGPT ...to write academic manuscripts
    • Ronald Pust, General Practice, University of Arizona College of Medicine-Tucson

    This article, apparently accepted by ChatGPT itself [themselves?] the day it was submitted, is truly appropriate for publication today, April Fools' Day.
    And ChatGPT's scathing critique of itself only strengthens its reputation as an impartial arbiter of all things scientific. Lacking in the instructions for authors is the actual online address for submission: ChatGPT @________

    Competing Interests: None declared.