Why LLMs Should Not Generate Ideas for Academic Manuscripts: The Irreplaceable Role of Lived Experience and Intersectionality

Ezra N. S. Lockhart, Systemic Psychotherapist, Easy Does It Counseling, p.c.
17 November 2024

The response generated by ChatGPT acknowledges certain limitations in generating academic manuscripts, but it fails to address a critical issue: the lack of lived experience and the inability to incorporate intersectional perspectives into the research process.

LLMs, by design, process and produce text based on patterns and probabilities derived from vast datasets, but they do not have embodied, lived experiences. This is particularly problematic in fields that require deep engagement with the complexities of human identity, lived realities, and social contexts—areas where understanding the intersections of race, gender, class, sexuality, and other social categories is essential. In such fields, knowledge is not just theoretical; it is informed by the personal, social, and historical contexts that shape individuals' experiences and interactions with systems of power.

In academic research, particularly in areas such as social sciences, health disparities, and justice studies, scholars must grapple with how different axes of identity intersect and affect lived experiences. These intersections are not static or easily reducible to data points; they shape how individuals experience privilege, oppression, and marginalization in ways that are nuanced and context-specific. An LLM, however, cannot replicate or understand these intersections because it lacks any capacity for subjective experience or awareness of the sociopolitical dynamics that shape these categories. Without a deep understanding of how identity and systemic power function in particular settings, LLM-generated ideas could easily reproduce reductive or inaccurate representations of complex, lived realities.

Moreover, academic work often involves iterative processes of revision and critique, particularly when challenging normative assumptions or engaging with marginalized voices. In feminist, queer, and decolonial methodologies, for instance, scholars frequently revise their approaches in response to ethical considerations, new perspectives, or evolving understandings of power dynamics within the research. LLMs are not equipped to engage in these processes of critical reflection. They do not have a sense of moral responsibility, nor can they navigate the ethical considerations that arise when conducting research involving vulnerable populations. This absence of reflexivity means that an LLM's contribution to academic writing would lack the ethical depth required for producing meaningful, socially responsible scholarship.

Lastly, the model's response mentions that ChatGPT cannot ensure the accuracy or validity of scientific claims, which is certainly true. However, this limitation is not simply technical; it is deeply tied to the model’s lack of capacity to critically engage with how knowledge production is shaped by human subjectivity, positionality, and ethical responsibility. In academic fields that deal with questions of power, identity, and inequality, human researchers must bring their lived experiences, ethical considerations, and engagement with diverse perspectives into the research process—something that a language model, devoid of agency or experience, is fundamentally incapable of doing.

In conclusion, while ChatGPT can provide some assistance with basic drafting or summarizing existing knowledge, relying on it to generate ideas for academic manuscripts is problematic. The rich, contextualized understanding required for high-quality, ethically sound academic work, particularly in fields that examine human experience through the lens of intersectionality, cannot be replicated by a language model. Engaging with the complexities of identity and power requires not just technical writing skills but an awareness of the social, political, and historical forces that shape both the research process and the experiences of the individuals being studied. These are precisely the kinds of insights that LLMs are unable to provide.

Competing Interests: None declared.
