The response generated by ChatGPT acknowledges certain limitations in generating academic manuscripts, but it fails to address a critical issue: the model's lack of lived experience and its consequent inability to incorporate intersectional perspectives into the research process.
LLMs, by design, process and produce text based on patterns and probabilities derived from vast datasets, but they do not have embodied, lived experiences. This is particularly problematic in fields that require deep engagement with the complexities of human identity, lived realities, and social contexts—areas where understanding the intersections of race, gender, class, sexuality, and other social categories is essential. In such fields, knowledge is not just theoretical; it is informed by the personal, social, and historical contexts that shape individuals' experiences and interactions with systems of power.
In academic research, particularly in areas such as social sciences, health disparities, and justice studies, scholars must grapple with how different axes of identity intersect and affect lived experiences. These intersections are not static or easily reducible to data points; they shape how individuals experience privilege, oppression, and marginalization in ways that are nuanced and context-specific. An LLM, however, cannot replicate or understand these intersections because it lacks any capacity for subjective experience or awareness of the sociopolitical dynamics that shape these categories. Without a deep understanding of how identity and systemic power function in particular settings, LLM-generated ideas could easily reproduce reductive or inaccurate representations of complex, lived realities.
Moreover, academic work often involves iterative processes of revision and critique, particularly when challenging normative assumptions or engaging with marginalized voices. In feminist, queer, and decolonial methodologies, for instance, scholars frequently revise their approaches in response to ethical considerations, new perspectives, or evolving understandings of power dynamics within the research. LLMs are not equipped to engage in these processes of critical reflection. They do not have a sense of moral responsibility, nor can they navigate the ethical considerations that arise when conducting research involving vulnerable populations. This absence of reflexivity means that an LLM's contribution to academic writing would lack the ethical depth required for producing meaningful, socially responsible scholarship.
Lastly, the model's response mentions that ChatGPT cannot ensure the accuracy or validity of scientific claims, which is certainly true. However, this limitation is not simply technical; it is deeply tied to the model's lack of capacity to critically engage with how knowledge production is shaped by human subjectivity, positionality, and ethical responsibility. In academic fields that deal with questions of power, identity, and inequality, human researchers must bring their lived experiences, ethical considerations, and engagement with diverse perspectives into the research process—something that a language model, devoid of agency or experience, is fundamentally incapable of doing.
In conclusion, while ChatGPT can provide some assistance with basic drafting or summarizing existing knowledge, relying on it to generate ideas for academic manuscripts is problematic. The rich, contextualized understanding required for high-quality, ethically sound academic work, particularly in fields that examine human experience through the lens of intersectionality, cannot be replicated by a language model. Engaging with the complexities of identity and power requires not just technical writing skills but an awareness of the social, political, and historical forces that shape both the research process and the experiences of the individuals being studied. These are precisely the kinds of insights that LLMs are unable to provide.