To the Editor,
I am writing in response to the editorial "Use of AI in Family Medicine Publications: A Joint Editorial From Journal Editors" (Schrager et al., January 2025), which provides an insightful overview of the expanding role of artificial intelligence (AI) and large language models (LLMs) in academic publishing. The editorial aptly highlights the potential of these technologies to enhance efficiency and streamline processes such as manuscript screening, literature review, and peer review. While I share an appreciation for the promising applications of AI in these areas, I wish to address the Future Directions section, which, although forward-thinking, raises significant concerns when viewed through a lens grounded in systems thinking, decolonial ethics, and cultural humility.
From my perspective as a licensed marriage and family therapist with an interdisciplinary approach that integrates humanistic psychology, systems theory, and eco-bio-psycho-social frameworks, I approach AI’s integration into publishing with caution. AI, as it currently stands, remains a product of the global technocratic biopolitical economy—one which commodifies life and personal experience. As such, its applications within family medicine publishing must be carefully scrutinized, ensuring that these tools do not reinforce existing power structures, exacerbate inequality, or obscure the inherently humanistic nature of our work.
While AI can undoubtedly facilitate the organizational demands placed upon family medicine professionals—especially in terms of efficiency—there is a risk that over-reliance on AI in the publishing process could replicate and perpetuate biases inherent in its design. As noted in the editorial, AI’s ability to fabricate information or produce false citations represents a substantial ethical dilemma. This is not merely a technical concern, but one that intersects with larger questions of accountability, authorship, and the commodification of knowledge. AI models, especially those trained on flawed or biased data, risk reinforcing harmful stereotypes and eroding trust within the academic ecosystem—particularly for marginalized communities, including LGBTQIA+ and neurodiverse individuals, who already face significant barriers to participation in research and publication.
Furthermore, the editorial emphasizes the need for transparency in disclosing AI involvement in the research and writing process, which is a critical step toward addressing these concerns. However, I would also argue that merely disclosing AI usage does not go far enough. There must be a concerted effort to establish and enforce ethical guidelines that address the how and why behind AI’s integration into scholarly work. This must be done with an acute awareness of the underlying systems that govern knowledge production, as well as a commitment to cultural humility and intersectionality. AI cannot be a neutral tool—it reflects the biases and assumptions of its creators, and unless we actively address this, we risk perpetuating the very inequities we aim to dismantle.
With this in mind, I suggest the following contingencies for ethically steering AI use in family medicine publishing:
1. Rigorous Oversight and Verification: AI-generated content, including literature reviews and citations, must be subjected to thorough human oversight. Authors and editors must verify the accuracy and reliability of all references, particularly those produced by AI, and work to prevent unintentional bias or misinformation.
2. Ethical Frameworks and Transparency: Journals should develop and adopt comprehensive ethical frameworks to guide AI's use in research and publishing. These frameworks should include explicit guidelines for disclosure, ethical considerations regarding the use of AI, and a clear stance on the responsibilities of authors and editors to ensure that AI tools are used in ways that respect human dignity and agency.
3. Decolonial and Anti-Bias Measures: Given that AI models often reflect and amplify systemic biases, it is essential to incorporate decolonial and anti-bias measures into the development and deployment of these tools. This includes creating mechanisms to detect and mitigate the perpetuation of biased content, particularly in areas where marginalized populations are at risk of being misrepresented or excluded.
4. Emphasis on Humanism and Relationship-Building: Family medicine, at its core, is about human connection and the recognition of the complex, multidimensional nature of health and well-being. AI should be leveraged not to replace this relationship-building but to enhance it. It must be used in ways that preserve and deepen the humanistic aspects of care, prioritizing empathy, cultural sensitivity, and nuanced understanding.
In conclusion, while I agree with the editorial’s call for further exploration of AI in the academic publishing process, I urge a more critical approach. The integration of AI must be done with a commitment to ensuring that these tools are used not only to increase efficiency but to serve the broader goal of equitable, culturally attuned, and human-centered scholarship. As we advance in this work, we must remain vigilant in addressing the potential harms of AI, ensuring that its role in publishing enhances, rather than diminishes, the integrity and ethical foundations of our field.