Published eLetters
If you would like to comment on this article, click on Submit a Response to This Article, below. We welcome your input.
The Promise of AI: Enhancing Care, Equity, and Physician-Patient Interaction
José E. Rodríguez and Yves Lussier’s article, “The AI Moonshot: What We Need and What We Do Not,” presents an informed perspective on the potential benefits of artificial intelligence (AI) within family medicine, while also highlighting the pitfalls of overcomplicating or misdirecting AI development. In responding to their work, I draw on a broader framework that integrates humanistic psychology, systems thinking, and decolonial ethics. These frameworks allow for a more nuanced understanding of how AI can be deployed meaningfully in healthcare, ensuring that it fosters relationships, respects cultural contexts, and addresses systemic inequities rather than reinforcing them.
1. AI as a Tool for Reducing Administrative Burden and Enhancing Efficiency
AI has significant potential to streamline the administrative aspects of healthcare, particularly through the optimization of electronic health records (EHRs) and related systems. As the authors highlight, current EHR systems are fragmented and redundant, imposing cognitive and procedural burdens on physicians. This inefficiency distracts healthcare providers from their core role: engaging meaningfully with patients.
- Reducing Redundancy: AI can reduce the time physicians spend navigating these fragmented records by organizing and consolidating data. It could also automate tasks such as note-taking, cutting documentation time and enabling physicians to engage more fully with their patients.
- Facili...
Competing Interests: None declared.
The Hidden Risk of AI Hallucinations in Medical Practice
Dear Editor,
We write this letter as a Spanish family physician and an independent researcher, both deeply engaged in the ongoing debates surrounding artificial intelligence (AI) in medical practice (1,2). Our shared perspective arises from years of clinical work in primary care settings, coupled with a commitment to exploring how technological innovations can best serve both patients and professionals. Recent advances in Large Language Models (LLMs), such as ChatGPT or Gemini, have ignited our enthusiasm for the transformation they might bring to healthcare, while at the same time raising serious questions about the reliability and potential pitfalls of such systems (3,4), especially as they become integrated into sensitive tasks like medical diagnosis and treatment planning.
An essential point of concern that has recently garnered much attention is the phenomenon commonly termed “hallucinations.” These hallucinations occur when an LLM generates output that appears plausible but is factually incorrect or entirely fabricated. In the legal sphere, the starkest examples have been instances in which judges have confronted fictional citations or misapplied case law, all traced back to LLM-generated documents (5). However, the medical domain faces a different and potentially graver danger. A lawyer citing a fabricated case usually triggers an alarm that can be checked in legal databases (6). But for a busy family physician working in a demanding clinic, a subtle m...
Competing Interests: None declared.