The editorial by Harris and colleagues underscores the cautious optimism surrounding digital health technologies in primary care. As the authors point out, even well-intentioned tools may fail without appropriate implementation, stakeholder engagement, and a nuanced understanding of real-world clinical workflows. Yet the discourse must now move beyond whether these tools are ready for prime time to how we ensure they are safe, trusted, and genuinely patient-centered.
The qualitative study by Moschogianis et al. (2025) deepens this conversation by spotlighting patient and staff perceptions of AI in primary care eVisits. While participants recognized potential efficiencies, they also articulated fears: depersonalization, perceived diagnostic inaccuracies, and fundamental misconceptions about AI capabilities. These concerns are far from theoretical.
For example, Lampert et al. (2024) evaluated an FDA-cleared AI model for hypertrophic cardiomyopathy and found that only 10.9% of flagged cases were confirmed, exposing clinicians to a deluge of false positives. Such miscalibration not only risks alarm fatigue (Barry, 2025) but also undermines the very trust that digital tools require to be effective. When technology fails to deliver meaningful signal over noise, the consequences extend far beyond inefficiency: they imperil patient safety and squander clinician attention.
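The arithmetic behind such low confirmation rates is worth making explicit: when a condition is rare, even a fairly specific model produces mostly false positives. The minimal sketch below computes positive predictive value (PPV) via Bayes' theorem; the sensitivity, specificity, and prevalence figures are illustrative assumptions, not values reported by Lampert et al.

```python
# Illustrative only: why screening for a rare condition yields mostly
# false positives. Sensitivity, specificity, and prevalence below are
# assumed values for demonstration, not figures from Lampert et al. (2024).

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A model with 90% sensitivity and 96% specificity, screening for a
# condition affecting 0.5% of patients:
print(f"PPV: {ppv(0.90, 0.96, 0.005):.1%}")  # ~10.2%: most flags are false alarms
```

Under these assumptions, roughly nine of every ten flags are false alarms, which is precisely the deluge described above; no amount of clinician diligence can compensate for a tool deployed without attention to prevalence.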
Moreover, Moschogianis et al. observed widespread public misconceptions about what AI can and cannot do. Here, transparency becomes both essential and fraught. As Goldman (2025) notes, open disclosure of AI safety test results, such as Anthropic’s controlled experiments involving controversial behaviors, can stoke fear if misinterpreted. Yet institutions like Stanford’s AI Institute argue that such transparency is foundational to responsible innovation. The tension is not transparency versus secrecy, but how to contextualize disclosures responsibly.
In this evolving landscape, clinicians must no longer see themselves as passive users of AI. As Desapriya (2025) suggests, a new kind of "AI literacy" is essential: one that equips clinicians to interpret model calibration, understand false-positive/false-negative tradeoffs, and remain steadfast stewards of clinical judgment.
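To make "interpreting model calibration" concrete: a calibrated model's predicted risks should match observed outcome rates. The sketch below checks this with scikit-learn's calibration_curve on synthetic data standing in for real model output; both the library choice and the simulated numbers are assumptions for illustration, not part of any cited study.

```python
# Minimal sketch of a calibration (reliability) check: do predicted risks
# match observed outcome rates? Synthetic data stands in for real model
# outputs (an assumption for illustration).

import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
predicted_risk = rng.uniform(0, 1, 5000)   # model's predicted probabilities
# Simulate an overconfident model: true event rates run at half the
# predicted risk.
outcomes = rng.random(5000) < predicted_risk * 0.5

observed, predicted = calibration_curve(outcomes, predicted_risk, n_bins=10)
for p, o in zip(predicted, observed):
    print(f"predicted {p:.2f} -> observed {o:.2f}")
# A well-calibrated model prints matching columns; here the observed
# rates sit near half the predicted ones, the signature of miscalibration.
```

A report of this kind, comparing predicted versus observed rates by risk band, tells a clinician more about a model's trustworthiness than a single headline accuracy figure.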
We propose a framework of Calibrated Trust Triangulation to guide AI integration in primary care:
• Patients must be given clear, accessible explanations of AI’s role and its limitations, correcting myths and fostering informed consent.
• Clinicians must maintain their epistemic authority, using calibrated clinical reasoning, not automation bias, as the lens through which AI outputs are filtered.
• Developers and regulators must prioritize rigorous calibration, independent validation, and transparent disclosures that inform without sensationalizing.
Harris et al. caution that technology in medicine can sometimes resemble “a solution in search of a problem.” But even when a real problem exists, if the proposed solution is inadequately explained, insufficiently calibrated, or poorly contextualized, it risks doing more harm than good.
The future of primary care AI must be one where efficiency never eclipses empathy, compassion, and human touch, and where digital augmentation respects the irreplaceable human core of care.
References:
Harris B, Kochendorfer K, Hasnain M, Jimbo M. Information technology in primary care screenings: ready for prime time? Ann Fam Med. 2025 May;23(3):179-80. doi:10.1370/afm.250198.
Lampert J, de Marvao A, Su J, et al. Calibration of ECG-based deep-learning algorithm scores for patients flagged as high risk for hypertrophic cardiomyopathy. NEJM AI. 2024.
Barry J. Commentary on Lampert et al. NEJM AI LinkedIn Discussion [Internet]. 2025 [cited 2025 May 28]. Available from: https://www.linkedin.com/feed/update/urn:li:activity:7332764953419915265
Goldman S. When an AI model misbehaves, the public deserves to know and to understand what it means. Fortune. 2025 May. Available from: https://www.msn.com/en-us/news/technology/when-an-ai-model-misbehaves-th...
Desapriya E. Reflections on AI calibration in clinical settings. NEJM AI LinkedIn Discussion [Internet]. 2025 [cited 2025 May 28]. Available from: https://www.linkedin.com/feed/update/urn:li:activity:7332764953419915265
Moschogianis S, Darley S, Coulson T, Peek N, Cheraghi-Sohi S, Brown BC. Seven opportunities for artificial intelligence in primary care electronic visits: qualitative study of staff and patient views. Ann Fam Med. 2025.