Abstract
Context: Quality of care improves with a current and accurate problem and medication list, but in modern electronic health records (EHRs) these lists often become lengthy and out of date, impairing their utility. Maintaining these lists is often a manual task that is time consuming and contributes to cognitive overload, increasing the risk of burnout.
Objective: To investigate the scope and severity of efficiency issues with problem and medication lists within the AllianceChicago (AC) network of community health centers (CHCs).
Design and Analysis: Mixed-method study with EHR data analysis and survey components.
Setting or Dataset: 368,016 patients at 1,246,645 encounters from 24 participating AC CHCs, serving historically underserved patient populations including low-income, uninsured, and publicly insured individuals.
Population Studied: A diverse group of patients 18 years or older with more than one visit to a health center.
Intervention/Instrument: EHR data pull and survey with qualitative and quantitative components.
Outcome Measures: Length, duration on list, and duplication of problems and medications, and the correlation between review attestation and list length. Survey outcomes included frequency of review and perceived burden of review activities.
Results: 66,415 patients had problem lists with over 20 diagnoses, and 5,491 patients had over 40. Duplication was common: 83,090 patients (23%) had at least one duplicate diagnosis. The length of a problem or medication list did not correlate with attestation that the list had been reviewed. Problems and medications remained on lists for long periods. Surveys revealed no clear consensus on what "reviewing problem/medication lists" means, and that thorough review of a list requires a high level of mental effort.
Conclusions: Findings revealed that problem lists are extremely long and contain duplication, and that attestation of reviewing problem and medication lists may not always produce the outcomes intended by the regulatory bodies that created the performance measures embedded in EHR design. The task the metric was originally designed to address is complex, vaguely defined, and not a straightforward process. These findings have important implications for those involved in clinical decision support (CDS) and could be used to improve CDS tools. We believe that review processes and attestation metrics warrant closer examination and further study.
© 2023 Annals of Family Medicine, Inc.