
  1. Urmimala Sarkar, professor1,2
  2. Lipika Samal, assistant professor3,4

  1. Division of General Internal Medicine, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
  2. Center for Vulnerable Populations, University of California San Francisco, San Francisco, CA, USA
  3. Division of General Internal Medicine and Primary Care, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
  4. Department of Medicine, Harvard Medical School, Boston, MA, USA

  Correspondence to: U Sarkar Urmimala.Sarkar@ucsf.edu

Current performance is disappointing; we should change direction

The linked meta-analysis (doi:10.1136/bmj.m3216) by Kwan and colleagues of 122 trials of clinical decision support systems embedded in electronic health records shows modest improvements in care processes overall, with widely varying effects among trials.1 The authors found no significant improvement in clinical outcomes in the subset of 30 trials that reported them.

The strengths of this meta-analysis include the large number of studies across a variety of clinical conditions and health systems. The authors reported absolute differences in care processes between groups (with versus without clinical decision support systems) rather than reporting the odds of receiving the recommended care, which underscores the small to moderate benefits of clinical decision support systems compared with usual care for selected care processes.

The authors also found extreme heterogeneity among studies that could not be attributed to any of the factors known to differ across interventions. The decision to include only randomized and quasi-randomized trials might have excluded studies of innovations in clinical decision support systems when randomization was either not acceptable to stakeholders or not feasible. Including rigorous non-randomized study designs would have given a more comprehensive overview of the clinical decision support system studies being conducted today.

The disappointing performance of decision support systems embedded within electronic health records suggests it is time to change our approach. This lack of efficacy likely reflects the challenges of developing innovative, safe, and effective clinical decision support systems within commercial electronic health record platforms. First, well documented problems with usability23 and widespread dissatisfaction among clinicians using4 electronic health records might be barriers to effective clinical decision support. Second, the underlying software architecture of electronic health records constrains options for the design of clinical decision support systems5 and might not be the best site for innovative approaches.6

Finally, individual companies have created their own “language” for data within electronic health records—such as identifiers for drugs—which differs from internationally accepted standards.78 This lack of consistency means that independent clinical experts need to “learn the language” for each electronic health record platform in order to develop decision support within it. This creates a barrier to development, particularly development across different platforms.

Decision support systems are often designed to “bolt on” to the electronic health record. These third party applications typically retrieve data from the record and, in some cases, send recommendations back to clinicians through the same route. Bolt on systems are an important component of the current clinical decision support landscape but were excluded from Kwan and colleagues’ meta-analysis. Third party decision support applications might be more effective than the embedded approaches they evaluated, and further research should explore this possibility.

This study still has important implications. First, the premise that clinical decision support systems alone will improve clinical care should be re-examined. In the outpatient setting, where most of the included trials occurred, there are many substantial barriers to providing guideline recommended care. Reminders to clinicians in the form of decision support systems might not address issues such as the lack of time for preventive care,9 the greater efficacy of preventive care when delivered through population approaches, and the need for non-physician healthcare workers to participate in preventive care tasks.10

Patient engagement is critical to high quality care in outpatient settings and has not been a focus of clinical decision support systems to date.11 Systems typically do not address the need for patient participation, such as attendance for appointments or adherence to management recommendations. Clinical decision support systems should be considered only one part of an integrated approach to closing quality gaps in medical care, rather than a stand-alone solution.

We recommend a multifaceted strategy to enhance the effectiveness of clinical decision support systems in practice. First, vendors should remove barriers to creating, implementing, and sharing clinical decision support approaches that can be integrated within electronic health records, so that the most usable, feasible, and effective solutions can be identified and scaled up.

Second, the design should arise from a collaborative, multidisciplinary understanding of clinician and team workflows, informed by human factors engineering. Third, implementation of decision support systems must occur alongside co-interventions to influence clinicians’ behavior. Strategies such as clinician education and training, and behavioral “nudges” such as default orders for recommended care options, should be tested during implementation.

Fourth, further research is needed to integrate decision support systems with patient engagement strategies ranging from education and shared decision making aids to self-scheduling. Fifth, these systems can and should evolve, using machine learning and artificial intelligence, to deliver tailored and relevant decision support that minimizes alert fatigue.

Finally, we agree with Kwan and colleagues that evaluation of clinical decision support systems should include context specific implementation measurements, such as the number of dismissed alerts, the time required to address recommendations, and clinician satisfaction.

Clinical decision support systems will continue to be an area of innovation and research, and we will realize their true potential to improve healthcare and patient outcomes only if we learn what does not work as well as what does.

Footnotes

  • Research, doi: 10.1136/bmj.m3216
  • Competing interests: We have read and understood BMJ policy on declaration of interests and declare the following interests: US holds contract funding from AppliedVR, Inquisithealth, and Somnology; serves as a scientific/expert advisor for non-profit organizations HealthTech 4 Medicaid and for HopeLab; and has been a clinical advisor for Omada Health, and an advisory panel member for Doximity.

  • Provenance and peer review: Commissioned; not peer reviewed.

References