Abstract
PURPOSE We conducted a randomized controlled trial to compare the effectiveness of adding various forms of enhanced external support to practice facilitation on primary care practices’ clinical quality measure (CQM) performance.
METHODS Primary care practices across Washington, Oregon, and Idaho were eligible if they had fewer than 10 full-time clinicians. Practices were randomized to practice facilitation only, practice facilitation and shared learning, practice facilitation and educational outreach visits, or practice facilitation and both shared learning and educational outreach visits. All practices received up to 15 months of support. The primary outcome was the CQM for blood pressure control. Secondary outcomes were CQMs for appropriate aspirin therapy and smoking screening and cessation. Analyses followed an intention-to-treat approach.
RESULTS Of 259 practices recruited, 209 agreed to be randomized. Only 42% of those offered educational outreach visits and 27% of those offered shared learning participated in these enhanced supports. CQM performance improved within each study arm for all 3 cardiovascular disease CQMs. After adjusting for differences between study arms, CQM improvements in the 3 enhanced practice support arms of the study did not differ significantly from those seen in practices that received practice facilitation alone (omnibus P = .40 for blood pressure CQM). Practices randomized to receive both educational outreach visits and shared learning, however, were more likely to achieve the performance goal of blood pressure control in 70% of patients compared with those randomized to practice facilitation alone (relative risk = 2.09; 95% CI, 1.16-3.76).
CONCLUSIONS Although we found no significant differences in CQM performance across study arms, the ability of a practice to reach a target level of performance may be enhanced by adding both educational outreach visits and shared learning to practice facilitation.
Key words:
- primary health care
- cardiovascular disease
- quality improvement
- chronic illness
- prevention
- health promotion
- quantitative methods
- health services
- organizational change
- practice-based research
INTRODUCTION
Past efforts to transform primary care have included practice redesign based on medical home principles and adoption of electronic health records.1–4 More recently, primary care practices face increasing expectations to improve the quality of care they deliver to their patients.5,6 The Centers for Medicare & Medicaid Services (CMS) and others have moved to value-based reimbursement tied to improved performance on quality metrics.7 Gains in quality of care have been uneven across primary care settings and have faltered for some indicators.8 For example, from 1999 to 2016, there was no consistent improvement in blood pressure (BP) control among patients treated for hypertension.9
In 2015, the Agency for Healthcare Research and Quality launched the EvidenceNOW initiative, a $112 million national program that funded 7 implementation studies within regional cooperatives across the United States.10–12 The goal of the program was to understand how to best build and support the capacity of primary care practices to receive and incorporate new evidence into practice and thus improve their quality of care. The focus of improvement was on cardiovascular disease (CVD) risk factor control, thus aligning EvidenceNOW with the Department of Health and Human Services’ Million Hearts initiative. CVD remains the leading cause of avoidable morbidity and mortality in the United States.13 Clinical trials and population-based studies provide a strong evidence base for addressing 4 CVD risk factors in primary care settings through the corresponding ABCS interventions: aspirin therapy in high-risk patients, BP control, cholesterol management, and smoking screening and cessation counseling.14
Improving care quality requires developing the quality improvement (QI) capacity within primary care to support needed changes in how care is delivered. This requirement is especially true for smaller practices that comprise nearly one-half of all primary care settings.15 They often lack the staffing and resources to invest in the infrastructure and training needed to conduct effective QI activities.16,17 Even when these practices have resources and are committed to QI in principle, they often struggle with developing and implementing improvement strategies.18 Major disruptions such as clinician and staff turnover or changes in health information technology systems are common.19 In addition, these practices struggle with generating the performance reports needed to guide QI activities.20 Many have proposed providing external support to overcome these challenges and assist these practices in making changes required to improve care quality.11,21,22
Three specific external practice support strategies have some evidence of effectiveness in improving care quality in primary care settings: practice facilitation,23,24 shared learning, and educational outreach.22 Practice facilitation is delivered by a trained practice facilitator, usually external to the practice setting, who meets with those who work within a practice on a recurring basis over time to assist them with implementing a change in care delivery.24–27 As described by Berta and colleagues,26 “…facilitation is a concerted, social process that focuses on evidence-informed practice change and incorporates aspects of project management, leadership, relationship building, and communication.” Shared learning opportunities, where practices share information to learn QI practices from one another, can also improve care.28 Educational outreach occurs when a trained outside expert delivers brief educational content to a health care professional or clinical team.29,30 Although practice facilitation has a strong evidence base, little is known about the benefit of supplementing practice facilitation with shared learning, educational outreach, or both, to improve care quality in primary care settings.
Here we present the results of the Healthy Hearts Northwest (H2N) randomized controlled trial. The primary aim of the study was to compare the effectiveness of adding enhanced practice support interventions—shared learning opportunities, educational outreach visits, or both—to practice facilitation to improve performance on CVD risk factor management in smaller primary care practices. We hypothesized that improvement in clinical quality measures (CQMs) for CVD risk factors would be greater among practices assigned to the enhanced practice support arms of the study compared with practice facilitation alone.
METHODS
Study Design and Setting
H2N is 1 of 7 regional cooperatives funded by the Agency for Healthcare Research and Quality under the EvidenceNOW initiative.12 Details about the study protocol have been previously published.31 Briefly, a 2-by-2 factorial design was used to compare the effectiveness of adding shared learning, educational outreach visits, or both to practice facilitation. The trial therefore had 4 intervention arms: (1) practice facilitation alone, (2) practice facilitation and shared learning, (3) practice facilitation and educational outreach visits, and (4) practice facilitation with both shared learning and educational outreach visits. The study took place within smaller primary care practices across Washington, Oregon, and Idaho. To be eligible, practices were required to have fewer than 10 full-time clinicians in a single location and participate in stage 1 meaningful use federal certification for their electronic health record.32 This study was reviewed and approved by the Kaiser Permanente Washington Health Research Institute’s Institutional Review Board.
Interventions
Practice Facilitation
Practice facilitation support was provided by 2 organizations, Qualis Health in Washington and Idaho, and the Oregon Rural Practice-based Research Network (ORPRN) in Oregon. Sixteen facilitators provided 15 months of active support to the 209 randomized practices. The facilitation protocol included at least 5 face-to-face quarterly practice facilitation visits, with at least monthly contact (in-person visits, telephone calls, or e-mails) in between those in-person visits. Facilitators met with a QI team within each clinic to assist them in developing and testing plan-do-study-act cycles of improvement focused on the ABCS measures. Facilitators were guided in their activities by assessing and working with practices on 7 high-leverage changes adapted from prior work and experience with supporting medical home practice transformation: (1) embed clinical evidence into daily work, (2) use data to understand and improve care, (3) establish a regular QI process, (4) identify at-risk patients for outreach, (5) define roles and responsibilities for improving care, (6) deepen patient self-management support, and (7) link patients to resources outside of the clinic.33 Two separate in-person 1-day training sessions were held for facilitators from both organizations, and all facilitators participated in monthly telephone calls to harmonize their approach.
Shared Learning
It was not practical or feasible to offer a traditional learning collaborative given the geographic spread and timeline of the study. Instead, practices randomized to the shared learning arm of the study were offered the opportunity to visit an enrolled practice with a particularly strong or innovative approach to QI. We were concerned about the ability of these small practices to free up an individual to spend a day away from the practice for these visits. They were therefore also offered the opportunity to participate in 2 virtual 1-hour shared learning conference calls with such an exemplar practice. Exemplars were identified through nominations from practice facilitators and other members of the H2N study team. Shared learning focused on improvement strategies used by the exemplar practice and roles and responsibilities for improvement within the practice’s team. For those who participated in the telephone calls, each participating practice identified a promising approach or activity it was willing to try during the first call, then reported on its experience during the second call.
Educational Outreach Visits
The purpose of the educational outreach visits was to encourage use of a CVD risk calculator within patient encounters.34 The design of the educational outreach visit was based on the principles of academic detailing and is described in more detail elsewhere.35 Briefly, with input from a small advisory group of primary care clinicians, the study team developed the educational outreach visit protocol to address priority topics and issues related to implementing CVD risk calculation within daily clinic work. The advisory group of clinicians emphasized the need to keep the length of the educational outreach visit to less than an hour because of the high levels of competing demands faced by primary care clinicians. The educational outreach visit consisted of a 30-minute interactive webinar and telephone call between 1 or more clinicians and members of their care team within an enrolled clinic and a physician academic expert. The interaction focused on eliciting current practices, attitudes, and beliefs about CVD risk calculation, as well as perceived barriers, and on identifying specific strategies to overcome those barriers. A follow-up e-mail was sent to the participants and their practice facilitator documenting commitments made by the clinicians or members of their team during the call.
Randomization
Enrolled practices were categorized into 1 of 8 strata defined by their practice facilitation support organization (Qualis Health or ORPRN), prior practice experience obtaining customized data to drive QI (yes or no), and prioritization of the work of improving CVD risk factors (high or low). Within each stratum, practices were randomly assigned by a computer-generated randomization scheme to 1 of the 4 intervention arms.
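The protocol specifies only that assignment was computer generated within strata. Purely as an illustration of such a scheme (practice attributes, arm labels, and the balanced rotation below are hypothetical, not the study's actual algorithm), a stratified assignment could be sketched in Python as follows.

```python
import random

# Hypothetical arm labels for the 2-by-2 factorial design.
ARMS = [
    "facilitation only",
    "facilitation + shared learning",
    "facilitation + outreach",
    "facilitation + both",
]

def randomize_within_strata(practices, seed=2016):
    """Illustrative stratified assignment: shuffle practices within each
    stratum (support organization x prior data experience x CVD priority),
    then deal them out to the 4 arms in rotation so arms stay balanced."""
    rng = random.Random(seed)
    strata = {}
    for p in practices:  # each p: {"id", "org", "data_experience", "cvd_priority"}
        key = (p["org"], p["data_experience"], p["cvd_priority"])
        strata.setdefault(key, []).append(p["id"])
    assignments = {}
    for ids in strata.values():
        rng.shuffle(ids)
        for i, practice_id in enumerate(ids):
            assignments[practice_id] = ARMS[i % len(ARMS)]
    return assignments
```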
Data Collection and Measures
A practice questionnaire completed by an office manager in each practice was collected at baseline and provided information about the practice such as numbers of clinicians and staff, as well as characteristics of their patient population. Outcomes for the study, the CVD risk factor CQMs, were defined for each practice as the percent of patients in the risk factor target population who met the defined clinical quality criteria. All CQMs were endorsed by CMS.36 The primary study outcome, as stated in our published study protocol, was the CQM for BP control (CMS 165),37 defined as the percent of patients aged 18 to 85 years with previously diagnosed hypertension (denominator) from each practice who achieved adequate blood pressure control (<140/90 mm Hg) (numerator). Secondary outcomes were appropriate aspirin use (CMS 164)38 and tobacco use screening and cessation (CMS 138).39 We chose BP control a priori as the primary outcome because improvements in this measure require more marked changes in clinical care of patients than changes in workflow and documentation, which alone can sometimes result in improved rates of aspirin use or tobacco screening and cessation. The cholesterol and statin therapy measure (CMS 347) was under revision at the start of the study based on recent changes in evidence-based clinical guidelines; as a result, practices experienced considerable challenges in obtaining this measure from their electronic health record, so it is not included in the analysis. Each practice submitted numerator and denominator data on each CQM using a rolling 12-month look-back period. The study protocol called for practices to submit CQM data quarterly.
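As a concrete illustration of how such a practice-level CQM is computed, the sketch below shows the numerator-over-denominator calculation for the BP measure. It is a simplified sketch with assumed field names, not the authors' code or the full CMS 165 specification.

```python
from datetime import date

def bp_control_cqm(patients, as_of=date(2017, 12, 31)):
    """Illustrative BP-control CQM: among patients aged 18-85 with a
    hypertension diagnosis who were seen within the 12-month look-back
    window (denominator), the percent whose most recent BP reading is
    below 140/90 mm Hg (numerator). Each patient is a dict with assumed keys."""
    denominator = [
        p for p in patients
        if 18 <= p["age"] <= 85
        and p["hypertension_dx"]
        and (as_of - p["last_visit"]).days <= 365
    ]
    numerator = [
        p for p in denominator
        if p["last_systolic"] < 140 and p["last_diastolic"] < 90
    ]
    return 100.0 * len(numerator) / len(denominator) if denominator else None
```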
Analyses
The primary aim of the study was to compare the effectiveness of adding shared learning opportunities, educational outreach visits, or both to practice facilitation on the 3 CQMs. Our primary outcome was the practice-level change in the BP CQM from baseline to the postintervention follow-up. We defined baseline as the 2015 calendar year before randomization (January 1, 2015 to December 31, 2015) and follow-up as calendar year 2017 (January 1, 2017 to December 31, 2017).
Before analysis, CQM data were assessed for data quality. Two members of the coordinating center analysis team (M.L.A., E.S.O.), who were blinded to study arm, independently identified and adjudicated highly improbable values by examining trends in the data submitted by each practice. Discrepant evaluations were reviewed for consensus, and values found to be implausible were set to missing. Missing CQM data for the primary time points were imputed when possible by using values from adjacent quarters (next quarter carried backward for baseline and last value carried forward for follow-up measures). For our primary outcome, BP, data were imputed for only 6 practices in 2015 and 5 practices in 2017. As a sensitivity analysis, primary and secondary analyses were repeated using the original data as submitted; as study conclusions were unchanged, only results including imputed outcomes are reported.
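A minimal sketch of this adjacent-quarter imputation is shown below, assuming each practice's CQM values are keyed by quarter index; the data layout and function name are assumptions for illustration only.

```python
def impute_primary_timepoints(cqm_by_quarter, baseline_q, followup_q):
    """Fill a missing baseline with the next available quarter carried
    backward, and a missing follow-up with the last available quarter
    carried forward. `cqm_by_quarter` maps quarter index -> value or None."""
    values = dict(cqm_by_quarter)

    if values.get(baseline_q) is None:
        nxt = values.get(baseline_q + 1)
        if nxt is not None:
            values[baseline_q] = nxt  # next quarter carried backward

    if values.get(followup_q) is None:
        for q in range(followup_q - 1, baseline_q, -1):
            if values.get(q) is not None:
                values[followup_q] = values[q]  # last value carried forward
                break

    return values
```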
To assess intervention effects on the primary study outcome of BP, we fit a linear regression model at the practice level, with change in the percent of patients in the practice achieving the BP CQM target as the dependent variable and indicators for intervention groups as independent variables. Models used generalized estimating equations with a robust variance estimator, and accounted for potential correlation between practices with the same practice facilitator (cluster).40 Models adjusted for practice facilitation support organization (Qualis Health or ORPRN), baseline prior practice experience obtaining customized data reports (yes or no), baseline prioritization of QI work (high or low), and the baseline BP CQM. By adjusting for practice facilitation support organization, we intended to control for differences in history and background regarding primary care transformation efforts in Oregon vs Washington and Idaho, and other unmeasured differences between practice context in these 2 geographic areas. To assess statistical significance, we used the Fisher protected least significant difference approach to control for multiple comparisons. We first calculated an omnibus F test to assess whether there were any significant differences between intervention groups, and considered pairwise comparisons only if that test was statistically significant.
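The analyses were performed in Stata; purely as an illustration, a rough Python/statsmodels analogue of the model described above might look like the following. The column names, data file, exchangeable working correlation, and Wald omnibus test are assumptions, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per practice; file and column names are hypothetical.
df = pd.read_csv("practice_outcomes.csv")

model = smf.gee(
    "bp_change ~ C(arm, Treatment('pf_only')) + C(org) + C(data_experience)"
    " + C(qi_priority) + bp_baseline",
    groups="facilitator_id",          # practices sharing a facilitator form a cluster
    data=df,
    family=sm.families.Gaussian(),    # linear model for change in the BP CQM
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()                  # GEE reports a robust (sandwich) variance
print(result.summary())

# Omnibus test across the intervention indicators before any pairwise
# comparisons (a stand-in for the protected F test described in the text).
print(result.wald_test_terms())
```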
Analyses followed an intention-to-treat approach, analyzing practices according to randomized group assignment regardless of engagement with and participation in intervention activities. We attempted to obtain outcome data for all randomized practices, including those that did not actively participate in the intervention or dropped out of the study. To account for potential bias due to missing outcome data, however, we used inverse probability weights in the final outcomes model to balance intervention groups with respect to baseline practice characteristics. To construct the weights, we fit a logistic regression model with a binary indicator if the CQM outcome was observed as the dependent variable (yes or no), and practice characteristics as independent variables. The inverse of the estimated probability that the CQM outcome was observed was used for weighting in the outcome model. Similar analyses were conducted for secondary outcomes, the aspirin therapy and smoking CQMs.
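Continuing the same illustrative sketch (`df` is the practice-level data frame from above), the inverse probability weights could be constructed as follows; the covariates in the missingness model and all column names are assumptions.

```python
import statsmodels.formula.api as smf

# Probability that a practice's follow-up CQM was observed, modeled from
# baseline practice characteristics (illustrative covariates).
obs_fit = smf.logit(
    "cqm_observed ~ C(org) + C(ownership) + rural + n_clinicians + bp_baseline",
    data=df,
).fit()

# Inverse of the estimated probability of being observed; supplied as the
# `weights` argument when refitting the outcome GEE on observed practices,
# so they also "stand in" for similar practices whose outcomes were missing.
df["ipw"] = 1.0 / obs_fit.predict(df)
```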
In addition to the primary analysis, we also assessed group differences in practices’ ability to reach the Million Hearts goal of 70% or higher on the BP CQM at follow-up. Our rationale for doing so was that practices enrolled in the study were both told about this target and provided a visual dashboard during each quarterly meeting with their practice facilitator that showed how close they were to this goal. We fit a generalized linear model with log link and robust variance estimation to estimate relative risks of achieving the 70% threshold for each intervention group relative to the practice facilitation–only group. The model adjusted for the same variables as the primary outcome model, and accounted for clustering by practice facilitator. Analyses were performed using Stata statistical software, version 15.0 (StataCorp, LLC).
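Continuing the sketch once more, one common way to obtain relative risks with a log link and robust variance is a modified Poisson working model fit with GEE and clustered on facilitator; the threshold indicator and column names below are assumptions rather than the authors' specification.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# `df` is the practice-level data frame from the earlier sketches.
df["reached_70"] = (df["bp_followup"] >= 70).astype(int)  # Million Hearts goal

rr_fit = smf.gee(
    "reached_70 ~ C(arm, Treatment('pf_only')) + C(org) + C(data_experience)"
    " + C(qi_priority) + bp_baseline",
    groups="facilitator_id",          # clustering by practice facilitator
    data=df,
    family=sm.families.Poisson(),     # log link; GEE supplies a robust variance
).fit()

print(np.exp(rr_fit.params))          # relative risks vs the facilitation-only arm
```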
RESULTS
A total of 259 smaller primary care practices enrolled in the study. Of these, 50 withdrew before randomization, resulting in 209 randomized practices (Figure 1). Overall, the practices received an average of 7.9 (SD 3.5) in-person practice facilitation visits lasting 30 minutes or more during the 15-month intervention period. The number of visits did not differ significantly by study arm, with a range of 7.6 to 8.4 visits across arms. Of the 104 practices randomized to educational outreach visits either alone or in combination with a shared learning site visit, 44 (42%) participated. Of the 104 randomized to a shared learning site visit either alone or in combination with an educational outreach visit, 28 (27%) participated.
Practice characteristics are shown in Table 1. Most practices had 2 to 5 physicians, slightly more than 43% were rural, and 46% were owned by independent physicians.
A total of 183 (87.6%) of the 209 randomized practices successfully submitted numerator and denominator outcome data for both baseline (2015) and postintervention follow-up (2017) for the BP CQM, compared with 141 (67.5%) for the aspirin CQM and 151 (68.9%) for the smoking CQM. Across all practices, performance on each CQM improved from baseline to follow-up, from 61.6% to 64.7% of patients meeting the clinical target for BP, from 67.4% to 71.3% for aspirin, and from 74.6% to 81.6% for smoking. Unadjusted baseline and follow-up CQM outcomes by study arm are shown in Table 2. For example, for practices in the practice facilitation–only arm, the mean percent of patients with a prior hypertension diagnosis who had controlled BP (<140/90 mm Hg) was 58.2% at baseline and 62.5% at the postintervention follow-up. At baseline, CQM attainment tended to be highest for the smoking CQM across study arms and lowest for the BP CQM.
Estimates of intervention effects on the CQM outcomes during the 2-year study period are shown in Table 3. Improvements in CQM performance over time were found within each study arm, but there were no statistically significant differences between arms (overall P values >.40 for all comparisons). The largest improvements (adjusted mean changes of greater than 4%) were seen in the arms that included shared learning, either with practice facilitation or in combination with both practice facilitation and educational outreach visits.
The likelihood of achieving the Million Hearts performance goal of 70% on the BP CQM14 at follow-up was higher among practices randomized to receive shared learning: 35.1% (95% CI, 19.2%-51.0%) of practices in the practice facilitation and shared learning arm and 38.9% (95% CI, 26.7%-51.1%) of practices in the arm combining practice facilitation with both shared learning and educational outreach visits reached the goal, compared with 18.6% (95% CI, 4.0%-33.3%) in the practice facilitation–only arm (Figure 2). This difference reached statistical significance for practices randomized to receive both educational outreach visits and shared learning, with roughly a doubling of the likelihood of achieving the goal compared with practice facilitation alone (relative risk, 2.09; 95% CI, 1.16-3.76), but was not significant for practice facilitation and shared learning compared with practice facilitation alone (relative risk, 1.88; 95% CI, 0.62-5.69).
DISCUSSION
Smaller primary care practices provided with external support had modest improvements in their CQMs for CVD risk factors, although absolute changes in performance did not differ significantly between practices randomized to receive enhanced support (shared learning, educational outreach visits, or both) and practices randomized to receive practice facilitation alone. Those randomized to receive both educational outreach visits and shared learning in addition to practice facilitation, however, were more likely to achieve the Million Hearts BP performance goal of at least 70% of eligible patients compared with those randomized to practice facilitation alone. The change in the BP CQM from baseline to postintervention follow-up within the arms of the study that included shared learning was approximately twice as large as that seen with practice facilitation alone, but again, the observed differences between arms were not significant.
The conclusions from our intent-to-treat analysis are limited by the low rates of participation of practices in the enhanced support interventions: 42% among those offered an educational outreach visit and 27% among those offered shared learning. We redid the analysis using a per-protocol approach, that is, including only practices that participated in the interventions as described in the methods. Because of the small sample size, these per-protocol analyses were restricted to estimation of the main effects. We again did not find any significant differences (data not shown), so the results and conclusions were not altered by this analysis. In addition, we looked for evidence of participation bias by comparing participants with nonparticipants based on practice characteristics (size, ownership, rural vs urban location), their baseline BP CQM performance, and the priority they placed on improving CVD risk factors, and found no differences (data not shown).
It is possible that participating in the educational outreach visits, shared learning, or both exceeded the capacity of many practices to invest in additional improvement efforts beyond meeting with a practice facilitator. This barrier would be consistent with findings from other studies that describe primary care practice transformation as “hard work”41 with highly variable change capabilities across practices, and the development of “change fatigue.”3 One H2N facilitator noted in field notes: “Clinic feels overwhelmed by randomization arm ... even though I explained that it was simply an added learning opportunity … .” Some H2N practices had multiple QI initiatives underway at the same time as H2N. For one practice, the facilitator commented: “Single clinician site involved in 3 QI initiatives. Need to prioritize time for staff involvement in meetings and work across [other] initiatives. Not able to stretch to make this happen.” It was not uncommon for a practice to ask their facilitator for a break or some time off from working on CVD risk factor improvement: “…we are putting them on a hiatus period where they are not scheduling new visits with H2N but will continue to receive communications about the project.” In addition, many enrolled practices experienced a considerable disruption such as the departure of a clinician or office manager during the study, similar to findings from other EvidenceNOW collaboratives.19 Finally, it is worth noting that we were unable to provide any financial incentives or payments to the enrolled practices for participating in the study, limiting their ability to devote resources to study-related activities.
Why were practices randomized to both educational outreach visits and shared learning more likely to achieve a performance of at least 70% of patients with BP control, when the absolute change in this CQM was not significant? This finding may be attributable to the previously observed threshold effect in pay-for-performance evaluations.42 That is, people strive to reach a goal, but then stop further improvement once it is reached. Practices enrolled in this study were provided with feedback on their performance that included the 70% Million Hearts performance goal for BP. Given the overall baseline level of performance of 63.4% for the BP CQM, some practices may have curtailed further efforts to improve once they reached the 70% goal, limiting the absolute change in the BP CQM seen across study arms. When analyzed as a dichotomous outcome, however, this threshold resulted in a significant finding.
In addition to limited participation in the shared learning and educational outreach visit interventions, a few other limitations deserve note. These interventions were “light touch” with a low dose of contact time with the practices compared with the practice facilitation intervention. Given this light touch, the results presented here may not be entirely unexpected. It is also possible that observed improvements in CVD risk factor CQMs may be attributable to external factors such as the CMS Quality Payment Program.43 Findings from studies about the influence of financial incentives on quality of care are mixed at best, however, and some have concluded that evidence is lacking.44
Although the observed changes in performance on these CVD risk factors are small, they have great potential for population-level impact on CVD events such as heart attacks and strokes.45 For example, approximately 200,000 patients (in the 183 practices that reported valid measures at both time points) had a diagnosis of hypertension. The observed increase of 3% in the BP CQM translates to 6,000 additional hypertensive patients achieving a BP of less than 140/90 mm Hg across these practices. Given that people with BP above this threshold develop cardiovascular disease 5.0 years earlier,46 these modest improvements in performance may have substantial impact on subsequent cardiovascular events and mortality.
In conclusion, smaller practices can improve their performance on CVD risk factors with external support, and reaching a target level of performance may be enhanced by adding external supports such as educational outreach visits and shared learning opportunities to practice facilitation. These practices may lack the capacity to participate in these additional external supports, however. Additional internal resources, time, and people to accept the support offered may be required to achieve significant improvements in care quality.18,47,48
Acknowledgments
We deeply appreciate the practice facilitators from Qualis Health and the Oregon Rural Practice-based Research Network who devoted themselves tirelessly to outreach and support activities for the practices, and all the primary care practices across Washington, Oregon, and Idaho who bravely agreed to participate in this ambitious study.
Footnotes
Conflicts of interest: authors report none.
Funding support: This project was supported by grant number R18HS023908 from the Agency for Healthcare Research and Quality. Additional support was provided by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1 TR002319.
Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality or the official views of the National Institutes of Health.
Ethics approval: Kaiser Permanente Washington Health Research Institute’s Institutional Review Board reviewed and approved this study.
Trial registration: This trial is registered at www.clinicaltrials.gov, identifier NCT02839382. Registered July 18, 2016.
- Received for publication August 31, 2018.
- Revision received December 6, 2018.
- Accepted for publication January 9, 2019.
- © 2019 Annals of Family Medicine, Inc.