Abstract
PURPOSE The purpose of this study was to evaluate patient outcomes in the National Demonstration Project (NDP) of practices’ transition to patient-centered medical homes (PCMHs).
METHODS In 2006, a total of 36 family practices were randomized to facilitated or self-directed intervention groups. Progress toward the PCMH was measured by independent assessments of how many of 39 predominantly technological NDP model components the practices adopted. We evaluated 2 types of patient outcomes with repeated cross-sectional surveys and medical record audits at baseline, 9 months, and 26 months: patient-rated outcomes and condition-specific quality of care outcomes. Patient-rated outcomes included core primary care attributes, patient empowerment, general health status, and satisfaction with the service relationship. Condition-specific outcomes were measures of the quality of care from the Ambulatory Care Quality Alliance (ACQA) Starter Set and measures of delivery of clinical preventive services and chronic disease care.
RESULTS Practices adopted substantial numbers of NDP components over 26 months. Facilitated practices adopted more new components on average than self-directed practices (10.7 vs 7.7 components, P = .005). ACQA scores improved over time in both groups (by 8.3% in the facilitated group and by 9.1% in the self-directed group, P <.0001), as did chronic care scores (by 5.2% in the facilitated group and by 5.0% in the self-directed group, P = .002), with no significant differences between groups. There were no improvements in patient-rated outcomes. Adoption of PCMH components was associated with improved access (standardized beta [Sβ] = 0.32, P = .04) and better prevention scores (Sβ = 0.42, P = .001), ACQA scores (Sβ = 0.45, P = .007), and chronic care scores (Sβ = 0.25, P = .08).
CONCLUSIONS After slightly more than 2 years, implementation of PCMH components, whether by facilitation or practice self-direction, was associated with small improvements in condition-specific quality of care but not patient experience. PCMH models that call for practice change without altering the broader delivery system may not achieve their intended results, at least in the short term.
- Primary health care
- family practice
- organizational innovation
- patient-centered medical home
- National Demonstration Project
- patient-centered care
- outcomes assessment, patient
- practice-based research
INTRODUCTION
An emerging model to guide primary care practice improvement in the United States is the patient-centered medical home (PCMH). The PCMH is operationalized as a set of best practices for primary care delivery, but ideally also includes supportive changes in the larger system, including payment reform.1 In 2007, the major primary care organizations released a statement of guiding principles for the PCMH, including ready access to care, patient-centeredness, teamwork, population management, and care coordination.1,2 Subsequently, the National Committee for Quality Assurance created a set of practice attributes to be used for a program whereby practices can achieve recognition as PCMHs.3
Dozens of demonstration projects evaluating the PCMH are now under way across the United States, although few have sought to implement the entire set of best practices.4 Early reports from these projects are encouraging. In particular, coordination of care linked to primary care practices is substantially reducing overall costs while increasing the quality of care for patients with severe chronic illness.5,6 In addition, better outcomes at lower cost were noted after a large integrated delivery system reduced its primary care clinicians’ panel size, lengthened visits, and embedded care management in its electronic medical record (EMR).7 The cost savings in several of these projects exceed the added investment in primary care services.6,7
As these demonstration projects are poised to be disseminated more widely, important questions about the PCMH remain unanswered, including how best to define and measure the PCMH, whether certain measures of medical home attainment correlate more closely with improved outcomes, and how practices should develop into medical homes.
Although the rationale for the PCMH is drawn mainly from studies of single attributes of primary care, such as continuous relationships, early evidence suggests that more global measures of PCMH attainment are also associated with outcomes. In separate studies, 2 different PCMH measures were associated with fewer hospitalizations and emergency department visits8 and less disparity in access.9 Whether the PCMH is associated with other important outcomes, such as patient enablement, improved overall health status, and receipt of appropriate preventive and chronic disease care, is unknown.
In the current study, we evaluated patient outcomes from the National Demonstration Project (NDP), a 2-year project funded by the American Academy of Family Physicians. The NDP was designed to help primary care practices adopt a defined set of PCMH components that emerged from the Future of Family Medicine project.10 The NDP model included elements of access, care management, information technology, quality improvement, team care, practice management, specific clinical services, and integration with other entities in the health care system and community.11
To guide our evaluation, we framed 2 overall questions for analysis in the NDP. The first question was whether adoption of NDP model components and patient outcomes would be superior in practices that worked with a practice facilitator relative to those adopting them through a self-directed process. This question bears on future strategies for promoting adoption of the PCMH. The second question was whether adoption of NDP model components would improve patient outcomes, regardless of group assignment. Answering this question is important because there will be many different implementations of the PCMH, and it will be important to understand the relative effectiveness of different versions. In answering both questions, we felt that it would be important to evaluate 2 types of outcomes: first, a set of patient-rated outcomes that are considered fundamental pillars of primary care (eg, easy access to first-contact care, comprehensive care, coordination of care, and personal relationship over time) and second, quality of care for common conditions.
METHODS
Study Design
The study was a clinical trial with randomization at the practice level and observations at both the practice level and the patient level. Data on preventive service delivery, chronic care, and patient experiences were collected in the 2 study groups (facilitated and self-directed practices) at baseline, 9 months, and 26 months. The protocol for this study was approved by the institutional review board (IRB) of the American Academy of Family Physicians in Leawood, Kansas, and the IRBs of each of the participating institutions.
Sample and Intervention
The NDP was launched in June 2006 by the American Academy of Family Physicians to implement a new model of care consistent with the PCMH.10 The methods, sample, details of the intervention, practice change processes, and an emergent theory of practice change are described elsewhere in this supplement.12–18 In brief, 36 family medicine practices from across the United States were selected from 337 applicants. Practices were randomized into either a facilitated group or a self-directed group. Facilitated practices received ongoing assistance from a change facilitator; consultations from a panel of experts in practice economics, health information technology, and quality improvement; discounted software technology; training in the NDP model; and support by telephone and e-mail. They also were involved in 4 learning sessions and regular group conference calls. Self-directed practices were given access to Web-based practice improvement tools and services, but did not receive facilitator assistance.
Collection of Patient Outcome Data and Implementation Data
Methods of data collection are described in detail elsewhere.13 Practices provided confidential lists of consecutive patients seen after 3 index dates: baseline (July 3, 2006), 9 months (April 1, 2007), and 26 months (August 1, 2008). To minimize the Hawthorne effect, these dates were disclosed to the practices only after the patient visit windows had passed. Trained research nurses collected information on rates of delivery of preventive services and measures of chronic disease care either on site or by remote access of the practices’ EMRs. Using a specified protocol, 60 consecutive patients were selected for medical record audit in each practice at each index date.13 To collect patient ratings of the practices, a consecutive sample of 120 patients at each practice at each index date received a questionnaire on demographics, primary care attributes, satisfaction with the practice, health status, and patient-centered outcomes.
PCMH Measure
One of the authors (E.E.S.) collected information on the implementation of NDP model components by visiting each practice for a 2- to 3-day evaluation and by conducting telephone interviews with key informants in each practice to assess the presence of specific components. The NDP model PCMH measure was the proportion of the 39 measurable NDP model components that were implemented at baseline and at 26 months, as listed in Table 1. This measure emphasized the technological components of PCMH implementation.
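For concreteness, a minimal sketch of this scoring step is shown below, assuming each practice’s assessment is recorded as a simple present/absent checklist; the data structure and component labels are illustrative and are not the NDP’s actual assessment instrument.

```python
# Sketch: score one practice's NDP model implementation as the proportion of the
# 39 measurable components judged to be in place. Checklist keys are hypothetical.

def ndp_model_score(components_present: dict[str, bool], total_components: int = 39) -> float:
    """Return the fraction of NDP model components implemented (0 to 1)."""
    implemented = sum(1 for present in components_present.values() if present)
    return implemented / total_components

# Example: a practice assessed as having 17 of 39 components in place
example = {f"component_{i}": (i <= 17) for i in range(1, 40)}
print(f"{ndp_model_score(example):.0%}")  # 44%, comparable to the facilitated group's baseline average
```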
Patient Outcome Measures
We evaluated 2 categories of patient outcome measures: patient-rated outcomes and measures of the quality of care for specific conditions. Details for these measures are given in Table 2.
Patient-Rated Outcomes
We assessed patient-rated outcomes from patients’ responses on a questionnaire (the patient outcomes survey). These outcomes included primary care attributes, which were drawn from previously described measures of access, comprehensiveness, coordination, and continuous relationships.19–24 Global practice experience was a new measure developed for this study using the all-or-none composite quality score based on Institute of Medicine criteria.25,26 Patient empowerment combined a patient enablement index (PEI)27–30 and a measure of the consultation process, namely, consultation and relational empathy (CARE).28–30 Satisfaction with the service relationship combined 2 items pertaining to satisfaction with one’s physician from the Ambulatory Care Experiences Survey (ACES) short form and 3 items from an Accreditation Council for Graduate Medical Education (ACGME) cultural responsiveness survey.23,31,32 General health status was measured with a single item.33
Condition-Specific Quality of Care Outcomes
We obtained 3 condition-specific quality scores from a medical record audit. The Ambulatory Care Quality Alliance (ACQA) Starter Set measure includes 16 of 26 measures proposed by this consensus group, addressing both prevention and chronic disease.34 We calculated an overall prevention score by assessing receipt of age- and sex-specific interventions recommended by the US Preventive Services Task Force in July 2006.35 We calculated a chronic disease score by examining the percentage of patients with identified target conditions who met recommended quality measures for coronary artery disease, hypertension, diabetes, and hyperlipidemia. An overall percentage composite quality score was calculated for each of the condition-specific quality scores using the methodology of Reeves et al.26 We also attempted to measure quality of depression care and of acute care for upper respiratory tract infections, but the samples within practices were too small to reliably calculate estimates for these items.
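As an illustration of how such composites can be computed from per-patient audit data, the sketch below implements an overall percentage composite (indicators met divided by indicators for which patients were eligible, in the spirit of Reeves et al) alongside an Institute of Medicine–style all-or-none composite; the indicator names and eligibility structure are hypothetical.

```python
# Sketch of two composite quality scores computed from per-patient audit data.
# Each record maps an indicator to True (met) or False (not met); a patient is
# scored only on indicators for which he or she is eligible. Indicator names
# are illustrative; see Reeves et al for the formal definitions.

def overall_percentage_composite(patients: list[dict[str, bool]]) -> float:
    """Percentage composite: indicators met / indicators eligible, pooled across patients."""
    met = sum(sum(record.values()) for record in patients)
    eligible = sum(len(record) for record in patients)
    return met / eligible if eligible else float("nan")

def all_or_none_composite(patients: list[dict[str, bool]]) -> float:
    """All-or-none composite: share of patients meeting every indicator for which they are eligible."""
    scored = [record for record in patients if record]
    return sum(all(record.values()) for record in scored) / len(scored) if scored else float("nan")

audit = [
    {"bp_controlled": True, "ldl_tested": True},
    {"bp_controlled": False, "ldl_tested": True, "a1c_tested": True},
    {"mammography": True},
]
print(overall_percentage_composite(audit))  # 5 of 6 indicators met -> ~0.83
print(all_or_none_composite(audit))         # 2 of 3 patients met all eligible indicators -> ~0.67
```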
Analyses
Our overall analytic strategy was to assess the clinical and statistical significance of changes in patient outcomes in the NDP from baseline to the 26-month end point. Because the serial waves of patient surveys and medical record audits included different patient samples over time, repeated-measures analysis at the patient level was not appropriate; the unit of analysis was therefore the clinic. For analysis of outcomes, we used patient data aggregated by clinic.
To compare the effect of the facilitated and self-directed interventions at the clinic level, we used a full factorial repeated-measures analysis of variance (ANOVA) model in which group assignment, time (baseline vs 26-month end point), and the interaction effect of group and time were used as predictors of the outcomes. Time was the within-practice factor and group was the between-practice factor.
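As a rough illustration of this step, the sketch below aggregates patient-level scores to practice means and then fits a model with group, time, and their interaction. A mixed model with a random intercept per practice is used here as a stand-in for the full factorial repeated-measures ANOVA, and the column names (practice_id, group, wave, acqa_score) are assumptions about the data layout rather than the NDP dataset.

```python
# Sketch of the practice-level group-by-time analysis (an approximation, not the
# published model). Patient records are aggregated to practice means; group is the
# between-practice factor, wave (baseline vs 26 months) the within-practice factor.
import pandas as pd
import statsmodels.formula.api as smf

def group_by_time_analysis(patient_df: pd.DataFrame):
    # Unit of analysis is the practice: aggregate patient-level scores to practice means
    practice_means = (
        patient_df
        .groupby(["practice_id", "group", "wave"], as_index=False)["acqa_score"]
        .mean()
    )
    # A random intercept per practice stands in for the repeated-measures structure
    model = smf.mixedlm(
        "acqa_score ~ C(group) * C(wave)",
        data=practice_means,
        groups="practice_id",
    )
    return model.fit()

# result = group_by_time_analysis(audit_df)
# print(result.summary())  # the interaction term tests whether change over time differs by study group
```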
To evaluate whether adoption of NDP model components was associated with patient outcomes, we used a set of generalized linear repeated-measures models. Each patient-rated or condition-specific outcome at 26 months served as the dependent variable in a separate model, with the change in NDP model components included as a covariate, thereby controlling for the baseline value of the outcome. We evaluated the direction and statistical significance of the relationship between NDP model components and the outcomes.
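The sketch below approximates these association models with a standardized regression: the 26-month value of an outcome is regressed on the change in NDP model components while adjusting for the baseline value of that outcome, so the coefficient on the change term is a standardized beta (Sβ). This is a simplified ordinary least squares stand-in for the generalized linear repeated-measures models described above, and the column names are assumed.

```python
# Sketch of the association analysis (a simplified stand-in for the published models).
# Columns assumed per practice: <outcome>_baseline, <outcome>_26mo, delta_components.
import pandas as pd
import statsmodels.formula.api as smf

def standardized_beta(practice_df: pd.DataFrame, outcome: str) -> tuple[float, float]:
    cols = [f"{outcome}_26mo", f"{outcome}_baseline", "delta_components"]
    # z-score all variables so the regression coefficient is a standardized beta
    z = practice_df[cols].apply(lambda s: (s - s.mean()) / s.std())
    fit = smf.ols(f"{outcome}_26mo ~ delta_components + {outcome}_baseline", data=z).fit()
    return fit.params["delta_components"], fit.pvalues["delta_components"]

# s_beta, p_value = standardized_beta(practice_df, "acqa")  # e.g., Sβ for the ACQA score
```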
Note that power in these practice-level analyses was low. Depending on the specific model, power ranged from .30 to .57 for main effects and was even lower to detect interaction effects. Because of this limitation in power, we considered a difference having a P value of less than .15 to be a trend.
RESULTS
Sample Characteristics and Response Rates
A total of 36 practices enrolled in the NDP and 31 completed the study—16 in the facilitated group and 15 in the self-directed group. One facilitated practice withdrew because the IRB of its larger health system could not approve participation in the study, and the other facilitated practice closed during the NDP because of financial pressures. One self-directed practice withdrew because it found the NDP data collection requirements too burdensome in the context of other practice priorities, and 2 other self-directed practices closed during the NDP (1 when the rural hospital across the street closed and 1 when its larger health system decided to close the practice for system priorities beyond the practice itself).
A total of 1,067 patients from practices completing the NDP returned questionnaires at baseline (29% response rate), 882 patients did so at 9 months (24% response rate), and 760 did so at 26 months (21% response rate). The medical record audits included 1,964 patients at baseline (99.9% review rate) and 1,861 at the 26-month assessment (100% review rate). Analyses reported here include only the 29 practices with both baseline and 26-month data. One facilitated practice completed the study but was unable to provide baseline medical record data because its EMR data were lost during the NDP. Another facilitated practice was unable to provide a patient roster needed for the last wave of patient surveys because of competing priorities.
The characteristics of patients completing questionnaires and patients whose medical records were reviewed are shown in Table 3. Respondents from the facilitated practices were older than those from self-directed practices. The proportions of women, minorities, and those with higher educational attainment were similar across groups. A higher proportion of patients in the self-directed practices had been using the practice for 10 years or less.
Patient Outcomes in Facilitated vs Self-Directed Practices
At baseline, facilitated practices had an average of 17.0 NDP model components in place (44% of all components) and self-directed practices had an average of 20.1 components in place (52% of all components) (P = .02). Both facilitated and self-directed practices added NDP model components during the 26-month follow-up, but facilitated practices added more, 10.7 components vs 7.7 in the self-directed group (ANOVA, F test for group-by-time interaction: P = .005).
Table 4 shows a comparison of facilitated and self-directed practices’ patient-rated and condition-specific outcomes. In terms of condition-specific quality of care, we observed absolute improvements in ACQA scores over 26 months of 9.1% in the self-directed group and 8.3% in the facilitated group (ANOVA, F test for time [within-group] effect: P <.001). The group difference was not statistically significant (group-by-time interaction: P = .85). Absolute improvements in chronic care scores over 26 months were smaller, 5.0% in the self-directed group vs 5.2% in the facilitated group (ANOVA, F test for within-group effect: P = .002), and did not differ between groups (group-by-time interaction: P = .92). Absolute improvements in prevention scores were not statistically significant, but there was a trend for the group-by-time interaction favoring the facilitated group (P = .09).
In contrast, there were no significant improvements in patient-rated outcomes, including ratings of the 4 pillars of primary care (easy access to first-contact care, comprehensive care, coordination of care, and personal relationship over time), global practice experience, patient empowerment, and self-rated health status. There were trends for very small decreases in coordination of care (P = .11), comprehensive care (P = .06), and access to first-contact care (P = .11) in both groups.
Adoption of the NDP Model Components and Changes in Patient Outcomes
At baseline, practices had an average of 46% of the NDP model components in place (range, 20%–70%). Baseline scores for a composite of patient-rated primary care attributes, comprising the 4 pillars of primary care (easy access to first-contact care, coordination of care, comprehensive care, and personal relationship over time) plus global practice experience, averaged 3.5 on a 5-point scale (range, 3.1–4.2). The percentage of model components and the composite score of patient-rated primary care were not significantly correlated at baseline (Pearson correlation = −0.08).
Over the course of the NDP, practices increased their proportion of NDP model components adopted by an average of 24% (range, 0%–50%), while the patient-rated primary care attributes score decreased by an average of 0.05 (95% confidence interval, −0.13 to 0.02), or about one-fifth of a standard deviation. Changes in NDP model components and patient-rated primary care attributes over time were only weakly correlated (Pearson correlation = 0.11, P = .55). The covariation of practices’ progress in adopting the model components and patient-rated primary care attributes is displayed in Figure 1. Adoption of model components during the NDP was associated with improved access (standardized beta [Sβ] = 0.32, P = .04) and with better prevention scores (Sβ = 0.42, P = .001), ACQA scores (Sβ = 0.45, P = .007), and chronic disease scores (Sβ = 0.25, P = .08). Adoption of NDP model components was not associated with patient-rated outcomes other than access, including health status, satisfaction with the service relationship, patient empowerment, coordination of care, comprehensiveness of care, personal relationship over time, or global practice experience.
DISCUSSION
Main Findings
Practices in both the facilitated and self-directed groups were able to adopt multiple components of the NDP model of the PCMH over 26 months. Practices that received intensive coaching from a facilitator adopted more model components. Adopting these predominantly technological elements of the PCMH appeared to have a price, however, as average patient ratings of the practices’ core primary care attributes slipped slightly, regardless of group assignment.
Overall, only the condition-specific quality of care measures improved, and only modestly, over 26 months. Yet important variations were evident within this overall result. Practices that adopted more model components achieved better quality of care scores for chronic disease management, ACQA measures, and prevention services. The weak correlation between adoption of NDP model components and patient-rated outcomes indicates that there may be trade-offs, at least in the short term, in implementing these components.
Strengths and Limitations
The study had a number of notable strengths. Enrolled practices were diverse in size and geographic location, although most were small, nonacademic, and independent practices, similar in practice organization to those that still constitute the bulk of the US primary care workforce.36 Patient-level outcomes included a broad and deep array of measures including ratings of the primary care experience, health care quality (medical record measures), patient empowerment, health status, and well-validated measures of primary care’s core attributes, as well as an Institute of Medicine–defined summary score (global practice experience).25 We were able to obtain nearly complete data from the medical record audit for the condition-specific outcomes studied. Adoption of NDP model components was measured by a combination of direct observation and key informant reports. This multimethod process evaluation of practice transformation also enabled a quantitative assessment of practices’ progress in implementing model components, as reported elsewhere in this supplement.16
The study is subject to a number of important limitations, however. Practices chosen for the NDP from a large number of applicants likely represent a selected group of highly motivated practices participating in a high-profile demonstration project. Improvements observed during the NDP may therefore demonstrate what can be achieved when prior ambition and commitment are high. Although few practices dropped out, given the small number of practices initially enrolled, the power to detect small differences in outcomes was limited. It should also be emphasized that the study lacked a true control group, as the self-directed group received a low level of support; thus, inferences about whether attaining the PCMH as defined in the NDP model improves outcomes are based on the overall sample rather than the comparison of facilitated and self-directed practices. These inferences should also be interpreted in the context of a 26-month observation interval. Given the extensive changes asked of the practices, they may have needed more time for ongoing cycles of executing and adapting to change. Also, as noted in the Methods section, statistical power for time and group comparisons was limited.
Limitations in patient-level data included the low response rates to the patient questionnaires. Although selection biases are possible, they were likely to be similar in the samples compared over time. How much selection bias the low response rates might have caused is difficult to assess because IRB stipulations related to the Health Insurance Portability and Accountability Act (HIPAA) precluded characterizing nonresponders. And although targets for medical record audits were met, the medical record samples did not include enough patients with depression or upper respiratory tract infection to calculate reliable quality of care measurements for these conditions. Finally, the practices chosen for the NDP included few low-income and minority patients, limiting the study’s generalizability for those populations.
Implications
Answering the 2 research questions posed in the Introduction—whether adoption of NDP model components and patient outcomes are superior with facilitation, and whether adoption improves patient outcomes—is important for deriving insights for the PCMH’s future development. One robust finding was the dissociation between implementation of the NDP model’s predominantly technological components and improvements in patient-rated primary care attributes. In other words, some practices improved their patient experiences while also implementing the NDP components, whereas the majority did not, as shown in the figure. As Nutting et al16 report elsewhere in this supplement, adaptive reserve, a measure of participatory leadership and learning organization,37 improves a practice’s ability to implement the NDP model of the PCMH.
Slippage in patient-rated primary care attributes after the NDP began suggests that technological improvements may come at a price. The intense efforts needed to phase in new technology may have temporarily distracted attention from interpersonal aspects of care. For example, attending to an EMR in an examination room may interfere with the process of delivering patient-centered care,38 or the rapid-fire implementation of many model components may exhaust practice members’ energy for improving patient experience.39 It is also possible that practice change is difficult for patients, particularly in practices that have long-standing relationships and established functional routines for meeting patient needs. Because of this potential tradeoff, future PCMH evaluations will need to consider both disease-focused and patient-centered outcomes.40
Given that both facilitated and self-directed groups were successful at adopting NDP model components, why did outcomes change so modestly in the NDP? One potential explanation is that it takes time and additional work to turn a new process into an effective function.41 An example will illustrate the distinction. A disease registry is a process in which all patients with a specified disease are listed and tracked. Its function, however, is to improve outcomes for that disease, a goal that will not be accomplished simply by creating the registry. After the registry is created, practices must pay active attention to what the registry is telling them about their performance and must then make iterative changes designed to improve results. A hypothesis we are exploring in our qualitative evaluation15,16 is whether a facilitator helps turn a new process into an effective function, through follow-up work to ensure that the process is used effectively or by increasing the practice’s global effectiveness at adopting change.
A final set of implications emerges from comparing NDP results with those of other early demonstration projects.42 In one respect, the NDP was the most ambitious of these projects, attempting to implement nearly all of the PCMH attributes (except for payment reform) that were subsequently set forth in the joint statement by major primary care organizations,1 whereas other demonstration projects have focused more narrowly on a limited set of attributes, such as improving coordination of care,6,43,44 patient-clinician relationships,7 information technology for decision support,7,43 or connection to community resources.6 One hypothesis, therefore, to account for the findings of the NDP is that the other projects’ more limited implementation plans (especially those guided by progress on a set of clinical outcomes) may lead to better results than more global PCMH implementation plans. But it would be misleading to compare the NDP’s results with those achieved elsewhere without acknowledging other projects’ strong focus on redesigning delivery systems to support the core functions of primary care. Through strategies such as investing in additional clinicians to allow smaller patient panels and longer visits,7 hiring shared case managers, launching community-wide quality improvement initiatives, and aligning local payers around strategic payment reforms,6 these other projects critically altered the external determinants of practices’ success or failure. Without these essential reforms of the delivery system, practices’ own attempts to deliver on the promise of a medical home are unlikely to succeed. We therefore caution those interpreting the NDP results that we have evaluated only a single specific model for PCMH implementation and for a relatively short time.
CONCLUSIONS
Developing practices into PCMHs is a complex endeavor that requires substantial time, energy, and attention to potential trade-offs. In the NDP, 2 years of effort yielded substantial adoption of PCMH components but only modest improvement in quality of care and no improvement in patient-rated outcomes. Given how much was asked of the practices, 2 years may not have been enough time to pursue the iterative cycles of learning and testing improvements that are necessary to realize substantial gains in the patient experience.
Any interpretation of NDP findings must bear in mind what its strategy for change did and did not include. The change strategy focused on practices, asking them to implement an ambitious array of best practices from the NDP model of the PCMH. Judged on this basis, practices were successful in adopting a large number of new model components. What the change strategy did not include, however, were interventions to alter the delivery system beyond individual practices.45 Without fundamental transformation of the health care landscape that promotes coordination, close ties to community resources, payment reform, and other support for the PCMH, practices going it alone will face a daunting uphill climb.
Acknowledgments
The NDP was designed and implemented by TransforMED, LLC, a wholly-owned subsidiary of the AAFP. We are indebted to the participants in the NDP and to TransforMED for their tireless work. The authors also want to recognize the efforts of Luzmaria Jaén and Bridget Hendrix, who provided considerable support with medical record abstraction, data entry, and coordination of survey collection.
Footnotes
- Conflicts of interest: The authors’ funding partially supports their time devoted to the evaluation, but they have no financial stake in the outcome. The authors’ agreement with the funders gives them complete independence in conducting the evaluation and allows them to publish the findings without prior review by the funders. The authors have full access to and control of study data. The funders had no role in writing or submitting the manuscript.
- Disclaimer: Drs Stange, Nutting, and Ferrer, who are editors of the Annals, were not involved in the editorial evaluation of or decision to publish this article.
- Funding support: The independent evaluation of the National Demonstration Project (NDP) practices was supported by the American Academy of Family Physicians (AAFP) and The Commonwealth Fund. The Commonwealth Fund is a national, private foundation based in New York City that supports independent research on health care issues and makes grants to improve health care practice and policy.
- Publication of the journal supplement is supported by the American Academy of Family Physicians Foundation, the Society of Teachers of Family Medicine Foundation, the American Board of Family Medicine Foundation, and The Commonwealth Fund.
- Dr Stange’s time was supported in part by a Clinical Research Professorship from the American Cancer Society.
- Disclaimer: The views presented here are those of the authors and not necessarily those of The Commonwealth Fund, its directors, officers, or staff.
- Received for publication November 18, 2009.
- Revision received March 1, 2010.
- Accepted for publication March 26, 2010.
- © 2010 Annals of Family Medicine, Inc.