Abstract
PURPOSE Our goal was to describe an approach to patient-centered medical home (PCMH) measurement based on delineating the desired properties of the measurement relative to assumptions about the PCMH and the uses of the measure by Blue Cross Blue Shield of Michigan (BCBSM) and health services researchers.
METHODS We developed and validated an approach to assess 13 functional domains of PCMHs and 128 capabilities within those domains. A measure of PCMH implementation was constructed using data from the validated self-assessment and then tested on a large sample of primary care practices in Michigan.
RESULTS Our results suggest that the measure adequately addresses the specific requirements and assumptions underlying the BCBSM PCMH program—ability to assess change in level of implementation; ability to compare across practices regardless of size, affiliation, or payer mix; and ability to assess implementation of the PCMH through different sequencing of capabilities and domains.
CONCLUSIONS Our experience illustrates that approaches to measuring PCMH should be driven by the measures’ intended use(s) and users, and that a one-size-fits-all approach may not be appropriate. Rather than promoting the BCBSM PCMH measure as the gold standard, our study highlights the challenges, strengths, and limitations of developing a standardized approach to PCMH measurement.
INTRODUCTION
Interest in the patient-centered medical home (PCMH) model of primary care has increased considerably in recent years, with PCMH implementation under way in a variety of settings across the country.1–3 These changes are supported by a wide range of public and private sponsorship including the recently passed federal health care reform bill, the Patient Protection and Affordable Care Act (HR3590).1 Although early evaluations indicate general support for the potential of the PCMH to affect a variety of patient-related outcomes, these findings are subject to debate as many are based on studies in which PCMH was not directly measured, or was measured using metrics based on different assumptions about the PCMH and varying levels of testing for reliability and validity.4–6
A problem confronting researchers and policy makers is thus how to measure PCMH implementation in a standardized, cost-effective manner in order to generate valid, comparable evidence about its effectiveness. Here, we present one approach to PCMH measurement that has been developed and implemented on a large scale in Michigan through Blue Cross Blue Shield of Michigan (BCBSM). Our goal is not to promote this instrument as the gold standard for measuring PCMH, but to lay out the process and delineate the assumptions associated with the instrument’s development, aiming to better inform the field about the challenges, strengths, and limitations of developing a standardized approach to PCMH measurement.
PCMH Measurement Challenges
To date, approaches for measuring PCMH are widely divergent and range from creating a composite medical home score based on patient responses to survey questions assessing various relevant topics (eg, access, continuity, comprehensiveness, patient- or family-centered care, and coordination) to physician practices performing self-evaluations using tools such as the Medical Home Index.7,8 In other cases, investigators have conducted interviews and site visits to determine the number of medical home elements in place in practices.9 Each of these approaches has strengths and limitations.
Even the most carefully designed PCMH measures are subject to considerable variation in how PCMH components are defined and operationalized. For example, one of the key PCMH components, care coordination, has been measured as adequate communication among providers, receipt of assistance in coordinating care, assistance interpreting laboratory results, use of care plans for managing chronic conditions, family participation in community-based resources, and combinations thereof.7–11 Such variability may reflect the absence of a solid operational framework underlying PCMH (rather than simply a list of attributes) that would help identify how key elements of PCMH are designed to function, as well as the relevant outcome(s) for each element.
Need for a Better PCMH Measurement Tool
The inconsistency in defining what constitutes the PCMH, variation among patient and physician PCMH assessment tools, limited evaluation of measurement validity and reliability, and related debates regarding appropriate weighting of PCMH dimensions highlight the need for further work in this area. Without a validated, reliable, and standardized PCMH measurement tool, evaluating PCMH progression among primary care practices, and relating that progression to cost and quality outcomes, will be extremely challenging. Data compromised by measurement problems preclude sound assessment of the benefits of, and progress made in, PCMH implementation. This shortcoming puts at risk the substantial resources that the payer and provider communities have already invested in PCMH demonstration projects, and could inhibit policy makers from aligning appropriate incentives.
Cognizant of these issues, we took an approach to measuring PCMH implementation designed to build on previous measures while also addressing some of their limitations. Below, we discuss the BCBSM process of PCMH self-assessment by primary care practices, procedures to evaluate the validity and reliability of these data, development of PCMH measures using these self-assessment data, and application of the PCMH measures to a large and heterogeneous group of practices affiliated with a variety of health systems across Michigan.
METHODS
BCBSM PCMH Self-Assessment
A key premise of BCBSM’s PCMH measurement approach was that the process of practice transformation to a PCMH is an ambitious, long-term endeavor. Any PCMH measurement tool needed to be capable of assessing incremental progress toward full implementation of the PCMH when administered repeatedly. A second assumption was that health care delivery is a local activity and that payer-driven, top-down approaches to developing criteria for PCMH would not be consistent with the needs and realities of care delivery “on the ground.” In practice, this assumption meant that the functions and capabilities associated with the PCMH were developed together with practicing physicians. Because of this codevelopment, practices would be more willing to be held accountable for PCMH implementation progress. A third assumption was that the process of transforming a practice could vary markedly and that there was no one, prescribed route to achieving full implementation of the PCMH model. This assumption meant that the assessment needed to capture multiple implementation scenarios for the PCMH, and that incentives and recognition would not be biased or weighted toward any particular approach. Finally, the measurement approach had to be implementable across a large number and wide variety of primary care practices, allowing for standardized comparisons across practices (see Supplemental Appendix 1 available online at http://annfammed.org/content/11/Suppl_1/S74/suppl/DC1 for a description of the context of the BCBSM PCMH program).
The Urban Institute recently conducted a comprehensive, comparative evaluation of 10 PCMH assessment instruments that form the basis for many measures of PCMH.12 The instruments reviewed ranged from national recognition efforts such as those sponsored by the National Committee for Quality Assurance (NCQA), the Accreditation Association for Ambulatory Health Care (AAAHC), or Joint Commission, to those that were specific to given states (Minnesota, Michigan, Oklahoma). The authors concluded that all these instruments contained different arrays of PCMH capabilities and that different instruments tend to emphasize some capabilities more than others. According to the report, compared with the other assessment instruments reviewed, the BCBSM instrument places greater emphasis on care coordination, quality measurement, population management, and patient engagement and self-management. Moderate emphasis is placed on health information technology, and little relative emphasis is given to policies and comprehensiveness of care.12 Other distinguishing features of the BCBSM instrument were its in-depth concentration in a few areas (as opposed to relatively narrow coverage of many areas) and more granular and specific performance expectations incorporated in standards for achieving PCMH capabilities. These features generally conform to the intended use of the BCBSM tool as a multipurpose instrument, suitable for designation/recognition, practice improvement, and research, rather than recognition alone.
BCBSM organized the PCMH model into 13 distinct domains with 128 discrete capabilities to provide a framework for measuring incremental progress toward full implementation (see Table 1 for a summary and Supplemental Appendix 2, available online at http://annfammed.org/content/11/Suppl_1/S74/suppl/DC1 for a complete list of domains and capabilities). The domains were based on the Joint PCMH Statement of the medical societies. Subject matter expert teams for each domain (physicians and nurses) were formed to identify the key components of that domain and “translate” them into discrete, measurable steps to identify and support incremental progress. Interpretive guidelines detailing case definitions for each of these capabilities were developed through iterative discussions between BCBSM and physician organizations (POs). On the basis of feedback from these organizations, interpretive guidelines are continually refined, and several new capabilities have been added (eg, advance planning, palliative care). POs submit these data semiannually in June and December for each of their practices in the Self-Reported Database, an electronic administrative reporting tool. Completing the PCMH tool generally takes 4 to 6 hours for a new practice and 2 to 3 hours for a veteran practice. The POs typically ask the practices to submit their information about a month before the due date. Depending on the size of the PO, the process of aggregating the information into the final report can take several days to 2 weeks.
Validation Process
Because there was no existing gold standard for assessing PCMH implementation, and because the PCMH program has grown from 2,170 participating primary care practices in 2008 to 2,510 practices in 2012, we devoted considerable effort to validating the assessment instrument and data collection approach. A field team consisting of health care workers with previous experience in process improvement and practice transformation was recruited to conduct site visits with practices to facilitate practice transformation and to validate capability reporting. During the first year of PCMH data collection in 2008, BCBSM field staff conducted 114 site visits to POs and practices. No systematic selection of sites was used as these visits were regarded as mutual learning experiences focused on refining the definitions of the PCMH capabilities. Further reinforcing the need for greater specificity in the interpretive guidelines, the research team found a relatively weak association between PCMH assessment data collected directly from a random sample of practices and the corresponding data provided by the POs. This feedback resulted in not only more detailed PCMH interpretive guidelines that are updated annually by BCBSM with guidance from physicians, but also process improvements such as coordinated site visits with field staff to ensure reliability across field team members in their interpretation of capabilities and weekly discussions to share insights gained from these site visits.
BCBSM subsequently conducted 235 site visits between June 2009 and June 2010 to systematically validate capability reporting and assess reliability in reporting across the contributing POs. In 2011, a total of 233 practices were randomly selected for site visits using a 1-stage cluster sampling design. All PCMH capabilities reported to be in place were evaluated for overreporting as incentive-based reimbursements and fee enhancements tied to PCMH implementation were more likely to contribute to overreporting rather than underreporting. Site visit reviews of PCMH capabilities usually covered 50 to 110 capabilities and were on average 4 hours long. As a result of these increasingly systematic steps to validate and improve the reliability of the assessment process, the concordance between PO self-reporting of capabilities and field team assessment improved dramatically; 91% of the capabilities reported by POs were deemed operational by BCBSM staff during the 2011 validation process. Concordance meant that practice staff was able to demonstrate reported functional PCMH capabilities, such as a fully populated patient registry, to the field team during the site visit.
For the 2012 validation process, 500 sites were selected randomly using a 2-stage cluster sampling design to obtain a sample of practices stratified by PO. The first stage sampled primary care practice units; the second sampled up to 40 capabilities within each selected practice unit. Each of the 40 POs had at least 3 practices selected, with the remainder allocated in proportion to PO size. This design was implemented to reduce and standardize the duration of site visits and minimize the impact on the operating routines of the practices, as well as to increase the number of practices the field staff could visit. It also provided the field staff with time to assist practices in identifying strategies to overcome implementation barriers. By the end of 2012, approximately 820 practices, or one-third of participating practices, had been visited at least once since the PCMH program began.
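As an illustration of this design, the sketch below draws a 2-stage sample with a guaranteed minimum number of practices per PO and size-proportional allocation of the remainder, followed by capability sampling within each selected practice. All names, data structures, and the rounding rule are assumptions for illustration; the source does not describe the actual sampling code used by BCBSM.

```python
import random

def draw_validation_sample(practices_by_po, capabilities,
                           total_sites=500, min_per_po=3,
                           max_caps_per_site=40, seed=2012):
    """Stage 1: select practice units stratified by physician organization
    (PO), each PO receiving at least min_per_po practices and the remainder
    allocated in proportion to PO size. Stage 2: sample up to
    max_caps_per_site capabilities to review within each selected practice."""
    rng = random.Random(seed)
    total_practices = sum(len(p) for p in practices_by_po.values())
    remainder = total_sites - min_per_po * len(practices_by_po)

    sample = []
    for po, po_practices in practices_by_po.items():
        # Proportional-to-size allocation on top of the guaranteed minimum;
        # rounding means the realized total may differ slightly from 500.
        extra = round(remainder * len(po_practices) / total_practices)
        n_sites = min(len(po_practices), min_per_po + extra)
        for practice in rng.sample(po_practices, n_sites):
            caps = rng.sample(capabilities,
                              min(max_caps_per_site, len(capabilities)))
            sample.append({"po": po, "practice": practice, "capabilities": caps})
    return sample
```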
Measurement of PCMH Implementation
Using the self-reported PCMH data, the research team developed a practice unit–level PCMH score to reflect the degree of PCMH implementation across the 13 domains of function. This measure was constructed by combining self-reported PCMH capability information from the Self-Reported Database in a multistage process. In the first stage, capabilities reported as “fully in place” were assigned a value of 1, while capabilities reported as “not in place” were assigned a value of 0. When capabilities had multiple gradients, we calculated the capability score as a proportion of the maximum gradient. For example, the Patient-Provider Partnership domain asked respondents to identify the percentage of their patient population with established, documented patient-provider partnerships using the following options: 10%, 30%, 50%, 60%, 80%, or 90%. A response of 30% was therefore assigned a value of 0.33 (0.3/0.9). A total of 4 capabilities had gradients, accounting for 12 of the 128 capabilities in the interpretive guidelines.
In the second stage, all capability scores within a domain were summed, and then divided by the maximum capability score possible within that domain to represent the extent to which that domain was fully implemented. In the final stage, we calculated the overall PCMH implementation score as the mean of all 13 domain-specific percentage scores. Thus, a 1-unit change in the overall PCMH score reflects the difference between full PCMH implementation (1.0) and no PCMH implementation (0.0). This method intentionally gives equal weight to each PCMH domain to reflect the unknown relative importance of each, thereby not giving greater weight to domains with a greater number of capabilities. The scoring system also allowed for different approaches to PCMH implementation without giving greater or lesser emphasis to any particular sequencing of activities.
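Because the scoring method is fully specified by the description above, it can be summarized in a short sketch. The function names and data structures are illustrative assumptions; only the arithmetic (binary capabilities, gradient proportions, domain proportions, and an unweighted mean across the 13 domains) follows the text.

```python
def capability_score(response, gradient_max=None):
    """Stage 1: binary capabilities score 1 ('fully in place') or 0
    ('not in place'); gradient capabilities score as a proportion of the
    maximum gradient, e.g. 30% of a 90% maximum -> 0.3/0.9 = 0.33."""
    if gradient_max is not None:
        return response / gradient_max
    return 1.0 if response else 0.0

def domain_score(capability_scores):
    """Stage 2: sum of capability scores divided by the maximum possible
    (1.0 per capability), i.e. the share of the domain fully implemented."""
    return sum(capability_scores) / len(capability_scores)

def overall_pcmh_score(domain_scores):
    """Stage 3: unweighted mean of the 13 domain scores, so every domain
    counts equally regardless of how many capabilities it contains."""
    return sum(domain_scores) / len(domain_scores)

# Worked example with two hypothetical domains:
dom_a = domain_score([1.0, 0.0, capability_score(0.3, gradient_max=0.9)])  # ~0.44
dom_b = domain_score([1.0, 1.0])                                           # 1.0
print(round(overall_pcmh_score([dom_a, dom_b]), 2))                        # 0.72
```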
Baseline and Change in PCMH Implementation
For illustrative purposes, we calculated the PCMH implementation score for both the December 2009 and June 2010 reporting cycles. In addition to reporting the presence of capabilities, POs also report the estimated date of implementation for those capabilities. We used the dates of implementation reported for the June 2010 reporting cycle to correct for any overreporting of capabilities in the December 2009 time period due to differences in interpretation of capability case definitions from the interpretive guidelines and to account for any changes made to the interpretive guidelines between reporting periods. This correction creates greater comparability of the 2 measurement time points in order to assess change in the degree of PCMH implementation. Change in implementation was recorded as the difference between the June 2010 and the December 2009 PCMH implementation scores.
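A minimal sketch of this correction and of the change score follows, assuming each June 2010 report maps capabilities to their reported implementation dates; the field names and date handling are hypothetical, as the source specifies only that reported implementation dates from the June 2010 cycle were used to reconstruct December 2009 status.

```python
from datetime import date

DEC_2009_CUTOFF = date(2009, 12, 31)

def corrected_dec_2009_status(june_2010_report):
    """Rebuild December 2009 capability status from the implementation dates
    reported in June 2010: a capability counts as in place at baseline only
    if its reported implementation date falls on or before the cutoff."""
    return {
        cap: (impl_date is not None and impl_date <= DEC_2009_CUTOFF)
        for cap, impl_date in june_2010_report.items()
    }

def change_in_implementation(score_june_2010, score_dec_2009):
    """Change in PCMH implementation over the 6-month reporting interval."""
    return score_june_2010 - score_dec_2009
```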
To assess the ability of the PCMH implementation score to capture differences in implementation between practices, we plotted the distribution of the PCMH implementation score and examined the variability in scores between practices. As practice size has been commonly viewed as a strong predictor of level of implementation, we examined the relationship between these 2 measures to assess the extent to which this score captured the impact of practice size differences and the variability in implementation by practice size.13,14
Finally, we assessed the distribution of implementation scores within each PCMH domain to identify PCMH domains or capabilities that the population of practices had focused on implementing during the first 2 years. In addition to the individual PCMH domain scores, we measured the initiation of capabilities (the presence of at least 1 capability in a domain) and the extent to which practices completed implementation of all capabilities within each domain. These measures are intended to demonstrate the variability across practices in their initial implementation efforts.
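The two supplementary measures can be expressed as simple predicates over a domain's capability scores; the names below are illustrative rather than taken from the source.

```python
def domain_initiated(capability_scores):
    """Initiation: at least 1 capability in the domain is (partly) in place."""
    return any(score > 0 for score in capability_scores)

def domain_completed(capability_scores):
    """Completion: every capability in the domain is fully in place."""
    return all(score >= 1.0 for score in capability_scores)
```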
RESULTS
Among the 2,494 primary care practices participating in the BCBSM PCMH program, 59.6% were solo physician practices, 23.4% had 2 to 3 physicians, 8.8% had 4 to 5 physicians, and 8.1% had 6 or more physicians. Figure 1 demonstrates the relationship between the PCMH implementation overall score and practice size in December 2009. The overall mean PCMH score was highest in practices with 6 or more physicians and decreased as the number of physicians in the practice decreased. There was also substantial practice variability in the PCMH implementation score within each category of practice size.
Table 2 summarizes the PCMH capabilities reported as implemented by the practices for the time periods of December 2009 and June 2010, including the overall PCMH implementation score and domain-specific implementation scores. Overall, 76.0% of practices had implemented at least 1 PCMH capability as of December 2009, and 89.4% had implemented at least 1 capability as of June 2010. The mean PCMH implementation score rose from 0.18 in December 2009 to 0.31 in June 2010, an overall average change in implementation score of 0.13 for the 6-month time period between reporting cycles. The median PCMH implementation score rose from 0.11 to 0.27 during this same time period.
Nearly one-half (44.5%) of participating practices had completed the 2 capabilities within the e-Prescribing domain by December 2009, and this value rose to 59.3% in June 2010. Although e-Prescribing was the domain with the highest rate of completion, more practices had initiated capabilities within the domains of Extended Access, Test Tracking, Individual Care Management, Preventive Services, Linkage to Community Services, and Specialist Referral Process as of June 2010. The Preventive Services domain was the only other functional domain where at least 10% of practices had implemented all the capabilities, increasing from 3.3% of all practices in December 2009 to 14.1% of all practices in June 2010.
Among the functional domains initiated in 2008, the highest mean implementation scores occurred within e-Prescribing, Test Tracking, and Extended Access. Although the mean implementation score within the Patient-Provider Partnership domain was higher than the mean score for the Patient Registry domain, a greater proportion of practices had initiated implementation in the latter, highlighting the variation in how practices approached PCMH implementation. In some domains, fewer practices initiated implementation, but those that had started were able to make substantial progress within that domain. In other instances, many practices initiated implementation within a domain but, because of the challenges inherent in realizing the capabilities within that domain, progressed more slowly.
Figure 2 illustrates the domain-specific variability in PCMH implementation among practices. Of the functional domains initiated in 2009, greatest mean implementation occurred within Preventive Services, Linkage to Community Services, and Specialist Referral, whereas little implementation had occurred within Care Coordination and Patient Web Portal. Self-Management Support was the functional domain where the fewest practices had initiated implementation.
Table 3 summarizes the change in implementation that occurred over the 6-month period between the December 2009 and June 2010 reporting cycles for practices with reported capabilities during both time periods. The mean overall change in PCMH implementation was 0.13, and much of this change was accounted for by large gains within specific domains. During this 6-month cycle, the greatest change in implementation occurred within the Preventive Services domain (0.22) and the Specialist Referral Process domain (0.21), whereas the Patient Web Portal domain saw the least change (0.04). Across domains, very few practices saw declines in their domain-specific implementation. The majority of practices did not concentrate implementation efforts in any particular domain, suggesting widely divergent approaches to developing PCMH capabilities.
DISCUSSION
We describe here the assumptions and processes for developing a measure of PCMH implementation suitable for designation, practice improvement, and research purposes. Descriptive analyses of the measure using data from primary care practices in Michigan suggest that it adequately addresses the specific requirements and assumptions underlying the BCBSM PCMH program—ability to assess change in level of implementation; ability to compare across practices regardless of size, affiliation, or payer mix; and ability to assess implementation of PCMH through different sequencing of capabilities and domains. Equally important, these same attributes and the careful validation process make the measure of PCMH implementation suitable for research purposes, especially in studies designed to assess the relationship of PCMH implementation with outcomes related to cost and quality or, in conjunction with other data, to identify the types of practices that are better or worse candidates for this type of transformational change.
This measurement approach provides additional programmatic value by describing not only implementation that has occurred, but also opportunities for additional PCMH implementation. Such a measurement approach may be an appropriate method to assess the long-term time and resource investments needed to fully realize PCMH in practice settings even when practices have already demonstrated substantial achievement in implementing PCMH capabilities. Policy makers may find this measurement approach (or similar approaches) useful for monitoring the progressive adoption of PCMH infrastructure when considering the initiation of PCMH-related programs. Despite these positive results, further development of the measure is warranted. For example, sensitivity analysis based on different weightings of the PCMH domains (vs equal weighting) might be performed to see whether and how results would be affected by alternative scoring approaches.
Our example illustrates that approaches to measuring PCMH should be driven fundamentally by the intended use(s) and users of the measure. We make no claim that our PCMH measure is suitable for all purposes or for all audiences. Instead, our goal was to illustrate a process by which PCMH measurement was developed to meet a specific set of assumptions about PCMH. Given the complexity and somewhat ambiguous operational qualities of PCMH, it is probably not realistic to expect a universal, one-size-fits-all measure; however, we strongly encourage any study or assessment of PCMH to articulate its goals with great clarity, and to establish a clear connection between those goals and the attributes of the metrics used to evaluate them.
Acknowledgments
We wish to thank BCBSM for generously providing data for this study.
Footnotes
- Conflicts of interest: authors report none.
- Funding support: This research was funded by a grant from the Agency for Healthcare Research and Quality (R18 RFA-HS-10-002).
- Disclaimer: The views expressed in this manuscript are exclusively those of the authors, and do not represent the opinions or position of any organization.
- Received for publication May 25, 2012.
- Revision received October 12, 2012.
- Accepted for publication October 29, 2012.
- © 2013 Annals of Family Medicine, Inc.