Abstract
PURPOSE We wanted to determine how much it costs primary care practices to participate in programs that require them to gather and report data on care quality indicators.
METHODS Using mixed quantitative-qualitative methods, we gathered data from 8 practices in North Carolina that were selected purposively to be diverse by size, ownership, type, location, and medical record format. Formal practice visits occurred between January 2008 and May 2008. Four quality-reporting programs were studied: Medicare’s Physician Quality Reporting Initiative (PQRI), Community Care of North Carolina (CCNC), Bridges to Excellence (BTE), and Improving Performance in Practice (IPIP). We estimated direct costs to the practice and on-site costs to the quality organization for implementation and maintenance phases of program participation.
RESULTS Major expenses included personnel time for planning, training, registry maintenance, visit coding, data gathering and entry, and modification of electronic systems. Costs per full-time equivalent clinician ranged from less than $1,000 to $11,100 during program implementation phases and ranged from less than $100 to $4,300 annually during maintenance phases. Main sources of variation included program characteristics, amount of on-site assistance provided, experience and expertise of practice personnel, and the extent of data system problems encountered.
CONCLUSIONS The costs of a quality-reporting program vary greatly by program and are important to anticipate and understand when undertaking quality improvement work. Incentives that would likely improve practice participation include financial payment, quality improvement skills training, and technical assistance with electronic system troubleshooting.
INTRODUCTION
Clinicians are increasingly asked to be externally accountable for the quality of their work and the health of the patient populations they serve.1,2 Data-based quality improvement programs have been the norm in US hospital settings for years,3,4 and in 2007 Medicare introduced a program that offers financial incentives for adequately reporting on a set of quality measures pertaining to outpatient care. Although participation in this and other quality-data–reporting programs has thus far been voluntary, it is likely that such programs, especially those increasingly supporting the patient-centered medical home model, will become the standard for primary care quality improvement and associated clinician reimbursement. Participation in certain quality improvement activities also provides a mechanism for satisfying the quality improvement requirement for maintenance of board certification by the American Board of Family Medicine.5
Despite the enthusiasm for quality improvement, reporting activities have occurred with relatively little regard for the challenges primary care practices face in collecting and reporting requested data. These challenges include inadequate data collection and reporting systems, multiplicity and inconsistency of measures required by different quality improvement organizations, the need to consolidate or reorganize multiple paper and electronic data sources, and insufficient financial resources to maintain office systems and educate office personnel.6,7 In 2006 the US Agency for Healthcare Research and Quality (AHRQ) held a national conference of stakeholders on the reporting of performance data by office practices. In addition to the system issues identified as barriers to reporting performance data, the AHRQ report suggested that the cost associated with reporting performance measures, particularly when juxtaposed against the shrinking profit margin of primary care practices,8,9 may be a major factor underlying many of the identified barriers to adoption.7
To address this issue, we sought to determine the costs incurred by a sample of primary care practices when implementing and maintaining participation in 4 quality-reporting programs. We studied 8 demographically diverse primary care practices in North Carolina, each of which participated in at least 1 of the quality-reporting programs under study. Costs were estimated both for the practices themselves and, where applicable, for on-site assistance provided by the quality-reporting program.
METHODS
Programs Studied
The Physician Quality Reporting Initiative (PQRI) (http://www.cms.hhs.gov/pqri/) is Medicare’s voluntary pay-for-reporting program that began as a 6-month pilot on July 1, 2007. Clinicians chose from 74 quality measures, reported on at least 3, and submitted data as “G” codes on claim forms. To qualify for an incentive of up to 1.5% of the Medicare allowable charges during the reporting period, practices were required to report data on a minimum of 80% of visits applicable to each chosen measure.
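To make the reporting arithmetic concrete, the sketch below computes whether a practice qualifies for the PQRI bonus and estimates the payment. The 3-measure minimum, the 80% per-measure reporting threshold, and the 1.5% bonus rate come from the program description above; the measure names, visit counts, charge total, and function name are hypothetical illustrations, not program specifications.

```python
# Minimal sketch of the PQRI reporting arithmetic described above.
# Visit counts and charge totals are hypothetical; only the 3-measure
# minimum, 80% threshold, and 1.5% bonus rate come from the text.

def pqri_bonus(measure_reports, medicare_allowable_charges):
    """Return the estimated incentive payment, or 0 if the practice does not qualify.

    measure_reports maps a measure name to a tuple of
    (visits reported with a G code, visits applicable to that measure).
    """
    if len(measure_reports) < 3:                            # must report on at least 3 measures
        return 0.0
    for reported, applicable in measure_reports.values():
        if applicable and reported / applicable < 0.80:     # 80% threshold per chosen measure
            return 0.0
    return 0.015 * medicare_allowable_charges               # up to 1.5% of allowable charges


# Hypothetical example: 3 measures, all above the 80% reporting threshold.
reports = {"hba1c": (95, 110), "bp_control": (88, 100), "ldl": (42, 50)}
print(pqri_bonus(reports, 200_000))  # 3000.0
```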
Improving Performance in Practice (IPIP) (http://www.ncafp.com/home/programs/ipip) is a state-based quality improvement initiative with initial pilot programs in North Carolina and Colorado. The IPIP program provided consultants to help practice staff implement quality improvement, redesign workflow, and collect and report data specific to diabetic or asthmatic patients. Program-supported quality data measures were drawn from national organizations, such as the National Quality Forum, the National Committee for Quality Assurance, and the Bureau of Primary Health Care. Disease registry software was offered to interested practices, and a total of $2,000 was provided for participation and submission of the first data report.
Bridges to Excellence (BTE) (http://www.bridgestoexcellence.org) is a not-for-profit organization that designs programs to encourage quality improvement in primary care. The BTE program was implemented by Blue Cross/Blue Shield of North Carolina as a 3-year pilot program. Physicians were able to achieve recognition and financial rewards in 3 distinct areas: diabetes, cardiac disease, and office system innovation.
Community Care of North Carolina (CCNC) (http://www.communitycarenc.com/) is an integrated Medicaid program. CCNC provides case managers and offers a per-member per-month financial incentive to practices for disease management services for assigned Medicaid patients. To monitor quality, CCNC conducts and pays for annual medical record audits of diabetes and asthma care, creates summary reports of quality data, hosts regional network-wide meetings, and assembles clinical tools to assist practices.10
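Because the CCNC incentive is structured as a per-member per-month payment, its magnitude scales with the number of assigned Medicaid patients. A minimal sketch of this arithmetic follows; the enrollment figure and the per-member per-month rate are hypothetical assumptions for illustration, as the actual rates paid during the study period are not reported here.

```python
# Hypothetical illustration of a per-member per-month (PMPM) incentive.
# The enrollment count and PMPM rate below are assumptions for
# illustration only; actual CCNC rates are not reported in this article.
assigned_medicaid_patients = 400
pmpm_rate = 2.50                      # dollars per member per month (hypothetical)
annual_incentive = assigned_medicaid_patients * pmpm_rate * 12
print(annual_incentive)               # 12000.0
```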
Practice Selection
The project team purposively selected 8 practices in North Carolina that were successfully participating in at least 1 of the above quality-reporting programs. These practices were diverse by size, ownership, specialty, location, and medical record format. To develop a list of potential study practices, we conducted a telephone survey of more than 100 practices in the North Carolina Network Consortium and solicited recommendations from key informants in stakeholder organizations involved in quality data work. Nine practices were approached to generate the 8 study practices. The final sample included 4 for-profit practices, 3 non-profit practices, and 1 teaching practice.
Data Collection and Cost Estimation
Data collection consisted of preparatory telephone interviews; 1 or 2 practice site visits totaling 4 to 6 hours per practice, which involved the 2 project co-directors (P.D.S. and J.H.), an economist (S.S.) and/or finance graduate student (S.H.), and a qualitative data collector (T.W. and/or S.Z.); questionnaires on the practice environment administered to office personnel; and post-visit communications required to clarify questions and obtain feedback on estimates.
To collect practice costs, we first generated program-specific lists of reporting requirements and the steps involved in data generation and reporting. These lists were then incorporated into program-specific and practice-specific data collection instruments, which were tested in 2 practices and reviewed by the project’s national advisory committee before fielding.
In-practice expenses were defined as newly created costs resulting from program participation. They were divided into personnel costs and nonpersonnel costs. Personnel costs included work time directly related to collecting data elements, creating new office procedures to enhance data capture, reporting performance data, and patient contact efforts. We included costs for supervision, management, billing services, consultants, and clerical support, as described by Dodoo et al.11 Fringe benefits were estimated at 22%, a figure that is consistent with reported rates.12 Nonpersonnel costs included building space, depreciation, computer hardware, software, paper products, office supplies, and copying costs directly attributed to quality measurement and reporting functions.
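As an illustration of how these components combine, the sketch below loads personnel time with the 22% fringe-benefit rate and adds nonpersonnel expenses. The hours, hourly wages, and nonpersonnel amounts are hypothetical and do not correspond to any study practice; only the 22% fringe rate and the personnel/nonpersonnel split come from the description above.

```python
# Minimal sketch of the in-practice cost estimate described above.
# Hours, wages, and nonpersonnel amounts are hypothetical; only the
# 22% fringe-benefit rate comes from the methods description.

FRINGE_RATE = 0.22

def personnel_cost(hours, hourly_wage):
    """Wage cost for program-related work time, loaded with fringe benefits."""
    return hours * hourly_wage * (1 + FRINGE_RATE)

# Hypothetical implementation-phase estimate for one practice.
costs = (
    personnel_cost(10, 90)    # clinician planning meetings
    + personnel_cost(24, 22)  # staff training and data entry
    + personnel_cost(6, 35)   # billing/clerical support
    + 450                     # nonpersonnel: software, supplies, copying
)
print(round(costs, 2))        # ≈ 2448.36
```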
We estimated the cost of services provided on-site by personnel supported by the respective quality improvement organizations. We also collected information on the incentive payments provided for successful program participation. Incentive payment estimates were based on the best available information when funds had not yet been received. Costs incurred by the program’s organization are relevant in estimating the overall societal cost of quality improvement involvement by medical practices.
Costs were estimated for 2 phases of program participation: implementation and maintenance. Implementation phase costs included professional decision-making time (eg, webinars, meetings), staff and leadership training, office tool development, and other work leading up to the submission of the first data report. Maintenance phase costs were those involved in collecting and reporting data on an ongoing basis, during which only minor changes were required for continued participation.
To facilitate within-program comparisons, cost data are expressed as the total cost to the practice and as this same cost divided by the number of clinician full-time equivalents. We use the term “per-clinician full-time equivalent (FTE)” to take into account clinicians who worked less than full-time. Estimated incentive reimbursement dollars are also provided using the per-clinician FTE denominator.
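A hypothetical example of this normalization (the practice size and cost figure below are illustrative, not study data):

```python
# Per-clinician FTE normalization: a practice with 3 full-time clinicians
# and 1 half-time clinician has 3.5 clinician FTEs. Figures are hypothetical.
total_implementation_cost = 10_000
clinician_ftes = 3 + 0.5
print(round(total_implementation_cost / clinician_ftes))  # 2857 per-clinician FTE
```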
The research protocol was approved by the Institutional Review Board of the University of North Carolina.
Qualitative Data Collection and Analysis
We gathered and analyzed qualitative data from our standardized and open-ended interviews and from the surveys of practice staff, clinicians, and administrators. Components of the interviews included a history of quality improvement efforts in the practice, information on leadership and practice team function, and barriers to current program implementation. We digitally recorded the interviews and took notes during them. The recordings were later transcribed and read by the study team, who discussed and reached consensus on identified themes and narrative examples that best typified the themes. Although those themes and detailed results are not presented in this manuscript, much of the explanatory information provided throughout the results section was obtained during these interviews.
RESULTS
Table 1 details the characteristics of the participating primary care practices, and Table 2 displays the cost categories generated by this process. Table 3 summarizes estimated practice-level implementation and maintenance costs, program costs, and incentive payments provided by the program. Table 3 also lists each practice’s major cost sources.
Physician Quality Reporting Initiative
Among the 4 practices studied, practice-level implementation costs ranged from $920 to $22,200, with per-clinician FTE costs ranging from $368 to $11,100. Estimated annual practice-level maintenance-phase costs ranged from $207 to $12,200, or from $83 to $4,329 per-clinician FTE. Major cost sources included planning meetings, clinician time required to gather and code data, information technology system modification, and staff time to verify the accuracy of coding by clinicians. PQRI provided no on-site assistance, so all costs were incurred by the practice. Incentive payments were estimated to range between $0 and $7,000 per practice.
The main sources of variation among the 4 PQRI practices related to the differing nature of the work required by practices A, D, and H to get their billing or electronic health record systems to correctly communicate the electronic elements to Medicare. Practice D, which had the highest implementation costs, did not have on-site information technology personnel or the assistance of an affiliated larger organization; as a result, an estimated 462 hours of staff time were required to work through interoperability processes with its billing staff, revenue management company, laboratory vendor, and Medicare personnel. In the maintenance phase, costs to practice D remained high because of the need to pay for external information technology assistance. In contrast, practice B’s costs were lower because it used a reformatted paper superbill to submit data to Medicare. Another major source of variation was the amount of practice change required to gather the data needed for coding. For example, practice B also participated in IPIP and had readily available diabetes data for PQRI reporting, whereas practices D and H incurred new and persistent costs from the increased clinician time required to enter data elements.
Improving Performance in Practice
Practice-level implementation costs for the 3 practices that participated in IPIP ranged from $2,689 to $18,210 per practice, or $1,428 to $3,035 per-clinician FTE. On-site costs incurred by the program ranged from $1,000 to $2,545 per practice. Estimated annual practice-level maintenance costs ranged from $4,229 to $11,563 per practice, or $1,927 to $4,229 per-clinician FTE, plus an additional $141 to $1,673 contributed by the program. Major cost sources included staff meetings and training, maintaining a patient registry, data entry or abstraction, and information technology system modification.
Practice F incurred the highest IPIP participation costs. Of these, $9,400 represented meetings attended by clinicians and key administrators, reflecting the practice’s use of IPIP as the cornerstone of a practice-wide launch into formal quality improvement work. Another notable expense was the staff time needed to create and repeatedly execute an electronic work-around that captured cholesterol values in a format its electronic health record could use to generate reports. Practice C had the highest per-clinician maintenance costs, reflecting the hours this solo practitioner spent each month abstracting data elements from paper charts and entering them into an electronic registry.
Bridges to Excellence
For the BTE Diabetes Physician Recognition Program (DPRP), estimated implementation costs incurred by the 2 practices were $4,270 and $8,658 overall, or $488 and $618 per-clinician FTE, respectively. Because of the nature of the BTE program, a maintenance phase arguably may not exist; once the data are collected and submitted, program engagement ends. Practice A, however, submitted data in one recognition cycle for some of its physicians and subsequently submitted data on its remaining physicians in another cycle. We therefore considered the second submission a maintenance phase. For that practice, estimated per-clinician FTE maintenance costs were approximately one-third those of implementation. The main source of variation between the 2 participating BTE practices involved practice A’s decision to perform an internal audit to double-check data accuracy before submission. Unfortunately, practice G was unable to complete the implementation phase because its quality improvement nurse’s laptop computer crashed, destroying data from 100 records that had taken the nurse nearly 80 hours to compile. Despite having paper backup for these data, the practice decided that the incentive to continue was not worth the effort required.
One practice that participated in the BTE DPRP program decided to also participate in the BTE Physician Practice Connection (PPC) program. Estimated costs of the PPC work were $11,294. This effort took approximately 85 hours of administrative staff time for data collection. Other costs were attributed to meeting times.
Although BTE staff assistance was available, neither study practice relied on it. The estimated incentive payment was considerable for practices with large numbers of patients enrolled in Blue Cross/Blue Shield. For practice A, the estimated DPRP incentive payment was $7,500 in year 1 and $12,000 in year 2, and the estimated PPC incentive payment was $65,000 in year 1 and $35,000 in year 2. Practice G, which did not successfully complete the program, received no incentive payment.
Community Care of North Carolina
The practice costs of participating in the CCNC quality program were relatively low compared with the other programs because most of the work consisted of annual chart audits performed by CCNC staff. Among the 6 participating practices, implementation costs ranged from $563 to $1,865 per practice, or $133 to $563 per-clinician FTE, and annualized maintenance costs ranged from $146 to $2,954 per practice, or $58 to $360 per-clinician FTE. On-site costs incurred by the program ranged from $261 to $1,266 for implementation and from $197 to $5,477 for maintenance. In qualitative interviews, the practices we studied reported feeling less involved in CCNC than in the other programs because their participation was largely passive. Nonetheless, staff from several practices noted how their experiences with CCNC raised awareness of care quality at the population level and motivated leadership staff to engage in other quality-monitoring programs.
The main source of variation in costs was the amount of support that CCNC network case managers provided to the study practices. For example, the much higher per-clinician FTE costs in practice G were attributed to having the regional CCNC case manager housed within the practice setting. This proximity led the case manager to function much like a quality improvement counselor, providing updates and training on CCNC clinical tools and guidelines to practice staff.
DISCUSSION
Mandatory reporting of quality data by primary care practices and payment based on quality indicators are developing rapidly as policy initiatives in the United States.5,13 In this article, we report and compare the costs incurred by 8 primary care practices participating in 4 quality-data–reporting programs in North Carolina. Across these practices and programs, the major expenses included planning, training, registry maintenance, visit coding, data gathering and entry, and modification of electronic systems. Considerable variability across practices was noted, underscoring the challenges of performing quality improvement work in primary care.
We found substantial variation among the 4 reporting programs in the way performance measurement data elements are defined, gathered, and transmitted, all of which affects cost. Much of the variation, however, occurred from practice to practice. Other important sources of variation included the amount of work shouldered by quality improvement program staff, the intensity of a program’s quality improvement focus, and the time required for quality improvement work beyond data collection and reporting.
The lack of interoperability among information technology systems was a major problem. It was not only a large component of participation costs, but also a major source of variation between practices participating in the same programs. Such expenditures were not solely attributable to program implementation, as several programs required continuous attention from information technology personnel to maintain electronic systems and assist with work-arounds. Even software packages that were more user-friendly and supported seamless data capture and transmission at the practice level often required long hours of staff time to manipulate the data into formats understandable to external data systems.
Small practices appeared especially hard hit by program participation costs. As can be noted from Table 3, practices C and D, both of which were single-physician practices, recorded the highest per-clinician costs for each of the 3 programs for which comparisons could be made. Reliance on expensive outside consultants and the use of clinician time to collect and report data items accounted for much of these costs. Although hiring someone else to do this work could have reduced the estimated cost, as is the case with many office innovations, much of this work was performed outside office hours and thus did not directly affect the practice’s cash flow. As with other outside-hours work, the cost was the loss of personal time.
We found that participating in multiple programs can be either a help or a hindrance. In some practices, participation in one program made implementation of another program easier and thus less costly. For example, worksheets created to assist staff with laboratory data extraction for 1 program could also be used in other programs. On the other hand, some practices participating in multiple programs were overburdened. These practices expressed concern about overall care quality deteriorating as a result of data management demands and an overemphasis on improving care for only certain diseases.
Our study has several limitations resulting from its design and exploratory nature. Although our practices were selected to be diverse, our findings cannot be generalized to other practices and programs. Furthermore, the participating practices were early adopters, and several had staff with quality improvement experience; practices starting such work from scratch may therefore have different cost experiences. Also, we attempted to capture only those costs that applied to work beyond baseline office expenses, but where to draw this line was not always clear. Another limitation is our predominant focus on direct cost. We were not able, for example, to measure potential indirect costs, such as increased staff turnover resulting from the added stress imposed by programmatic demands; nor did we attempt to identify the costs incurred by programs outside the practice setting. Because of the retrospective nature of our data collection, we relied on self-reporting to estimate the time required to complete quality improvement tasks. We recognize that recall and social desirability bias might affect our results; however, self-report has been noted to be as reliable as prospective methods, such as time-and-motion studies, for estimating health care personnel time use.14 Furthermore, other than estimating incentive payments, no attempt was made to calculate financial benefits, as conducting a formal cost-benefit analysis was beyond the scope of this project. Much additional research is therefore needed to gain a complete understanding of which programs and approaches are most costly, cost efficient, or beneficial overall. Despite these and other limitations, we believe that this formative work is important in a rapidly evolving quality movement environment. Although these results are not universally generalizable, the iterative work we performed in creating the cost categories should be applicable to other programs and practices. We hope future research can incorporate this methodological work into evaluations of larger numbers of practices and more formal cost-benefit analyses.
To date, the question of whether participation in quality reporting is worth the time, effort, and expense is largely unresolved. A recent systematic review concluded that little or no evidence existed to indicate that public reporting of patient care performance data for individual clinicians and practices stimulates quality improvement or is associated with improved patient safety or patient-centeredness.15 Another review of 37 separate incentive plans found “almost no emphasis on quality improvements in the payment arrangements” among those reviewed.13 Yet another review found no consistent association between use of electronic health records and the quality of ambulatory care as measured by 17 different quality indicators.17 In contrast, a large systematic review of health information technology effects on quality, efficiency, and costs in health care settings found some benefits to care quality in terms of adherence to guideline-based care, decreases in use of laboratory and radiology tests, and cost savings resulting from shorter hospitalizations associated with fewer medication errors. The bulk of this review’s information, however, came from 4 benchmark research institutions that created their own electronic systems over time, limiting the extension of these conclusions to primary care practices implementing commercially available products over relatively short time frames.18
One thing is clear: participation in quality-reporting programs requires resources that have measurable costs. These costs appear high, especially when compared with the modest reimbursement offered by many programs. Furthermore, if the performance measurement movement expands to include more measures, costs may increase further. Some costs will likely decline, however, as practices become more adept at population management and quality improvement; in addition, reporting may have benefits beyond payment, such as improved patient care and clinician satisfaction.
In light of the cost of performance measurement and reporting, programs seeking to engage primary care physicians should choose measures judiciously. Such a cautious approach is supported by a recent review of 62 ambulatory care measures, which concluded that only 20 were both evidence-based and cost-effective.19 Furthermore, because physician attitudes toward pay-for-performance are “fairly negative,”20 financial incentives that allow practices to at least recoup their costs would help improve physician acceptance. Finally, although financial incentives are certainly important, our experience suggests that other incentives, such as training in computer skills, assistance with electronic system challenges, and education of staff in office quality improvement, along with the hope of coincident improvements in patient care, are added motivators for participation in quality data and reporting programs.
Acknowledgments
This research study was conducted by the North Carolina Family Medicine Research Network (NC-FM-RN), a statewide practice-based research network whose mission is to carry out a program of ongoing research aimed at improving the health and primary medical care of persons with chronic illness. Administered by the Cecil G. Sheps Center for Health Services Research at the University of North Carolina at Chapel Hill, and codirected by Leigh Callahan, PhD, and Philip Sloane, MD, MPH, the NC-FM-RN seeks to involve diverse and representative samples of practices in its studies. To date, the NC-FM-RN has involved more than 40 practices and 8,000 individuals in research studies.
For this study, NC-FM-RN would like to thank the staff of the participating practices and quality data programs who generously gave their time and shared their experiences with our data collection team. In addition, we would like to thank Ann Lefebvre of the North Carolina Improving Performance in Practice (IPIP) project, who helped guide our sample selection and advised us on data collection, and David Lanier, MD, of the US Agency for Healthcare Research and Quality, who served as study project officer and provided valuable guidance in implementing the study and interpreting our results. Finally, we would like to acknowledge the individuals on our project advisory board, who participated in conference calls that helped shape the project: Jennifer Anderson, MHSA, DMP; W. Holt Anderson; Bruce Bagley, MD; Janet Corrigan, PhD, MBA; Michael Lieberman, MD, MS; Darlene Menszer, MD; Dan Pollard, PhD, MBA; Sarah Scholle, MPH, DrPH; W. James Stackhouse, MD, MACP; Marti Wolf, RN, MPH.
Footnotes
- Conflicts of interest: none reported
- Funding support: Funding was provided by task order contract No. HHSA290200710014, Agency for Healthcare Research and Quality, US Department of Health & Human Services.
- Received for publication February 7, 2009.
- Revision received July 22, 2009.
- Accepted for publication August 3, 2009.
- © 2009 Annals of Family Medicine, Inc.