Primary care clinicians face a daunting set of challenges in today's health care environment. Health care reform is creating a constantly evolving context for providing care. Time and productivity pressures collide with a growing set of clinical expectations and responsibilities in the patient encounter. Chronic and behavioral conditions increasingly dominate the lives of our patients, yet we have a limited set of tools for making consistent and meaningful improvement in the trajectory of these diseases. At the same time, patients can find it difficult to adopt truly effective preventive or early intervention tools. Underlying all of these issues is the role of socioeconomic and community factors in the conditions that we see.
In response, primary care researchers have sought and tested new ideas by which clinicians can meet these challenges. In so doing, these researchers have established a strong record of innovation—and of respect for persons and communities. Examples include the use of multimethod research to understand processes in primary care, the use of practice-based research networks for testing interventions in real-world settings, and the use of complexity science concepts in understanding care delivery.1–3 Primary care researchers have also been leaders in advocating the importance of respectful partnerships with communities in the search for new solutions to the challenges we face.4
The article by Shaw et al in this issue of the Annals extends this record of innovation and respectfulness among family medicine researchers.5 This research team designed a state-of-the-art, cluster randomized trial of a strategy to increase the proportion of patients in practices who had received up-to-date screening for colorectal cancer. Conducted in a practice-based research network, the intervention included multimethod practice assessment, followed by within-practice teams that engaged in reflective adaptive processes to strategize improvements in screening, and by cross-practice learning collaboratives. The analysis showed a nonsignificant trend toward a greater net increase in screening rates in intervention than in control practices. Although the overall rate of screening was the primary outcome in this trial, in which individual practices were the unit of study (whole practices received either the intervention or the control condition), this summary analysis appeared to obscure important effect modification. That is, some practices in the intervention group got much better while, surprisingly, at least 1 intervention practice's overall screening rate got much worse.
There are several possible explanations for this surprising finding. It could represent concurrent, unrelated changes in the practice, such as staff turnover. It could represent normal statistical variation: some practices randomly improve while others randomly worsen. It could reflect a delayed response to changes in screening approaches in the practice. As Shaw et al note, however, the disturbing possibility is that the worsened screening rates could represent a direct, unanticipated effect of the study intervention itself. Some elements of the intervention may have created a new dynamic in the practice, or amplified dynamics already present, that was neither expected nor understood and that detrimentally affected the practice.
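To make the statistical-variation explanation concrete, the following minimal sketch (hypothetical numbers throughout; not data from the Shaw et al trial) simulates practice-level screening rates under a true average improvement and shows that chance alone can still make some practices appear to worsen.

```python
import random

# Hypothetical illustration: even when an intervention raises the average
# screening rate, sampling variation alone can make individual practices
# appear to worsen. All parameters below are assumptions for illustration.
random.seed(1)

N_PRACTICES = 15         # number of intervention practices (assumed)
PATIENTS_PER_SITE = 120  # patients sampled per practice (assumed)
BASELINE_RATE = 0.45     # true pre-intervention screening probability
TRUE_EFFECT = 0.05       # true average improvement from the intervention

def observed_rate(p, n):
    """Proportion screened in a random sample of n patients."""
    return sum(random.random() < p for _ in range(n)) / n

worsened = 0
for _ in range(N_PRACTICES):
    before = observed_rate(BASELINE_RATE, PATIENTS_PER_SITE)
    after = observed_rate(BASELINE_RATE + TRUE_EFFECT, PATIENTS_PER_SITE)
    if after < before:
        worsened += 1

print(f"{worsened} of {N_PRACTICES} practices appear to worsen by chance alone")
```

A run of this sketch typically shows at least 1 or 2 practices with apparently lower rates despite a uniformly positive true effect, which is why an observed worsening in a single practice cannot, by itself, distinguish harm from noise.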
Even though there is no definitive way to determine which of these possible explanations is responsible for the observed change, Shaw et al have applied groundbreaking innovation to this type of research. In going beyond the overall summary analysis to report notable practice variation in outcomes, this group has focused on a largely overlooked element of cluster randomized trials: group-based interventions may have varying, sometimes deleterious effects on some groups, just as individual-based interventions can on individuals. Further important innovation comes from this team's use of qualitative methods to explore possible explanations for what may be wide variation in the effects of the intervention.
Research in which groups, such as practices or communities, are the object of study traditionally compares the overall results of an intervention with results obtained under usual care, often for an aspect of care considered below standard. The expectation is that the intervention will lead either to improved outcomes in the groups or, if the intervention is ineffective, to outcomes equal to standard care. The possibility that the intervention may in fact worsen outcomes, or conversely markedly improve them, in some of the groups has not been adequately appreciated and reported. In this regard, groups under study are similar to individuals: individuals often vary in their responses to experimental interventions.
Merton’s seminal article, “The Unanticipated Consequences of Purposive Social Action,” now 75 years old, was the first to highlight what is commonly referred to as the “law of unintended consequences.”6 This observation holds that well-planned and well-intentioned actions frequently have completely unexpected, important, and often adverse outcomes. Generally, these outcomes are believed to occur through inability to anticipate all effects of an action in a complex system, through incorrect analysis of a situation, through conflict between short- and long-term goals, or through conflict between values.6 More recent complexity science concepts would similarly predict that the nature of complex systems, such as practices or communities, makes it difficult to predict the results of interventions on the group.
Unintended consequences have long been recognized in public health (eg, the relationship between building the Aswan High Dam and subsequent increased regional human schistosomiasis7). Recent publications in the primary care literature have identified possible examples of unintended consequences of well-planned and well-intentioned interventions. A cluster randomized trial of a practice-based intervention aimed at reducing problem alcohol use showed lower rates of reduction in intervention than in control practices, the opposite of what was expected.8 Another complex study examining a variety of interventions with several outcomes, including increased exercise, showed significantly reduced exercise levels in an intervention network.9 It is important to recall, however, that in none of these studies is a cause-and-effect relationship established between the intervention and the unexpected outcome, and that other factors, such as random variation or unrelated changes occurring in the groups, could explain the findings.
Regardless of the true cause of these observed effects on groups, an argument can be made that we should monitor, report, and seek to explain these effects in the same manner as we do the effects of research activities on individuals. Comparable monitoring has not been the standard in group-based research, such as cluster randomized trials. Nevertheless, when a group is adversely affected by a study intervention, it suggests that individual members of the group may experience adverse effects as well.
Weijer et al, in a series of articles on the ethics of cluster randomized trials, argue that group-based trials should have data-monitoring boards, just as individual-based trials do, to assure the maintenance of clinical equipoise throughout the study and fulfillment of the Belmont principle of beneficence.10–12 Although the ethics of cluster randomized trials are only now beginning to be considered, further work is needed to understand the potential role of data-monitoring boards in group-based intervention research.
We wholeheartedly commend Shaw et al for their innovative reporting of important variation and unexpected outcomes, together with their use of a multimethod approach to describe the context of the practices in which variations occurred. Even though this added information does not definitively explain the cause of important variations, its description is an important step toward future understanding of those causes. Furthermore, it reflects the respectfulness of primary care researchers toward the impact of research on people, whether as individuals or as members of a group.
The editorial team of Annals considers this innovation important enough that we encourage others using group-based research designs to consider adopting similar practices: examining and reporting important variations within groups, whether greater or less than expected, and using multimethod designs to seek and report explanations for those variations. We use the term important to describe variations that are of sufficient degree or impact to warrant further examination. We also encourage methodological innovation to help define what constitutes important variation and how to distinguish it from random variation. Such designs and protections of group members are an important next step in addressing some of the complexities of primary care.
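As one illustration of what such methodological innovation might look like, the sketch below (hypothetical counts throughout; not the analysis used by Shaw et al) applies a simple Monte Carlo check of whether the spread of practice-level changes exceeds what sampling variation alone would produce under a null of no real change.

```python
import random
import statistics

# Sketch: is the spread of practice-level changes larger than chance alone
# would produce? All data below are invented for illustration; a real
# analysis would use the trial's data and a prespecified monitoring plan.
random.seed(2)

# (patients, screened_before, screened_after) per practice -- hypothetical
practices = [(110, 48, 65), (95, 40, 44), (130, 61, 59), (80, 30, 21), (120, 52, 71)]

def simulated_change(n, total_screened):
    """Resample before/after counts under a common rate (no true change)."""
    p = total_screened / (2 * n)
    before = sum(random.random() < p for _ in range(n))
    after = sum(random.random() < p for _ in range(n))
    return (after - before) / n

# Observed between-practice spread of the change in screening proportion.
observed_spread = statistics.pstdev((a - b) / n for n, b, a in practices)

# Null distribution: how often does chance alone match the observed spread?
SIMS = 2000
exceed = 0
for _ in range(SIMS):
    spread = statistics.pstdev(simulated_change(n, b + a) for n, b, a in practices)
    if spread >= observed_spread:
        exceed += 1

print(f"observed spread of practice-level changes: {observed_spread:.3f}")
print(f"Monte Carlo p-value for excess variation: {exceed / SIMS:.3f}")
```

A small p-value in such a check would flag variation worth qualitative follow-up; a large one would suggest the apparent winners and losers are consistent with chance. This is one possible operationalization among many, not a standard.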
Acknowledgments
The following members of the Annals editorial team participated in developing this editorial: Kurt Stange, MD, PhD; William R. Phillips, MD, MPH; Louise S. Acheson, MD; Bijal Balasubramanian, MBBS, PhD; Elizabeth A. Bayliss, MD, MSPH; Robert L. Ferrer, MD, MPH; James M. Gill, MD, MPH.