Abstract
BACKGROUND Most expert groups recommend shared decision making for prostate cancer screening. Most primary care physicians, however, routinely order a prostate-specific antigen (PSA) test with little or no discussion about whether the patient believes the potential benefits justify the risk of harm. We sought to assess whether educating primary care physicians and activating their patients to ask about prostate cancer screening had a synergistic effect on shared decision making, rates and types of discussions about prostate cancer screening, and the physician’s final recommendations.
METHODS Our study was a cluster randomized controlled trial among primary care physicians and their patients, comparing usual education (control) with physician education alone (MD-Ed) and with physician education plus patient activation (MD-Ed+A). Participants included 120 physicians in 5 group practices and 712 male patients aged 50 to 75 years. The interventions comprised a Web-based educational program for all intervention physicians and for MD-Ed+A patients; control participants received usual education (brochures from the Centers for Disease Control and Prevention). The primary outcome measure was patients’ reported postvisit shared decision making regarding prostate cancer screening; secondary measures included unannounced standardized patients’ reported shared decision making and the physician’s recommendation for prostate cancer screening.
RESULTS Patients’ ratings of shared decision making were moderate and did not differ between groups. MD-Ed+A patients reported that physicians had higher prostate cancer screening discussion rates (MD-Ed+A = 65%, MD-Ed = 41%, control = 38%; P <.01). Standardized patients reported that physicians seeing MD-Ed+A patients were more neutral during prostate cancer screening recommendations (MD-Ed+A = 50%, MD-Ed = 33%, control = 15%; P <.05). Of the male patients, 80% had had previous PSA tests.
CONCLUSIONS Although activating physicians and patients did not lead to significant changes in all aspects of physician attitudes and behaviors that we studied, interventions that involved physicians did have a large effect on their attitudes toward screening and on the discussions they had with patients: intervention physicians were more likely than control physicians to engage in prostate cancer screening discussions and more likely to be neutral in their final recommendations.
Keywords
- prostate
- shared decision making
- patient-physician relationship
- doctor-patient communication
- standardized patient
- medical uncertainty
- patient activation
- patient-centered care
- randomized controlled trial
INTRODUCTION
Two large randomized controlled trials (RCTs)1,2 yielded slightly conflicting results when screening average-risk men for prostate cancer with a prostate-specific antigen (PSA) test; one showed no benefit and the other showed only a very small decrease in mortality, and neither directly measured harms associated with prostate cancer screening. Most expert groups recommend shared decision making for prostate cancer screening.3 In shared decision making, the partnership between doctor and patient facilitates the latter’s understanding of pertinent medical information so as to enable him to weigh his values and preferences regarding various options and to engage in active decision making to the extent he feels comfortable.4
Although physicians feel that shared decision making is appropriate regarding prostate cancer screening,5,6 most primary care physicians routinely order a PSA test with little or no discussion about the patient’s belief as to whether the potential benefits justify the risk of harm.7–9 Insufficient understanding of epidemiological concepts or of the specific trade-offs associated with PSA screening, legal fears of deviating from standard practice, lack of time, difficulty eliciting an individual patient’s values and preferences, and a perceived inability to adequately inform patients about complex decisions are barriers to shared decision making for the decision to screen for prostate cancer.10,11
We developed interactive online tools, designed for both patients and clinicians, intended to facilitate shared decision making and address some of these barriers. Numerous previous decision aids for prostate cancer screening for patients have been reviewed,12–14 but there are few RCTs of educational interventions for both clinicians and patients.15 In the current RCT we tested whether educating primary care physicians and activating their patients to discuss prostate cancer screening, in comparison with both usual practice and physician education alone, had a synergistic effect on (1) perceived shared decision making, (2) rates of discussions about prostate cancer screening, and (3) final physician recommendations for prostate cancer screening made to standardized patients (trained actors presenting as real patients).
METHODS
Design Overview
We performed a multicenter cluster randomized controlled trial to assess the effect of an interactive Web-based educational program and activated patients on physician counseling about prostate cancer screening during outpatient primary care visits.
Setting and Participants
We recruited at 5 California health systems: 2 large primary care networks associated with an academic medical center, 2 staff model health maintenance organizations, and a medical group practice network. Local physician champions at each site recruited physicians in internal and family medicine during medical staff meetings and by telephone follow-up. Physicians consented to participate in educational activities and to help recruit patients by identifying all male patients aged 55 to 65 years who lacked serious comorbidity (including any known cancer) and spoke English. Study physicians invited eligible patients (whose names were not given to the research team) to participate in a “Men’s Health Decisions Study,” without reference to prostate cancer screening. Interested patients contacted the study coordinator (by telephone, e-mail, or returning a preaddressed postcard), after which those who wished to participate were given a consent form.
Physicians received a modest incentive ($200 for 3 hours total time), and clinics were compensated for lost revenue resulting from the 20-minute standardized patient visit. Patients received $10 compensation. We obtained institutional review board approval from all participating sites.
Randomization and Interventions
Randomization of waiting areas (cluster level) was stratified by health system and used a permuted blocks design. We chose a cluster randomized design because we assumed that physicians who share a common waiting area (3 to 8 physicians) would interact with each other, as might their patients, creating potential contamination. One author (D.J.T.) generated the random allocation sequence and concealed it throughout the study. A research assistant enrolled consenting physicians associated with a waiting area and notified the author when enrollment was complete, whereupon that waiting area was allocated according to the next entry in the allocation sequence. The study had 3 arms: usual practice (control) and 2 intervention arms (Figure 1). Physicians in both intervention arms participated in an interactive Web-based educational program. In one intervention arm physicians saw only the educational program (MD-Ed). The other intervention arm also included activated patients (MD-Ed+A), who viewed a different, but related, program that both provided information and encouraged them to participate actively in the decision to pursue prostate cancer screening. Within each health system, all consenting physicians who shared a common patient waiting area were randomly allocated to the same study arm.
With regard to blinding, patients and standardized patients were not aware of the multiple study arms or the arm to which their physician was assigned. The standardized patients were told that they were assessing standard differences in physician communication styles. Physicians were unaware of the identity of the standardized patients and of when their visits would occur.
Brochures on prostate cancer screening from the Centers for Disease Control and Prevention (the only materials provided for control patients) were available in the waiting areas of all enrolled practices. We developed 2, 30-minute interactive educational Web-based programs on prostate cancer screening, one for physicians and another for patients. Each program reviews the importance of prostate cancer in men’s health, limitations of PSA screening for prostate cancer, the risk trade-off inherent to the decision to do prostate cancer screening, and the central importance of each individual’s values and preferences. Both programs highlight visual risk comparison diagrams16,17 that use published cancer prevalence and outcome data.18–21 The physician program allows a user to adjust any of the underlying model assumptions and instantly view how that affects a given patient’s 10-year risk. We also sent laminated screen shots of essential diagrams to physicians in both intervention arms for use while counseling patients about likelihoods of harm and benefit around prostate cancer screening. The patient program includes video vignettes to depict the potential harms for 2 scenarios: (1) not having prostate cancer screening (a regretful patient dying of advanced prostate cancer), and (2) having prostate cancer screening with a false-positive result (a regretful patient with impotence from an ostensibly nontherapeutic prostatectomy). All physicians and patients in a particular waiting area (cluster) received the same intervention (control, MD-Ed, or MD-Ed+A).
Outcomes and Follow-up
Our primary outcome measure was patients’ perception of shared decision making, measured by summing 4, 4-point scales. We derived these from Kaplan’s validated shared decision-making instrument,22,23 modified for each participant type (patient, standardized patient, and physician) to be specific for prostate cancer screening. Immediately after their clinic visit, actual patients were mailed a questionnaire (available from the authors upon request) that assessed whether prostate cancer screening was discussed, attitudes and concerns around the screening, prior screening with PSA tests, and experience with prostate cancer, as well as encounter details, visit satisfaction, and global health status. MD-Ed+A patients also agreed to arrive an hour early for their next appointment and complete a 30-minute computer-based educational program. A research associate met all MD-Ed+A patients to help them use the educational program on a laptop computer but did not discuss program content. Physicians were not aware which patients were involved in the study or who completed the educational program, and none reported detecting these patients.
Patient ratings of physicians are subject to a number of limitations, including patient selection of the physician, length of relationship, patient’s clinical problems, and difficulty separating global ratings from specific domain-specific ratings. Standardized patients—persons trained to portray a specific patient case in a standardized fashion—represent a different (and in some ways perhaps more objective) means for assessment of physician communication. Standardized patients can be trained to rate physician-patient communications skills and behaviors reliably and are therefore used by medical schools and assessment teams, such as the US Medical Licensure Examination. Study physicians consented to an unannounced, audio-recorded standardized patient encounter at some point in the 12 months after the start of the study. Eight actors underwent 20 hours of training to portray a pleasant 62-year-old man, without serious comorbidities, who was new to the practice and had a friend who recently had prostate cancer diagnosed. To lend authenticity, the standardized patients’ complaint was a scripted distracter condition (weekend warrior shin splints). These patient-actors portrayed similar affect, curiosity, pace, and educational background, and they gave standardized responses to expected questions.
The standardized patients prompted the physician up to 2 times to discuss prostate cancer screening, and if the discussion occurred, asked toward the end of the visit: “What would you do if you were me?” Immediately after each visit, the standardized patients recorded secondary outcome measures on a postvisit form (available from the authors upon request) including the physician’s response to discussion prompts (no response/deferred discussion, lectured, or engaged in shared decision making about prostate cancer screening), engagement in specific shared decision-making behaviors, and the physician’s final recommendation about prostate cancer screening (in favor, against, neutral, or could not determine). All encounters between the standardized patient and physician were audio-recorded digitally and transcribed by a commercial transcription service.
All study physicians completed an online preparticipation survey questionnaire (available from the authors upon request), which included perception of shared decision making (adapted Kaplan instrument),22 prior personal and professional experience with prostate cancer, and attitudes and preferences about prostate cancer screening. An online postparticipation questionnaire reassessed the physicians’ perception of shared decision making, attitudes and behavior regarding PSA screening, and (for intervention physicians) a rating of our educational program. All physicians also received periodic questionnaires asking whether they had detected a standardized patient in the prior month.
Statistical Analysis
Based on preliminary data, we estimated that it would be important to detect mean differences of 0.5 standard deviations on the summed shared decision making scales for patients (scores ranging from 4 to 16) in pairwise comparisons of each intervention group with the control group, and that intracluster correlations (0.30 at the physician level and 0.03 at the waiting-area level) could result in a variance inflation factor as high as 3.5 to account for the randomization of waiting areas.24,25 Accordingly, we aimed for a target sample size of 576 patients from approximately 120 physicians and 30 waiting areas to achieve 80% power for detecting important pairwise differences using a 2-sided α of .05. The collection of patient postvisit assessments was delayed for up to 3 months to allow physicians in both interventions sufficient time to view the eDoc intervention and to allow 7 to 10 patients per MD-Ed+A physician to become activated. In the absence of any previous information to guide the choice of how many activated patients to use, we chose this target of 7 to 10 as one that seemed possibly achievable within the constraints of our study, but also large enough, conceptually, to have a measurable impact. This recruitment resulted in substantial imbalances among the study arms in the number of patient postvisit assessments collected.
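The sample-size reasoning above can be sketched with the standard normal-approximation formula for comparing two means, with the per-arm requirement inflated by the variance inflation factor (design effect) to account for cluster randomization. This is an illustrative reconstruction only, not the authors’ actual calculation; the function name and rounding are ours.

```python
from statistics import NormalDist

def n_per_arm(delta_sd: float, alpha: float = 0.05,
              power: float = 0.80, vif: float = 1.0) -> float:
    """Approximate per-arm sample size for a 2-sided, two-sample
    comparison of means expressed as a difference in standard
    deviation units, inflated by a design effect (vif)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ≈1.96 for a 2-sided α of .05
    z_beta = z.inv_cdf(power)           # ≈0.84 for 80% power
    n_simple = 2 * (z_alpha + z_beta) ** 2 / delta_sd ** 2
    return n_simple * vif

# Detecting a 0.5-SD difference under simple (individual) randomization:
print(round(n_per_arm(0.5)))           # ~63 patients per arm
# Inflated by a variance inflation factor as high as 3.5:
print(round(n_per_arm(0.5, vif=3.5)))  # ~220 patients per arm
```

The sketch shows how strongly the design effect dominates the calculation: clustering at the waiting-area level multiplies the per-arm requirement more than threefold relative to individual randomization.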
We used a multilevel modeling approach for all statistical analyses, enabling simultaneous estimation of intervention effects while appropriately controlling for design effects resulting from the stratified cluster randomized design. Generalized linear mixed models were used to account for the hierarchical structure of our data and to adjust for within-clinic and within-practice residual correlation of the nested outcomes. Adjusted between-arm differences in means and proportions were estimated using a linear link function for simplicity of interpretation. Because some physicians did not start the Web-based educational program, they and their patients were allocated to the control group for an as-treated analysis. Statistical analysis was conducted with SAS 9.2 (SAS Institute).
RESULTS
Participant Flow
Physicians were recruited between May 2007 and December 2008. Figure 2 summarizes enrollment and participant allocation into the 3 study arms. The intention-to-treat analysis included all 120 physicians and 712 of their patients. Eight physicians in the MD-Ed arm and 7 physicians in the MD-Ed+A arm did not start the Web-based educational program but were included in the intention-to-treat analysis. The timing of the standardized patient visit (which varied by study arm) determined the length of follow-up: standardized patient visits occurred about 6 weeks after the intake survey for control physicians, between 6 and 10 weeks for MD-Ed physicians, and between 6 and 16 weeks for MD-Ed+A physicians (available from the authors on request). Although we attempted to recruit and activate between 7 and 10 patients per MD-Ed+A physician, clinicians ultimately interacted with an average of only 2 to 3 activated patients (range 1 to 8), and 3, by mistake, saw the standardized patient before seeing any activated patients.
Participant Characteristics
Patient characteristics (assessed postvisit, Table 1) were similar among arms. Baseline physician characteristics (Table 2) were also similar. Eighty percent of patients (and 75% of male physicians) had previously undergone prostate cancer screening with a PSA test, most within the past 2 years. Patients expressed a strong preference to be directly involved in making important decisions about their health (score of 6.4 of 7, in which 7 indicates strongly agree), and were more concerned about developing urological morbidity (score of 5.6 of 7) than they were about having prostate cancer (score of 4.3 of 7).
Patient Visits
Patients reported a moderate to high level of shared decision making on the modified Kaplan scale (Table 3), which did not differ among study arms (MD-Ed vs control, adjusted mean difference = −0.29; 95% CI, −1.30 to 0.71; P ≥.05, not significant (NS); MD-Ed+A vs control, adjusted mean difference = 0.87; 95% CI, −0.17 to 1.90; P = NS). Residual intracluster correlations at the physician level and waiting-area level were estimated via restricted maximum likelihood to be 0.01 and 0.04, respectively, for the PSA-specific summed shared decision-making scale. Patients of MD-Ed+A physicians were substantially more likely to report having discussed prostate cancer screening (Table 3) during clinic visits (65%) than the patients of control physicians (38%; adjusted difference in proportions = 0.27; 95% CI, 0.14 to 0.40; P <.05). Compared with patients of control physicians, patients of MD-Ed+A physicians reported that they would be less worried by the knowledge that they had prostate cancer cells growing in their body (adjusted mean difference = −0.70; 95% CI, −1.10 to −0.30; P <.05) and would be less bothered by difficulty controlling their urine (adjusted mean difference = −0.49; 95% CI, −0.83 to −0.14; P <.05). Patient satisfaction was high in each group, and patients reported having a PSA test with similar frequency among groups (32% overall ordering rate).
Standardized Patient Visits
Fifteen of 101 physicians (15%) thought they detected a standardized patient, and in 14 cases they were accurate, typically because their practice was closed or because staff could not verify the standardized patient’s insurance. Although all physicians discussed prostate cancer screening after prompting, 9% responded minimally or rescheduled the discussion for a later visit. Most physicians (64%) lectured the standardized patient about prostate cancer screening, rather than engaging in a 2-way discussion (28%). In response to the question “What would you do if you were me?” 80% of the control physicians recommended PSA testing, compared with 59% of MD-Ed (P = NS) and 44% of MD-Ed+A physicians (P = NS). One-half of MD-Ed+A physicians were neutral in their recommendation about whether the standardized patient should obtain a PSA blood test, in comparison with 33% of MD-Ed physicians and 15% of control physicians (adjusted difference in proportions = 0.32; 95% CI, 0.10 to 0.54; P <.05; Table 4).
Physician Self-Report
Among physicians, there were no statistically significant differences between the control group and either of the 2 intervention groups with respect to mean pre- to postintervention changes in overall shared decision making. Physicians in the MD-Ed+A group reported greater changes in depth of discussions regarding the risks and benefits of PSA screening (adjusted mean difference = 0.46; 95% CI, 0.15 to 0.78; P <.05; Table 3).
As-Treated Analysis
Fifteen of the 77 physicians allocated to an intervention arm (19%) did not complete the educational intervention. The results from this analysis were similar to the intention-to-treat analysis.
DISCUSSION
Our educational intervention was aimed at improving communication (behaviors and attitudes) between patients and physicians around risk and uncertainty for prostate cancer screening after a brief 20- to 30-minute Web-based educational intervention. We found no differences in standardized patient–reported indices of shared decision making. We found, however, large differences in the proportion of physicians who discussed prostate cancer screening when interacting with activated patients. Even more striking, we found sustained differences in the attitudes of physicians when discussing prostate cancer screening—3 months after participating in the educational intervention—with a major movement from a pro-screening bias toward neutral counseling about prostate cancer screening.
This substantial increase in neutrality associated with patient activation is remarkable for several reasons. First, and most strikingly, this response occurred even though the exposure to activated patients was minimal: the total number of activated patients was small (about 6% of all eligible patients), and these few visits with an activated patient occurred up to 4 months before the standardized patient’s assessment. Second, patient activation occurred before any routinely scheduled physician visit and not necessarily one in which prostate cancer screening was likely to be discussed. Two-thirds of patients who viewed the educational intervention subsequently discussed prostate cancer screening with their primary care physician during that visit, although we do not know the extent of that discussion or the attention to shared decision making. Also, these findings occurred within a cohort of study patients almost all of whom had previously chosen to undergo prostate cancer screening with a PSA test. Although we had study patients interact with the educational intervention in their physicians’ office to assure that the intervention was actually done, there is no reason why use of this tool would have to be limited to that environment, as it was designed as a Web-based intervention and could easily be completed at home. Finally, this study was done at a time that study physicians were substantially predisposed to ordering PSA testing, which we believe makes the measured impact of our brief intervention even more impressive.
Changing physician practice behavior through conventional continuing medical education and publication of guidelines has been largely met with failure.26 Our study physicians’ knowledge about prostate cancer screening was generally high at baseline and did not change after 3 months, which may be explained by Bell et al’s finding that knowledge tends to drop to precurricular levels at 2 months without reinforcement.27 An educational intervention just for physicians may not be sufficient to optimize shared decision making, but it seems to be a necessary part of any successful strategy, which requires activation of both arms of the patient-clinician dyad.15,28
We are intrigued by our finding that activation of only a very few patients appeared to provide the necessary spark to ignite meaningful shared decision making. Physicians do value shared decision making in concept,5,6 and physicians’ behavior is highly sensitive to their perception of patients’ expectations.29,30 Just as physicians tend to overprescribe antibiotics when they believe that many patients want them, perhaps encountering even a few activated patients induced the perception that men in general want to participate in the decision to do prostate cancer screening and facilitated both more engagement in shared decision making and less willingness to tell the patient what to do in response to the question: “What would you do if you were me?” We suspect that the effect would likely be substantially greater after exposure to a more substantial number of activated patients, and that such exposure could also lead to considerable increases in shared decision making for a wide variety of clinical problems.
This study has important limitations. By design, the timing of patient postvisit assessments was different across the 3 study arms, and the analysis of some patient postvisit assessment outcomes was restricted to just patients who discussed PSA tests, so assessments were not strictly comparable across the 3 study arms. In addition, this study’s patient population tended to be well educated and affluent; although our patient education intervention was written for a 9th-grade comprehension level, these results may not generalize to other patient populations. The finding that shared decision making as subjectively perceived by patients did not vary among study arms is inconsistent with the improved physician behavior changes in the intervention arms, as objectively recorded by standardized patients. This finding is not entirely unexpected, however; many studies have shown that patients are generous in grading their satisfaction regarding interactions with their physician, and our metric may simply not have been sensitive enough to identify small differences within this context. In addition, we were unable to gauge the impact of a more substantial group of activated patients, as originally planned, because of difficulty arranging for such patients to arrive early enough before a scheduled visit to meet with a research associate and view the educational intervention on a laptop computer. A newer touch-screen tablet version of our patient intervention may be sufficiently self-explanatory and easy to use that implementation will not require meeting at an appointed time with research staff.
Our inability to obtain rates of PSA testing before and after our intervention resulted from disparate medical records systems at different study sites and forced us to rely on subject reports, which are influenced by recall bias and preferences. In addition, the timing of the evaluation (standardized patient visit and questionnaires) was, by design, delayed more for the MD-Ed+A group than for other groups to ensure that they had actually interacted with at least a certain number of activated patients by the time this evaluation was done. Although a delayed evaluation may have introduced some bias into our results, we believe it would have biased against the activated group, because a longer delay would be expected to diminish the impact of the educational intervention; this potential bias should, if anything, strengthen our findings. Finally, using unannounced standardized patients in evaluating practice patterns consumes time and resources and is often challenging to arrange—especially in closed practices. This evaluation technique, however, has the advantage of examining physician responses to standardized stimuli as they occur in routine practice with trained observers, with a level of detail and accuracy about practice behaviors not possible with patient surveys, physician self-report, or chart review.
Although this study failed to find an impact on our primary outcome of patient-perceived shared decision making, it does suggest an apparent dose effect up to 4 months after a brief Web-based educational intervention for physicians, and a companion intervention for patients, with regard to increasing the neutrality of physicians’ stated recommendations about prostate cancer screening. Coupled physician and patient education has the potential to improve appropriate utilization of medical services, especially in areas of medical uncertainty.
Acknowledgments
We would like to thank the following individuals for their outstanding contributions to our study: program staff Christine Harlan; research assistants Stacy Hayashi, Timothy Beer (University of California, Davis; UCD), Richard Maranon, and Jerilyn Higa (University of California, Los Angeles; UCLA); faculty site coordinators Steven Kelly-Reif (Kaiser Sacramento), Daniel Keatinge (Kaiser Southern California), Marion Leff (Sutter Sacramento), Phil Raimondi, and Debra Gage (UC Davis); technology development Paul Drummond, Dan Plummer, Andy Fanning (University of Newcastle-upon-Tyne, UK); standardized patients Richard Spencer, Dan Harlan, Steve Savage, Mike Kerrigan, and Patrick Murphy (Sacramento); John Livingstone, Henry Selvitelle, and Frank Elliott (Los Angeles).
Footnotes
- Conflicts of interest: authors report none.
- Disclaimer: The CDC was not involved in this study’s design, data collection, analysis, interpretation, or decision to approve this manuscript for publication.
- Funding support: This study was funded by grant 1 RO1 PH000019-01 from the Centers for Disease Control and Prevention (CDC).
- Received for publication April 24, 2012.
- Revision received January 7, 2013.
- Accepted for publication February 4, 2013.
- © 2013 Annals of Family Medicine, Inc.