Abstract
PURPOSE We aimed to assess the effectiveness of an intensive education program for physicians, compared with a traditional session, on communicating with breast cancer patients.
METHODS A randomized controlled trial was conducted in practices in London, Hamilton, and Toronto, Canada, with 17 family physicians, 16 surgeons, and 18 oncologists, and with 102 patients of the surgeons and oncologists. Doctors were randomized to 1 of 2 continuing education approaches: a traditional 2-hour version (control group), or a new 6-hour intensive version including exploration of the patients’ perspectives, videotape review, and feedback (intervention group). Communication behavior of the physicians was measured objectively both before and after the intervention. As well, 4 postintervention patient outcomes were measured, by design only for surgeons and oncologists: patient-centeredness of the visit, satisfaction, psychological distress, and feeling better.
RESULTS No significant differences were found on the communication score of the intervention vs the control physicians when controlling for preintervention communication scores. Intervention family physicians, however, had significantly higher communication subscores than control family physicians. Also, patients of the intervention surgeons and oncologists were significantly more satisfied (scores of 82.06 vs 77.78, P = .03) and felt better (88.2% vs 70.6%, P = .02) than patients of the control surgeons and oncologists when controlling for covariates and adjusting for clustering within doctor.
CONCLUSIONS The continuing medical education intervention was effective in terms of some but not all physician and patient outcomes.
INTRODUCTION
Patient-physician communication is of intense interest in our consumer age because major problems have been documented1,2 and unfavorable outcomes have been implicated, eg, patient dissatisfaction,3 lack of patient adherence,4 poorer self-reported health,5 reduced physician satisfaction,6 and malpractice claims.7 Whereas programs to improve communication are common in undergraduate medical programs and residency programs,8–10 continuing medical education programs are less common.11,12 Notable exceptions include studies that evaluate communication in terms of improved outcomes. Although research designs have evolved from non-randomized studies with self-report outcomes13,14 to randomized trials with communication behaviors measured objectively7,15 and with patient outcome measures,16 only slightly more than one-half of the studies were able to show an impact on patient health outcomes.5,17 The education programs have varied in length (from 4 hours to 3 days). Few studies incorporated varied teaching approaches (despite the finding that multiple interventions are more effective),18 and few were based on conceptual frameworks of good communication or on the expressed needs of patients.
We therefore designed a new continuing medical education (CME) program of feasible length (6 hours), using multiple approaches and based on expressed needs of patients and a recognized conceptual framework.19 We tested the hypothesis that the new CME would change verbal communication of surgeons, oncologists, and family physicians, and that it would also influence breast cancer patients’ perceptions of both the patient-physician interaction and their own health. We conducted a randomized controlled trial of 2 CME approaches: (1) a traditional 2-hour CME showing a videotaped consultation, which was then discussed; and (2) a new state-of-the-art 6-hour CME including the above plus 2 new elements: a discussion of the patients’ perspectives, and a videotape review with individual feedback.
METHODS
Participants
This study was approved by the Human Subjects Review Committee of The University of Western Ontario. We recruited 51 interested family physicians (n = 17), general surgeons (n = 16), and oncologists (n = 18) in Southern Ontario, Canada. Recruitment occurred through letters of information and personal telephone contact by the respective family physician, surgery, and oncology coinvestigators and was guided by the approach outlined in Borgiel et al.20 The 51 physicians were randomized to 1 of the 2 CME approaches, with each physician providing outcome data on communication. By design, only breast cancer patients of surgeons and oncologists provided patient-related outcome data, because family physicians and surgical residents care for so few eligible breast cancer patients at any time. Eligible patients were older than 18 years and within 1 year of the diagnosis of breast cancer or within 1 year of the diagnosis of a recurrence of breast cancer. Patients were excluded if they were too ill or disabled to answer the questions at the entry interview, unable to understand simple English instructions, or cognitively impaired in the opinion of the physician. Eligible patients (10 per doctor) were asked at the time of a visit to their surgeon or oncologist to participate, and consent was obtained to fill out a questionnaire immediately after the visit and mail it back.
Interventions
State-of-the-Art CME
The state-of-the-art CME program was developed on the basis of the qualitative findings from our previous study,21 our conceptual framework for patient-centered communication,22 the communication and CME literature, and the expertise of an educator (W.W.). The program incorporated the principles of adult education23,24 and experiential learning25–28 and contained 5 key elements: (1) literature—a description of the benefits of improved patient-physician communication for both patients and doctors; (2) physicians’ perspectives—participating physicians ventilated about barriers to and shared solutions for effective communication; (3) patients’ perspectives—first, a videotape of the findings of the qualitative study of breast cancer patients’ issues regarding communication, and second, breast cancer survivors in person talking about their own concerns; (4) video demonstration—a scripted “not-so-good” and “better” interaction between a breast cancer patient/actress and physician; and (5) practice with standardized patients and videotape review with feedback. The CME program was developed during an 18-month period that included formal pretests with evaluation29,30 and was facilitated by a communication educator and clinician. The CME program is outlined in Table 1⇓.
Traditional CME
The control group experienced a conventional CME session on communicating with breast cancer patients, which included a 2-hour small-group discussion triggered by a videotaped encounter between a physician and a breast cancer standardized patient.
Objectives
We hypothesized that, compared with the control group, the group of physicians attending the 6-hour intervention CME session would receive higher scores on an objective communication measure controlling for preprogram communication scores. We also hypothesized that breast cancer patients of the oncologists and surgeons would have higher scores on perceptions of patient-centered communication, be more satisfied with the physician’s information-giving and interpersonal skills, experience less psychological distress, and feel better after the visit with the doctor, after controlling for confounding variables and adjusting for clustering effects within doctors.
Outcomes
The objective Patient-Centred Communication Measure31 was adapted for visits regarding breast cancer. The original measure was used to code and score recorded verbal communication, was reliable (interrater agreements of 74% to 94% and intrarater correlation of 0.73) and valid (correlation with a global score, 0.85),31 and it had been used in 2 previous studies.32,33 The original measure was adapted by creating 2 subscores: 1 subscore on validation of the patients’ expressed experiences, and 1 subscore on explicit support expressed by the physician. Subscores were further regrouped into the 4 major themes identified in our qualitative study (building relationships, sharing information, creating an experience of control, and mastering the whole person experience).21 The total score and each subscore ranged from 0 to 100.
Also, 4 patient outcomes were collected through questionnaires: (1) patient perceptions of patient-centeredness were assessed by a valid 12-item questionnaire based on Henbest et al33,34; (2) the patients’ satisfaction with doctors’ information-giving and interpersonal skills was assessed by the valid and reliable 18-item Cancer Diagnostic Interview Scale (CDIS)35; (3) patients’ psychological distress was assessed by the 51-item Brief Symptom Inventory, which addresses 3 dimensions particularly relevant for breast cancer patients (anxiety, depression, and hostility) and correlates highly with the benchmark SCL-90 (Symptom Checklist)36; and (4) whether patients felt better after a visit to the doctor was assessed by a single validated item.37
Data Collection
Patient-Centered Communication Scores
Before the CME session, data were collected in the physicians’ offices by audiotaping visits with 2 announced standardized patients; the scored visits were averaged to yield 1 pre-CME score per physician. After the CME session, scores from the audiotapes of visits with 2 more announced standardized patients were averaged to create each physician’s post-CME communication score.
Four different case scenarios for standardized patients were developed for use in the pre-CME and post-CME visits. Each physician saw all 4 cases, which were randomly ordered for each physician so that there was no before-after bias in the level of difficulty. Appointments were arranged through the physician’s office staff during regular patient hours; a brief case history was provided, including mock biopsy, sonogram, and mammogram reports specific to each case scenario and designed to create an aura of authenticity.
Two well-trained raters coded and scored the recorded visits. The timing of the audiotape (pre-CME or post-CME) and the group allocation of the physician were concealed from the raters.
Patient Outcomes
Eligible, consecutive real patients completed questionnaires after their visit with their surgeon or oncologist and mailed them back within 1 month of the intervention.
Sample Size
To detect a clinically significant difference of 10 points (with standard deviations at 10.1) on the objective communication score with 80% power and α = .05 (2-sided), 32 doctors were the minimum required.38 To estimate the number of patients needed for the 3 continuous patient outcomes, standardized effect sizes of 0.6 were deemed adequate. Fifty-one patients per group were required to permit analysis adjusting for clustering of patients within doctor.
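The sample-size reasoning above can be checked with a standard power calculation. The sketch below, assuming the values stated in the text (10-point difference with SD 10.1 for doctors, standardized effect size 0.6 for patients, 80% power, 2-sided α = .05), uses statsmodels to reproduce the approximate group sizes; the extra patients beyond the power-based minimum allow for the clustering adjustment.

```python
# Sketch of the paper's sample-size reasoning; all input values are taken
# from the text, and statsmodels is assumed as the calculation tool.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Doctors: 10-point difference with SD 10.1 -> standardized effect ~0.99
d_doctors = 10 / 10.1
n_doctors = analysis.solve_power(effect_size=d_doctors, alpha=0.05,
                                 power=0.80, alternative="two-sided")
print(f"doctors per group: {n_doctors:.1f}")  # ~17 per group, ~34 in total

# Patients: a standardized effect size of 0.6 was deemed adequate
n_patients = analysis.solve_power(effect_size=0.6, alpha=0.05,
                                  power=0.80, alternative="two-sided")
print(f"patients per group: {n_patients:.1f}")  # ~45 before allowing for clustering
```

The patient requirement of 51 per group exceeds the unadjusted estimate because patients are clustered within doctors, which inflates the effective variance.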
Randomization
Randomization was done by the project coordinator. Physicians were recruited in blocks by specialty category and city. After the whole block of physicians had been recruited, the physicians were allocated using a random number table. Although the doctors and the teachers of the CME could not be masked, the audiotape coder, the standardized patients, and the real patients were masked to the doctors’ allocation.
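The blocked allocation described above can be illustrated in a few lines. This is a hedged sketch only: the study used a random number table, whereas the example below uses a seeded pseudorandom shuffle, and the physician names are hypothetical.

```python
import random

def allocate_block(physicians, seed=42):
    """Split one recruited block (same specialty and city) evenly between
    intervention and control. A sketch of the paper's approach; the
    original allocation used a random number table, not random.shuffle."""
    rng = random.Random(seed)
    shuffled = physicians[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

# Hypothetical block of 8 surgeons recruited in one city
block = [f"surgeon_{i}" for i in range(8)]
allocation = allocate_block(block)
print(allocation)
```

Allocating whole blocks only after recruitment is complete, as the study did, prevents recruiters from predicting the next assignment.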
Statistical Methods
We used ANCOVA to test for differences between the 2 groups on the objective communication measure controlling for the corresponding baseline objective communication score; the unit of analysis was the doctor. Mixed model linear regression was used to test for differences between the patients of the 2 groups of doctors on the 3 continuous patient outcome variables. The clustering of patients within doctor was adjusted for using SAS PROC MIXED. As well, to increase precision, 2 covariates were selected for adjustment on the basis of their potential to affect the outcome and the clinical significance of their differences between the intervention and control groups: patient education level (dichotomized at secondary school completion) and medical conditions (expressed as mean number). Mixed model logistic regression was used for the one dichotomous patient outcome (feeling better), adjusting for clustering and the same 2 covariates, using generalized estimating equations.39
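The clustering adjustment for the continuous patient outcomes can be sketched as a random-intercept mixed model. The example below is a minimal illustration with simulated data, not the study's SAS analysis: the data, effect sizes, and variable names are invented, and statsmodels' MixedLM stands in for PROC MIXED.

```python
# Illustrative random-intercept model: satisfaction regressed on group and
# the 2 covariates, with patients clustered within doctor. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_doctors, patients_per = 20, 5
n = n_doctors * patients_per

doctor = np.repeat(np.arange(n_doctors), patients_per)
group = np.repeat(rng.integers(0, 2, n_doctors), patients_per)  # 0 control, 1 intervention
educ = rng.integers(0, 2, n)            # dichotomized education level
conditions = rng.poisson(1.2, n)        # number of medical conditions
doctor_effect = np.repeat(rng.normal(0, 3, n_doctors), patients_per)
satisfaction = 78 + 4 * group + doctor_effect + rng.normal(0, 8, n)

df = pd.DataFrame(dict(doctor=doctor, group=group, educ=educ,
                       conditions=conditions, satisfaction=satisfaction))

# Random intercept per doctor plays the role the SAS clustering adjustment did
model = smf.mixedlm("satisfaction ~ group + educ + conditions",
                    df, groups=df["doctor"]).fit()
print(model.params["group"])  # adjusted intervention effect on satisfaction
```

The dichotomous outcome (feeling better) would instead use a GEE logistic model with the same doctor-level clustering, as the paper describes.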
RESULTS
Fifty-one physicians and 102 patients participated in the study. Figure 1⇓ shows the flow of participants: (1) family physicians, oncologists, and surgeons who were approached, entered into the trial, randomized, and completed the trial (approximately 40% of doctors approached agreed to participate, and all 51 who agreed completed both the intervention and the doctor measures); (2) patients of oncologists and surgeons who were approached and completed the questionnaire. By design, patient data were not sought for family physicians and surgical residents. The 23 surgeons and oncologists who collected patient data achieved a 44.3% patient response rate (as a result of a combination of doctors’ failure to distribute and patients’ failure to mail back the questionnaire). Response rates were higher among patients of oncologists (48.7%) than of surgeons (36.3%), but were almost evenly split between the intervention (46.4%) and control (42.5%) groups.
Baseline characteristics of the physicians are shown in Table 2⇓. There were no substantive or significant differences between intervention and control physicians. Patient characteristics were similar in the 2 groups with respect to marital status (52% married vs 48%), mean age (58.4 vs 59.5 years), mean scores on preference for information (7.6 vs 6.9), and involvement in decisions (2.7 vs 2.6). Differences were observed with respect to education (intervention group 54% with high school or less vs 46% in the control group) and mean number of medical conditions (1.1 vs 1.4).
The postintervention objective communication scores of physicians did not differ significantly overall between the intervention group and the control group (means of 72.21 [CI, 66.14–78.29] and 70.91 [CI, 63.71–78.11], respectively; P = .38) after controlling for baseline objective communication scores. Further exploratory analyses showed that none of the 7 subscores on the objective communication measure differed between the intervention group and control group for oncologists and surgeons; however, 4 of the 7 were significantly higher for the intervention family physicians than for the control group family physicians (Table 3⇓): (1) relationship building (offering support, 77.8% vs 22.2%); (2) information sharing (the physicians describing and the patients responding, score of 86.3 vs 66.3); (3) exploring whole person issues (score of 82.8 vs 58.7); and (4) validating whole person issues (score of 77.0 vs 49.1). Further inspection of the data showed that a floor effect could not be responsible for these differences in the family physicians relative to the other medical disciplines.
Of the 4 hypotheses regarding outcome variables measured on patients of surgeons and oncologists (Table 4⇓), 2 were supported (patient satisfaction and the patients feeling better). Table 5⇓ shows detail of the statistics for 1 of the 4 patient outcomes, patient satisfaction.
DISCUSSION
The trial found that the state-of-the-art CME did not improve overall objective communication scores but was related to patient satisfaction and feeling better. Also, it did improve family physicians’ objective communication subscores but not those of the surgeons and oncologists. These seemingly contradictory results lead one to speculate about a number of alternative explanations. One explanation may be the relative appropriateness of the different parts of the CME program for the different types of doctors. Perhaps the family doctors were more accepting of and responsive to the CME program because of the greater frequency of videotape-and-feedback training in residencies in family medicine as compared with residencies in surgery and oncology. Also, perhaps the part of the program that introduced the patients’ perspective (where real patients told their story, expressed their feelings, and explained their issues) was effective in raising the oncologists’ and surgeons’ consciousness, thereby altering their visits with patients in ways that patients noticed.
If the latter is the correct interpretation, it is worth describing more fully the part of the CME program that addressed the patients’ perspective. The doctors were prepared for the patients’ perspective by first being invited to express their own perspective, including perceived barriers and facilitators in communicating with breast cancer patients. Next, when the physicians were ready to turn to the patients’ perspective, they viewed a video of breast cancer survivors explaining the findings of our formal qualitative study. Finally, 2 breast cancer survivors came into the seminar room and told their stories briefly and answered questions. The reality of the patients’ palpable anxiety and fear was inescapable.
There is a second explanation for why the state-of-the-art CME improved family physicians’ communication scores but not surgeons’ and oncologists’ scores. Although the family physicians did not work in the same practice, the surgeons, surgical residents, and oncologists did, thereby opening the door to possible contamination and militating against finding differences between the state-of-the-art CME and the traditional CME group.
A third explanation for the contradictory finding that intervention group surgeons and oncologists did not change their behavior (on the objective communication measure) but their patients reported higher satisfaction and felt better is that the objective communication measure missed some crucial component of what was taught and learned. For example, this measure does not take into account nonverbal communication. A previous study by the authors suggested that patient perceptions, not the objective measure, correlated with patient health outcomes (including recovery from symptoms and SF-36 [Short Form] self-rated health), implying that patients discerned important dimensions of communication not captured by the objective measure.33 As well, the objective measure was developed in family medicine and, although it was adapted for this study of breast cancer patients, it might not be sensitive enough to behaviors of surgeons and oncologists.
A strength of the current study is that the objective communication measure was obtained both before and after the intervention. Communication measurement raises other issues. First, whereas audiotape studies of real patients typically use 10 patients per doctor, most studies of standardized patients analyze 1 patient per doctor,40,41 claiming that standardized patients reduce variability (of patient problem and doctor behavior). We attempted to improve reliability by using 2 patients, as did Epstein et al.42 Any possible misclassification will lead to a more conservative estimate. Second, if a Hawthorne effect occurred because the standardized patients were announced, it would be equal in the intervention and control groups, and thus not threaten internal validity; but it may limit the study’s generalizability to real-world patient visits. Evidence suggests the Hawthorne effect is almost negligible, however: correlates of communication scores change little when doctors know they are being studied compared with when they do not, as shown in Korsch et al’s seminal study43 comparing doctors who were audiotaped with those who were not, and by Epstein et al42 comparing detected and undetected standardized patients.
This study contributes to the growing body of data on the “dose-response” of communication education and indicates some impact of a shorter course than previously reported, ie, 6 hours in the current study vs 2.5 days7 and 3 days.16 Our program and these 2 longer programs included similar elements, such as physicians expressing their problems, and the video review with individual feedback. A unique element in our program was the patient perspective (videotaped findings of the qualitative study and breast cancer survivors in the seminar room).
The results of this study must be interpreted cautiously given that multiple tests were performed and some differences observed may have been due to chance. Even so, the robust magnitude of the differences weighs somewhat against this possibility. Other limitations include limited generalizability of the study (the sample of physicians was not randomly selected); fewer than one-half of the doctors approached agreed and completed the trial; and similarly, the patient sample was not representative in that only 44.3% of eligible patients completed the questionnaire. Finally, generalizability of the CME itself was limited because it was co-conducted by an experienced communication educator and highly motivated clinicians.
Breast cancer patients were more satisfied and felt better after visits with surgeons and oncologists who had participated in a 6-hour CME on communication as compared with patients of control group physicians. Despite this finding, the surgeons and oncologists did not change their communication behavior as reflected by the objective measure, although the family doctors did. These data suggest that the new intensive 6-hour CME is effective but with possibly different impact among different types of doctors.
Acknowledgments
We are grateful for the help of the participating doctors from community family practice in London Ontario and area; the Department of Surgery at The University of Western Ontario, London; the London Regional Cancer Centre; the Hamilton Regional Cancer Centre; and the Toronto-Sunnybrook Regional Cancer Centre. We thank Larry Stitt of the Biostatistical Support Unit; and the Standardized Patient Program, Schulich School of Medicine and Dentistry. We gratefully acknowledge the wisdom and contribution of breast cancer survivors who served on the Advisory Committee: Louisette Smith, Brenda McKelvey-Donner, Sharron Bearfoot, Anne Buchanan, Sandy Krueger, Barb Barton-McMillan, Veronica Dryden, Margie McPhillips, Barbara Garvin, Addie Gushue, and Katherine DeCaluwe.
Footnotes
- Conflicts of interest: none reported
- Funding support: This study was funded by the Canadian Breast Cancer Research Initiative of the National Cancer Institute of Canada. Dr. Stewart is funded by the Dr. Brian W. Gilbert Canada Research Chair. The setting of the study was the Thames Valley Family Practice Research Unit, a health systems-linked research unit funded by the Ministry of Health and Long-Term Care of Ontario.
- Disclaimer: The views expressed in this paper are those of the authors, and do not necessarily reflect those of the Ministry of Health and Long-Term Care of Ontario.
- Received for publication September 13, 2006.
- Revision received April 4, 2007.
- Accepted for publication April 9, 2007.
- © 2007 Annals of Family Medicine, Inc.