
Use of the 9-item Shared Decision Making Questionnaire (SDM-Q-9 and SDM-Q-Doc) in intervention studies—A systematic review

  • Hanna Doherr,

    Affiliation Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Eva Christalle,

    Affiliation Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Levente Kriston,

    Affiliation Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Martin Härter,

    Affiliation Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Isabelle Scholl

    i.scholl@uke.de

    Affiliations Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany, The Dartmouth Institute for Health Policy and Clinical Practice, Dartmouth College, Lebanon, New Hampshire, United States of America

Abstract

Background

The Shared Decision Making Questionnaire (SDM-Q-9 and SDM-Q-Doc) is a 9-item measure of the decisional process in medical encounters from both patients’ and physicians’ perspectives. It has good acceptance, feasibility, and reliability. This systematic review aimed to 1) evaluate the use of the SDM-Q-9 and SDM-Q-Doc in intervention studies on shared decision making (SDM) in clinical settings, 2) describe how the SDM-Q-9 and SDM-Q-Doc performed regarding sensitivity to change, and 3) assess the methodological quality of studies and study protocols that use the measure.

Methods

We conducted a systematic review of studies published between 2010 and October 2015 that evaluated interventions to facilitate SDM. The search strategy comprised three databases (EMBASE, PsycINFO, and Medline), reference tracking, citation tracking, and personal knowledge. Two independent reviewers screened titles and abstracts as well as full texts of potentially relevant records. We extracted the data using a pilot-tested sheet, and we assessed the methodological quality of included studies using the Quality Assessment Tools of the U.S. National Institutes of Health (NIH).

Results

Five completed studies and six study protocols fulfilled the inclusion criteria. The measure was used in a variety of health care settings, mainly in Europe, to evaluate several types of interventions. The reported mean sum scores ranged from 42 to 75 on a scale from 0 to 100. In four studies, no significant difference in mean scores was detected between the main comparison groups; in the fifth study, the difference was small. Quality assessment revealed a high risk of bias in four of the five completed studies, while the study protocols received moderate quality ratings.

Conclusions

We found a wide range of areas in which the SDM-Q-9 and SDM-Q-Doc were applied. In the future this review may help researchers decide whether the measure fits their purposes. Furthermore, the review revealed risk of bias in previous trials that used the measure, and may help future trials decrease this risk. More research on the measure’s sensitivity to change is strongly suggested.

Introduction

Shared decision making (SDM) is promoted in many health care systems and is gaining importance internationally [1–3]. Reasons for these changes include patients’ expanding knowledge of diseases and treatments through media, increasing numbers of available treatment options, and patients’ and physicians’ preferences for more active patient involvement [4–8]. SDM involves at least one patient and one health care provider (HCP). Both parties take steps to actively participate in the process of decision making, share information and personal values, and together arrive at a treatment decision with shared responsibility.

SDM is indicated if there are multiple possible treatments and the alternatives have different and uncertain outcomes, as is the case in most chronic diseases [9–12], or if the treatment outcome is considered subjectively important [13–15]. SDM can help patients and HCPs reach treatment agreement in long-term decisions [9, 14]. Greater patient involvement in treatment decisions is associated with less decisional conflict, which can be viewed as a moderator for patient satisfaction [16]. SDM is associated with feelings of autonomy, control, and individual competence [17]. Still, more research is needed on the general effects of SDM [18]. Interventions to facilitate SDM are becoming increasingly important, and their results need to be assessed and measured.

Measurements for SDM can be categorised by decision antecedents (e.g., role preference), the decision process (e.g., observed or perceived behaviour of the clinician), or decision outcomes (e.g., decisional conflict, decisional regret, satisfaction) [16]. The SDM process can be assessed by an external observer, the patient, or the physician; a complete overview is given in a 2010 review [19]. The OPTION ("observing patient involvement") scale is the most prominent instrument for assessing the extent to which clinicians actively involve patients in decision-making [20]. Due to several shortcomings, this scale was recently revised to a short form that assesses the SDM process from an observer’s perspective in just five items [21]. Furthermore, several measures exist to assess the patient’s perspective. Among the most well known are the Perceived Involvement in Care Scale (PICS) [22] and the recently developed CollaboRATE measure [23]. Although SDM is conceptualized as a process involving both the health care provider and the patient, only a few scales are available that assess SDM from both the patient’s and the physician’s points of view: the dyadic OPTION scale [24], the MAPPIN’SDM measure [25], and the 9-item Shared Decision Making Questionnaire (SDM-Q-9), published in 2010 [11]. Of the three measures, the SDM-Q-9 is used increasingly often to assess interventions aiming to improve SDM, likely because of its psychometric testing, acceptance, and feasibility of administration with only nine items [19].

The SDM-Q-9 is a patient-reported measure that focuses on the decisional process by rating physicians’ and patients’ behaviour in medical encounters. It was developed as a revision of the original Shared Decision Making Questionnaire (2006) [11]. The research team (including several of the authors of this manuscript, i.e., LK, MH, and IS) [11] generated a new core set of items based on the model by Elwyn et al. (2000) [26], from which nine items were selected via statistical analysis. The measured construct was found to be largely unidimensional. The answering scale was adjusted from 4-point to 6-point ratings with the extremes “completely disagree” and “completely agree” to counter high ceiling effects [11]. The SDM-Q-9 showed good internal consistency (α = .94) and high face and structural validity in its first psychometric testing in a large (N = 2,351) primary care sample [11].

The same core research team created the physician version of the SDM-Q-9, the SDM-Q-Doc, which measures the same aspects of SDM, but from the physician’s perspective [27]. They maintained similar wording and used the same 6-point Likert scale as response format. Psychometric testing showed a high level of acceptance, with a 93% completion rate for all items. Item difficulties ranged from 3.52 to 4.34 on a scale from 0 to 5. The scale showed good internal consistency (α = .88) and a good model fit in a confirmatory factor analysis [27]. With the quick and easy-to-answer SDM-Q-9 and SDM-Q-Doc, a dyadic (bi-perspective) measurement of SDM became possible [27].
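To illustrate the practical handling of the measure, the following minimal sketch shows how an SDM-Q-9 (or SDM-Q-Doc) sum score can be computed, assuming the commonly reported scoring rule of linearly rescaling the raw sum of the nine 0–5 item ratings to a 0–100 score; item wording and the handling of missing responses follow the original publications [11, 27] and are not reproduced here.

# Minimal illustration of SDM-Q-9/SDM-Q-Doc scoring, assuming the commonly
# described rule: each of the nine items is rated 0-5, and the raw sum (0-45)
# is linearly rescaled to a 0-100 score.
def sdmq9_score(item_ratings):
    """Return the 0-100 sum score from nine item ratings (each 0-5)."""
    if len(item_ratings) != 9:
        raise ValueError("The SDM-Q-9 consists of exactly nine items.")
    if any(not (0 <= r <= 5) for r in item_ratings):
        raise ValueError("Each item is rated on a 6-point scale from 0 to 5.")
    raw_sum = sum(item_ratings)   # possible range 0-45
    return raw_sum * 100 / 45     # linear rescaling to 0-100

print(sdmq9_score([4] * 9))  # a respondent answering "4" on every item scores 80.0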

The SDM-Q-9 was translated into English [11, 27], allowing for use in international research. The English version was tested in a stratified primary care sample (N = 488) in the U.S. and confirmed a unidimensional structure and high internal consistency [19]. Further psychometric testing of the English version in a representative sample of the US population (N = 1,341) revealed discriminative validity of the SDM-Q-9, which had not been tested before [23]. A range of further translations has been conducted (see www.sdmq9.org), and several of them have undergone psychometric testing. In a Dutch psychometric study, both the SDM-Q-9 (sample of N = 182 outpatients) and the SDM-Q-Doc (sample of 43 primary care physicians and specialists rating N = 201 consultations) showed good reliability and convergent validity [28]. Factor analysis showed difficulties with integrating item 1 (“My doctor made clear that a decision needs to be made”) into the one-component model found by the original authors [28]. Psychometric testing of the Spanish version [29] in a sample of primary care patients with chronic conditions (N = 540) also yielded good reliability, while indicating that the best model fit was achieved when excluding item 1, which is consistent with the Dutch results. Furthermore, testing of the Persian version of the SDM-Q-Doc showed good reliability in a sample of hospital doctors [30]. Finally, a recent psychometric test of the Hebrew version in a sample of mental health patients (N = 101) showed good reliability, convergent validity, a one-factorial structure, and sensitivity to change [31]. While results consistently show good reliability, as well as good evidence for convergent validity, findings regarding the factorial structure are mixed with respect to item 1. Furthermore, initial studies indicating discriminative validity [23] and sensitivity to change [31] need to be confirmed by further studies. The availability of the measure in multiple languages, together with a relatively large amount of psychometric testing, broadens the possibilities of its use in different health care systems and may allow for examination of cross-country effects in the near future. So far, no systematic review provides an overview of the use of the 9-item Shared Decision Making Questionnaire in intervention studies.

The aims of this systematic review were to 1) evaluate the use of the SDM-Q-9 and SDM-Q-Doc in intervention studies on SDM in clinical settings, 2) describe how the SDM-Q-9 and SDM-Q-Doc performed regarding sensitivity to change, and 3) investigate the methodological quality of studies and study protocols using the measure.

Methods

Before starting the systematic review, the authors drafted a protocol for their own use. The protocol was not registered or published; its content is equivalent to the methods described in this paper. The PRISMA checklist for the review can be found in S8 Table.

Search strategy

We performed an electronic literature search in the databases EMBASE, PsycINFO, and Medline. We included all articles published between January 2010, the year in which the 9-item version of the Shared Decision Making Questionnaire (SDM-Q-9) [11] was published, and October 13, 2015. We devised a search strategy for this primary search encompassing all possible variations of the name of the measure; the detailed lists of keywords can be found in S1 Appendix. Eligibility criteria are displayed in Table 1. We performed a secondary search via the Web of Knowledge and Google Scholar, including citation tracking of the original articles on the SDM-Q-9 and SDM-Q-Doc [11, 27] as well as of articles on the validation of other language versions of the questionnaire [28, 29]. We performed additional reference tracking on reviews of SDM intervention studies [32–34]. Furthermore, we contacted researchers known to be working with the measure (based on requests from the developers) to ask if they had published work using either instrument. Finally, we sent an open request for studies using the SDM-Q-9 and/or SDM-Q-Doc to a social media SDM interest group.

Study selection

We imported all identified records into reference management software. After removal of duplicates, HD and IS independently screened titles and abstracts to check records for potential inclusion. A record was included in the next step if at least one reviewer deemed it appropriate. The full texts of the potentially relevant records were assessed independently for eligibility by HD and IS. In case of disagreement, it was planned to discuss the respective full text with a third reviewer; however, no disagreement occurred during full-text screening.

Data extraction

Preliminary data extraction sheets were developed by HD, discussed with IS, and pilot-tested by HD. HD extracted descriptive data of the included studies and protocols, e.g., study aims, study designs, health care settings, samples, evaluated interventions, statistical analyses, results, and interpretations. For the complete data extraction sheets, please see S6 Table and S7 Table. The final data extraction was conducted by one reviewer (HD) for two reasons: a) pilot testing revealed that this strategy was feasible, and b) the review team faced limited resources for data extraction.

Considering the substantial clinical and methodological heterogeneity of the included studies, we judged that they estimated the same parameter of interest only broadly rather than specifically. This implies that a meta-analytic effect estimate would likely be prone to numerous sources of bias. We therefore decided that, under these circumstances, a narrative-qualitative summary was more appropriate than a meta-analysis [35].

Quality assessment

Study quality was assessed using the Quality Assessment Tools developed by the Risk Assessment Work Group (2013) of the U.S. Department of Health and Human Services, National Institutes of Health (NIH) [36]. These tools were constructed to assess the internal validity of a trial, i.e., the extent to which the reported effects can truly be attributed to the intervention, and to identify potential flaws in methodology or implementation. The reviewer can select from the response options “yes”, “no”, or “cannot determine (CD)/not reported (NR)/not applicable (NA)”. Studies are judged to be of “good”, “fair”, or “poor” quality. In the present review, the tools for before-after (pre-post) studies with no control group, controlled intervention studies, and observational cohort and cross-sectional studies were used for independent quality appraisal by HD and EC. Differences in ratings were resolved by discussion until agreement was reached.

After rating one study and one study protocol, it became apparent that the wording of the tools needed to be slightly adapted for the rating of study protocols (e.g., from past tense to future tense). Three criteria of the assessment tool for controlled intervention studies were left out in the rating of study protocols, as they were inapplicable to protocols (e.g., drop-out rates). Likewise, it became evident that the tool for controlled intervention studies was not sufficient for the quality assessment of cluster randomised controlled trials (cRCTs), as it was developed for individually randomised controlled trials (RCTs). We adapted the tool for cluster randomisation by adding five items, based on literature on the methodology of cRCTs [37–43] (see S1 Table).

Additionally, since blinding of HCPs is seldom feasible in cRCTs, item 4, which assesses the blinding of participants and HCPs, was divided into two items: 4a) participants and 4b) HCPs. As this review focuses on the SDM-Q-9 and SDM-Q-Doc, item 5, which considers whether the researchers assessing the outcomes are blinded to the participants’ group assignments, was changed to ascertain whether the patients or HCPs filling in the SDM-Q-9 and/or SDM-Q-Doc were blinded to the treatment group assignments. Finally, we left out item 11, which was not applicable to the aims of this review. See S1 to S5 Tables for the final items.

All changes were pilot tested independently by HD and EC. Differing judgments were resolved by discussion.

Results

Literature search and study selection

After removal of duplicates, 184 records underwent title and abstract screening, which led to the exclusion of 104 records. The full texts of the remaining 80 records were assessed for eligibility. A total of 69 records were excluded after applying the inclusion and exclusion criteria (see Table 1). As a result, we included 6 study protocols and 5 original studies in this review, for a total of 11 records. As shown in Table 1, most of the records were excluded because they did not use the SDM-Q-9 and/or SDM-Q-Doc in their study (N = 52). An overview of the procedure is given in the flow diagram, Fig 1.
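For transparency, the selection counts reported above can be checked with simple arithmetic; the following lines merely restate the reported flow and are an illustration, not part of the original analysis.

# Simple arithmetic restatement of the study-selection flow reported above.
records_after_duplicate_removal = 184
excluded_at_title_abstract = 104
full_texts_assessed = records_after_duplicate_removal - excluded_at_title_abstract  # 80
excluded_at_full_text = 69
included_records = full_texts_assessed - excluded_at_full_text                      # 11
assert included_records == 6 + 5  # 6 study protocols + 5 original studies
print(full_texts_assessed, included_records)  # 80 11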

Description of included original studies

The characteristics of the original studies are displayed in Table 2 and Table 3. Three of the five included studies were cRCTs [44–46]. All but one study [47] were conducted in Germany. The studies covered different settings and different decisional contexts. All studies had at least two measurement time points. Two of the five studies used both measures, SDM-Q-9 and SDM-Q-Doc [45, 47], and two studies [44, 45] reported adapting the questionnaire for all HCPs. Three of the five studies reported applying the measure directly after the clinician-patient consultation [45–47]. While one study evaluated an intervention targeting both patients and physicians (decision aid and training) [47], four studies evaluated training programs for HCPs only. The sample sizes ranged from N = 51 to N = 2,188 patients, and mean ages ranged from 42.8 to 65.0 years. The highest percentage of women per group was 80% [47] and the lowest was 33% [45]. The HCP samples were described in less detail; the studies by Körner et al. reported on age and gender [44, 45]. The reported mean sum scores of the SDM-Q-9 and SDM-Q-Doc ranged from 42 to 75 on a scale from 0 to 100. Three studies did not find a significant intervention effect and concluded that the investigated interventions were ineffective [46–48]. Körner et al. 2012 found no overall intervention effect, but subgroup analyses revealed the highest effects for female HCPs and for nurses [44]. Körner et al. 2014 found a small intervention effect for staff, which was also highest for nurses [45]. For the complete data extraction sheet of the original studies, please see S6 Table.

Table 2. Characteristics of the included original studies.

https://doi.org/10.1371/journal.pone.0173904.t002

Table 3. Characteristics of the included original studies (continued).

https://doi.org/10.1371/journal.pone.0173904.t003

Description of included study protocols

The included study protocols are described in Table 4 and Table 5. Four of the six protocols described cRCTs [49–52]. Three studies are planned to be conducted in Germany [50–52]. The studies will be conducted in various health care settings. Two of the six studies will use both the SDM-Q-9 and SDM-Q-Doc [49, 53]. There will be one adaptation of the instrument for a patient’s companion [54] and one for an observer’s perspective [49]. One study protocol reported an assessment of the SDM-Q-9 directly after the clinician-patient consultation [54]. Two studies will assess the SDM-Q-9 as a primary outcome [49, 53]. The interventions will take different forms, including decision aids and trainings, and most will target both physicians and patients [49, 50, 52–54]. While all six studies will have clustering on the clinic or practice level, three took clustering into account in their reported sample size calculation [49, 50, 52] and two in their planned statistical analyses [50, 51]. For the complete data extraction sheet of the study protocols, please see S7 Table.

Table 4. Characteristics of the included study protocols.

https://doi.org/10.1371/journal.pone.0173904.t004

Table 5. Characteristics of the included study protocols (continued).

https://doi.org/10.1371/journal.pone.0173904.t005

Methodological quality of included original studies

In summary, four original studies were rated “poor” [44–47] and one was rated as “fair” [48] (see S1 to S3 Tables).

The drop-out rate of the intervention group participants exceeded 20% in all controlled intervention studies, which is viewed as a ‘fatal flaw’ resulting in a “poor” rating [44–46] (S1 Table). The randomisation process was described in one study [45]. None of the studies conducted independent recruitment of participants or blinding of HCPs. The differential drop-out rate between intervention and control group was over 15% in two studies [44, 45], which is also considered a ‘fatal flaw’. Data on adherence to the intervention protocol or the utilization of other interventions were not reported [44–46]. Furthermore, none of the three cluster randomised trials reported a sample size sufficiently large to detect effects with ≥80% power [44–46]. One study controlled for baseline imbalances, took clustering effects into account in the sample size calculation and the statistical analysis of endpoints, and explicitly reported an intention-to-treat analysis [46].
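For readers unfamiliar with the clustering issue raised here, the following sketch illustrates, with hypothetical numbers, how the design effect 1 + (m − 1) × ICC inflates the sample size required for 80% power in a cluster randomised comparison of two means; the effect size, cluster size, and intraclass correlation (ICC) are assumptions chosen for illustration only and do not correspond to any of the included trials.

# Hypothetical illustration: clustering inflates the required per-arm sample
# size by the design effect 1 + (m - 1) * ICC.
from statistics import NormalDist

def per_arm_n(effect_size_d, alpha=0.05, power=0.80):
    """Per-arm n for an individually randomised two-sample comparison of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2

def cluster_adjusted_n(effect_size_d, cluster_size_m, icc, alpha=0.05, power=0.80):
    """Inflate the individually randomised n by the design effect."""
    design_effect = 1 + (cluster_size_m - 1) * icc
    return per_arm_n(effect_size_d, alpha, power) * design_effect

# With an assumed effect size d = 0.3, 20 patients per cluster, and ICC = 0.05,
# roughly 340 patients per arm are needed instead of about 174.
print(round(per_arm_n(0.3)), round(cluster_adjusted_n(0.3, 20, 0.05)))  # 174 340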

The quality of the implementation study with a historical control group was rated “poor” (S2 Table), as neither blinding of participants nor multiple measurement time points were reported. In addition, the intervention was not delivered consistently across the study population. All other criteria could be answered with “yes”.

The quasi-experimental controlled cohort study [48] received an overall “fair” rating (S3 Table). The participation rate of eligible persons was <50%, and the loss to follow-up after baseline was >20%. Criteria 6, 8, 9, and 10 were rated as not applicable. Blinding of the outcome assessors was not reported. All other criteria were fulfilled.

Quality of included study protocols

In summary, five study protocols were rated as “fair” [49–53] and one as “good” [54] (see S4 Table and S5 Table).

The assessment tool for controlled intervention studies was used for one RCT protocol [54], which received a “good” rating, and four cRCT protocols [49–52], which were rated “fair” (S4 Table). One cRCT protocol did not use the term “cluster” in the description of the study design, did not take cluster effects into account in the sample size calculation, and did not pre-specify outcomes [51]. Two of the five protocols did not report randomisation processes [49, 50], and three did not report on allocation concealment [49, 50, 54]. Blinding of participants was planned in one protocol [51], while two others did not report on this [50, 54]. Blinding of HCPs was planned in two studies [51, 54]. One of the four cRCT protocols reported independent recruitment of participants [52], and one planned blinded assessment of outcomes [51]. Two protocols reported plans to ascertain baseline similarity of samples [51, 54], whereas one cRCT protocol planned adjustment for baseline imbalances [52]. No protocol addressed the utilization of other interventions. All study protocols included a sample size calculation, and all four cRCTs considered cluster effects in their planned statistical analyses. All but one study protocol [54] planned analyses according to the intention-to-treat principle.

The protocol of a pre-post implementation study with a historical control group [53] received a “fair” rating (S5 Table), as no planned inferential statistics were reported and the measurement of outcome variables was not planned at multiple time points before and after implementation of the intervention. Furthermore, there was no information on the blinding of persons assessing outcomes. All other criteria were fulfilled.

Discussion

This systematic review aimed to 1) examine the use of the SDM-Q-9 and SDM-Q-Doc in intervention studies on SDM in clinical settings, 2) describe how the SDM-Q-9 and SDM-Q-Doc performed regarding sensitivity to change, and 3) assess the methodological quality of studies and study protocols using the measure. Five studies and six study protocols were included in this review.

Most reported trials were conducted in Europe. Four studies used both the SDM-Q-9 and SDM-Q-Doc [45, 47, 49, 53], whereas all others used the SDM-Q-9 only. In four trials the measure was adapted for other participants [44, 45, 49, 54], and seven of the included trials used it to assess primary outcomes [44, 45, 47–49, 53]. Our results reveal a range of application areas for the measure, although many studies assessed SDM in primary care settings [46, 48–50]. Moreover, the SDM-Q-9 and SDM-Q-Doc were applied to evaluate diverse interventions facilitating SDM, but were mainly used to assess training programs for HCPs and/or decision aids.

The reported mean sum scores ranged from 42 to 75 on a scale from 0 to 100. No significant differences in mean scores between intervention and control groups were detected in four of the five studies, and the difference detected in the fifth study [45] was small. This could hint at deficiencies in the sensitivity to change of the SDM-Q-9 and SDM-Q-Doc. However, several other explanations for this finding are also possible. First, the duration of the evaluated interventions was reported by only two studies [46, 48], both of which were relatively brief. The intervention dose might have been too low to accomplish behavior change. Research shows that various barriers need to be addressed for successful changes in behavior [55–57], and positive attitudes towards SDM do not automatically result in implementation in practice [58]. Furthermore, interventions targeting both patients and HCPs have been found to be more effective than single-target interventions. Thus, it is possible that some interventions did not succeed in implementing SDM. Second, two studies did not report direct assessment of the SDM-Q-9 and SDM-Q-Doc after the relevant consultation [44, 48], which leaves room for bias of effects by uncontrolled influences. Third, few original studies described the HCP sample characteristics, and they did not control for those variables, although there is evidence of their influence on SDM [59]. Thus, the results of this review do not allow us to draw firm conclusions on the measure’s sensitivity to change. A psychometric study focusing on the measure’s sensitivity to change is strongly recommended. Such a study could also investigate whether response formats other than the present 6-point Likert scale can increase sensitivity to change.
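As an illustration of how a future psychometric study might quantify sensitivity to change, one commonly used index is the standardized response mean (SRM), i.e., the mean pre-post change divided by the standard deviation of the change scores; the sketch below uses invented scores and is not based on data from the included studies.

# Hypothetical illustration of the standardized response mean
# (SRM = mean pre-post change / SD of change scores); the scores below are
# invented and do not come from any of the included studies.
import statistics

pre  = [45, 50, 38, 60, 55, 42, 48, 53, 40, 58]   # SDM-Q-9 sum scores (0-100) before an intervention
post = [52, 47, 40, 70, 55, 47, 47, 61, 43, 56]   # scores after the intervention

changes = [b - a for a, b in zip(pre, post)]
srm = statistics.mean(changes) / statistics.stdev(changes)
print(round(srm, 2))  # about 0.65 here; by convention 0.2/0.5/0.8 are read as small/moderate/large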

Study quality, as measured by the Quality Assessment Tools of the U.S. National Institutes of Health, was assessed for seven cRCTs, one RCT, two pre-post implementation studies, and one quasi-experimental controlled cohort study. Of the original studies, only the quasi-experiment was rated “fair”, with some risk of bias [48]. All others received a “poor” rating, as they had ‘fatal flaws’ posing a high risk of bias to their internal validity [44–47]. Admittedly, the “fair” rating has to be handled with caution, as the quality assessment instrument did not completely fit the study design. The quality of the rated study protocols was slightly better, with five “fair” ratings [49–53] and one “good” rating, with very low risk of bias, for the RCT [54]. This might be due to the fact that not all items could be applied to those trials. As protocols do not contain results, they leave less room for possible flaws, especially since many original studies were rated poor due to a high drop-out rate, which cannot be rated for study protocols. Even so, there was a great difference in the detail and completeness of the methodological descriptions between study protocols and original studies. This could also be explained by gradually higher adherence to research and reporting guidelines over time, leading to slightly better ratings for the more recent study protocols. Still, even the more detailed methodology descriptions of protocols did not always satisfy the criteria regarding randomisation.

The classification of a high drop-out rate (>20% at endpoint in the intervention group) as a ‘fatal flaw’ sets a threshold that may rarely be met in health care research under routine conditions. Especially in primary care, many factors aside from intervention effects can influence follow-up rates, as there are practical reasons for changing one’s general practitioner (e.g., moving to another area). The difficulty of blinding HCPs to treatments when evaluating SDM trainings for HCPs should also be taken into account. The criteria of the risk of bias tool for observational cohort and cross-sectional studies, demanding ≥50% participation of the eligible population and ≤20% loss to follow-up after baseline, seem difficult to achieve considering clinical care population sizes, return rates of postal recruitment, and repeated measurements. For example, Tinsel et al. (2013) report that loss to follow-up was generally higher in primary care studies with long-term follow-up [46]. In many of the included studies, recruitment was done by the general practitioner (GP). However, recruitment by GPs has been found to be less successful, and trials’ overall success may even decrease when GPs’ alertness during consultations is essential [60], which is undoubtedly the case for SDM. Consequently, ratings of the original studies might have been better with less strict criteria. Despite the range of factors that can explain the quality ratings, the overall quality of the included intervention studies must be summarised as low and the quality of the intervention study protocols as moderate.

The SDM-Q-9 and -Doc are relatively young instruments, and translating them, conducting a study, and publishing data take time. Some excluded articles in our screenings still used the first version from 2006 [61]. More research with the measure is underway, so feedback from different researchers and results from the included protocols are yet to come. There were more than twenty articles found in the screenings that utilised the measure for other purposes, such as validating new SDM measures or simply to assess the status of SDM in a clinical setting. An update of the present systematic review in a couple of years would certainly be helpful to draw better conclusions from a larger number of studies on the measure’s sensitivity to change.

There are several strengths and limitations to the present systematic review. One strength is the comprehensive database search combined with a comprehensive secondary search. Another strength is that the title and abstract screening as well as the full-text screening were done by two independent reviewers for all articles; the same applies to the quality assessment. A main limitation is that the data extraction was performed by only one reviewer, which leaves room for possible bias. It must also be noted that results of only five completed studies could be assessed, which might decrease the generalisability of the review’s conclusions. Furthermore, this review focused on adult patients, mainly because the 9-item Shared Decision Making Questionnaire is designed for use in adult populations. However, the use of SDM in pediatric populations is a growing area of clinical and research interest; thus, adapting the measure for use in this setting could be an area of future research.

In conclusion, the identified records showed a range of applications of the measure in different health care settings and its use to evaluate diverse interventions. We found the included studies to be of limited methodological quality. Our results also suggest that future articles on original studies should describe the methodology and interventions in more detail. Research ought to assess HCP characteristics more thoroughly, conduct independent recruitment, and control for the actual implementation of SDM. Future trials ought to either consider randomisation at the patient level or correct for clustering effects in cRCT sample size calculations and statistical analyses. The sensitivity to change of the SDM-Q-9 and SDM-Q-Doc remains unclear; it is uncertain whether the measure fails to detect changes or whether there were no changes in perceived SDM. Therefore, it might be advisable to combine the SDM-Q-9 and SDM-Q-Doc with an observer-based measure of SDM, as Scholl and colleagues found that the patient-reported measure does not correlate significantly with an observer-based instrument [62]. Likewise, a combination with instruments assessing actual change in patient and HCP behavior regarding SDM seems reasonable for future studies. The heterogeneity of trials examining interventions to facilitate SDM is vast and makes comparisons and the examination of perceived SDM difficult.

This review may help researchers decide whether the measure fits their purposes. Furthermore, it reveals risks of bias in previous trials that used the measure and may help prospective researchers decrease these risks. Finally, more research on the measure’s sensitivity to change is strongly suggested before using it in further intervention studies.

Supporting information

S1 Appendix. Electronic database search strategy for EMBASE, PsycINFO, Medline.

NR = not reported, NA = not applicable; sources for added criteria: 1.1 [51], 4.1 [52, 53], 6.1 [51, 54–57], 12.1 a) & b) [51, 57–59]. + To understand the trial procedure of Körner et al. 2014, the article of Körner et al. 2012 needed to be consulted, as the description of methodology and terminology was otherwise unclear to two reviewers.

https://doi.org/10.1371/journal.pone.0173904.s001

(DOCX)

S1 Table. Quality assessment of controlled intervention studies (original studies).

https://doi.org/10.1371/journal.pone.0173904.s002

(DOCX)

S2 Table. Quality assessment for before-after studies (original studies).

https://doi.org/10.1371/journal.pone.0173904.s003

(DOCX)

S3 Table. Quality assessment for observational cohort and cross-sectional studies (original studies).

NR = not reported, NA = not applicable.

https://doi.org/10.1371/journal.pone.0173904.s004

(DOCX)

S4 Table. Quality assessment of controlled intervention studies (study protocols).

CD = cannot determine, NR = not reported, NA = not applicable; sources for added criteria: 1.1 [51], 4.1 [52, 53], 6.1 [51, 54–57], 12.1 a) & b) [51, 57–59].

https://doi.org/10.1371/journal.pone.0173904.s005

(DOCX)

S5 Table. Quality assessment for before-after-studies (study protocols).

NR = not reported

https://doi.org/10.1371/journal.pone.0173904.s006

(DOCX)

S6 Table. Data extraction sheet for original studies.

https://doi.org/10.1371/journal.pone.0173904.s007

(DOCX)

S7 Table. Data extraction sheet for study protocols.

https://doi.org/10.1371/journal.pone.0173904.s008

(DOCX)

Acknowledgments

We would like to thank Alice Diesing for her help in the preparation of the manuscript, and Allison LaRussa for copyediting the manuscript.

Author Contributions

  1. Conceptualization: HD IS LK MH.
  2. Data curation: HD EC.
  3. Formal analysis: HD EC IS.
  4. Investigation: HD EC IS.
  5. Methodology: HD IS EC LK.
  6. Project administration: HD IS.
  7. Resources: HD EC IS MH.
  8. Supervision: IS LK MH.
  9. Validation: HD EC IS.
  10. Visualization: HD IS.
  11. Writing – original draft: HD IS.
  12. Writing – review & editing: HD EC LK MH IS.

References

1. Légaré F, Ratté S, Gravel K, Graham ID. Barriers and facilitators to implementing shared decision-making in clinical practice: Update of a systematic review of health professionals’ perceptions. Patient Educ Couns. 2008;73(3): 526–535. pmid:18752915
2. Härter M, van der Weijden T, Elwyn G. Policy and practice developments in the implementation of shared decision making: An international perspective. Z Evid Fortbild Qual Gesundhwes. 2011;105(4): 229–233. pmid:21620313
3. Coulter A, Härter M, Moumjid-Ferdjaoui N, Perestelo-Perez L, Van Der Weijden T. European experience with shared decision making. 2015.
4. Hamann J, Mendel R, Bühner M, Kissling W, Cohen R, Knipfer E, et al. How should patients behave to facilitate shared decision making–The doctors’ view. Health Expect. 2012;15(4): 360–366. pmid:21624024
5. Coulter A, Jenkinson C. European patients' views on the responsiveness of health systems and healthcare providers. Eur J Public Health. 2005;15(4): 355–360. pmid:15975955
6. Say R, Murtagh M, Thomson R. Patients’ preference for involvement in medical decision making: A narrative review. Patient Educ Couns. 2006;60(2): 102–114. pmid:16442453
7. Chewning B, Bylund CL, Shah B, Arora NK, Gueguen JA, Makoul G. Patient preferences for shared decisions: A systematic review. Patient Educ Couns. 2012;86(1): 9–18. pmid:21474265
8. Adams JR, Drake RE, Wolford GL. Shared decision-making preferences of people with severe mental illness. Psychiatr Serv. 2007;58(9): 1219–1221. pmid:17766569
9. Simon D, Loh A, Härter M. Grundlagen der partizipativen Entscheidungsfindung und Beispiele der Anwendung in der Rehabilitation. Die Rehabilitation. 2008;47(02): 84–89.
10. Charles C, Gafni A, Whelan T. Shared decision-making in the medical encounter: What does it mean? (or it takes at least two to tango). Soc Sci Med. 1997;44(5): 681–692. pmid:9032835
11. Kriston L, Scholl I, Hölzel L, Simon D, Loh A, Härter M. The 9-item Shared Decision Making Questionnaire (SDM-Q-9). Development and psychometric properties in a primary care sample. Patient Educ Couns. 2010;80(1): 94–99. pmid:19879711
12. Makoul G, Clayman ML. An integrative model of shared decision making in medical encounters. Patient Educ Couns. 2006;60(3): 301–312. pmid:16051459
13. Eich W. Shared decision making in Medizin und Psychotherapie. Psychotherapie im Dialog. 2009;10(4): 364–368.
14. Rockenbauch K, Schildmann J. Shared decision making (SDM): A systematic survey of terminology use and concepts. Gesundheitswesen. 2011;73(7): 399–408. pmid:20859849
15. Légaré F, Politi MC, Drolet R, Desroches S, Stacey D, Bekker H. Training health professionals in shared decision-making: an international environmental scan. Patient Educ Couns. 2012;88(2): 159–169. pmid:22305195
16. Joosten EAG, DeFuentes-Merillas L, De Weert GH, Sensky T, Van Der Staak CPF, de Jong CAJ. Systematic review of the effects of shared decision-making on patient satisfaction, treatment adherence and health status. Psychother Psychosom. 2008;77(4): 219–226. pmid:18418028
17. Hölzel LP, Kriston L, Härter M. Patient preference for involvement, experienced involvement, decisional conflict, and satisfaction with physician: A structural equation model test. BMC Health Serv Res. 2013;13(231):
18. Elwyn G, Frosch D, Thomson R, Joseph-Williams N, Lloyd A, Kinnersley P, et al. Shared decision making: A model for clinical practice. J Gen Intern Med. 2012;27(10): 1361–1367. pmid:22618581
19. Scholl I, Koelewijn-van Loon M, Sepucha K, Elwyn G, Légaré F, Härter M, et al. Measurement of shared decision making—A review of instruments. Z Evid Fortbild Qual Gesundhwes. 2011;105(4): 313–324. pmid:21620327
20. Nicolai J, Moshagen M, Eich W, Bieber C. The OPTION scale for the assessment of shared decision making (SDM): methodological issues. Z Evid Fortbild Qual Gesundhwes. 2012;106(4): 264–271. pmid:22749073
21. Barr PJ, O'Malley AJ, Tsulukidze M, Gionfriddo MR, Montori V, Elwyn G. The psychometric properties of Observer OPTION(5), an observer measure of shared decision making. Patient Educ Couns. 2015;98(8): 970–976. pmid:25956069
22. Lerman CE, Brody DS, Caputo GC, Smith DG, Lazaro CG, Wolfson HG. Patients' Perceived Involvement in Care Scale: relationship to attitudes about illness and medical care. J Gen Intern Med. 1990;5(1): 29–33. pmid:2299426
23. Barr PJ, Thompson R, Walsh T, Grande SW, Ozanne EM, Elwyn G. The psychometric properties of CollaboRATE: a fast and frugal patient-reported measure of the shared decision-making process. J Med Internet Res. 2014;16(1): e2. pmid:24389354
24. Melbourne E, Roberts S, Durand MA, Newcombe R, Légaré F, Elwyn G. Dyadic OPTION: Measuring perceptions of shared decision-making in practice. Patient Educ Couns. 2011;83(1): 55–57. pmid:20537837
25. Kasper J, Hoffmann F, Heesen C, Kopke S, Geiger F. MAPPIN'SDM—the multifocal approach to sharing in shared decision making. PLoS ONE. 2012;7(4): e34849. pmid:22514677
26. Elwyn G, Edwards A, Kinnersley P, Grol R. Shared decision making and the concept of equipoise: The competences of involving patients in healthcare choices. Br J Gen Pract. 2000;50(460): 892–899. pmid:11141876
27. Scholl I, Kriston L, Dirmaier J, Buchholz A, Härter M. Development and psychometric properties of the Shared Decision Making Questionnaire-Physician version (SDM-Q-DOC). Patient Educ Couns. 2012;88(2): 284–290. pmid:22480628
28. Rodenburg-Vandenbussche S, Pieterse AH, Kroonenberg PM, Scholl I, van der Weijden T, Luyten GPM, et al. Dutch translation and psychometric testing of the 9-Item Shared Decision Making Questionnaire (SDM-Q-9) and Shared Decision Making Questionnaire-Physician Version (SDM-Q-Doc) in primary and secondary care. PLoS ONE. 2015;10(7): e0132158. pmid:26151946
29. De las Cuevas C, Perestelo-Perez L, Rivero-Santana A, Cebolla-Marti A, Scholl I, Härter M. Validation of the Spanish version of the 9-item Shared Decision-Making Questionnaire. Health Expect. 2014;18(6): 2143–2153. pmid:24593044
30. Ebrahimi MAH, Hajebrahimi S, Mostafaie H, Pashazadeh F, Hajebrahimi A. Physicians' perspectives toward shared decision making in developing countries. Br J Med Med Res. 2014;4(18): 3458–3464.
31. Zisman-Ilani Y, Roe D, Scholl I, Härter M, Karnieli-Miller O. Shared decision making during active psychiatric hospitalization: Assessment and psychometric properties. Health Commun. 2016: 1–5.
32. Légaré F, Stacey D, Turcotte S, Cossi M-J, Kryworuchko J, Graham ID, et al. Interventions for improving the adoption of shared decision making by healthcare professionals. Cochrane Database Syst Rev. 2014(9):
33. Lenz M, Buhse S, Kasper J, Kupfer R, Richter T, Mühlhauser I. Decision aids for patients. Dtsch Arztebl. 2012;109(22–23): 401–408.
34. Stacey D, Légaré F, Col NF, Bennett CL, Barry MJ, Eden KB, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2014:
35. Kriston L. Dealing with clinical heterogeneity in meta-analysis. Assumptions, methods, interpretation. Int J Methods Psychiatr Res. 2013;22(1): 1–15. pmid:23494781
36. U.S. Department of Health and Human Services, National Institutes of Health, National Heart, Lung, and Blood Institute (NIH). Assessing cardiovascular risk—Systematic evidence review from the risk assessment work group. 2013. Available: http://www.nhlbi.nih.gov/health-pro/guidelines/in-develop/cardiovascular-risk-reduction/risk-assessment. Accessed 29 July 2016.
37. Donner A, Klar N. Issues in the meta-analysis of cluster randomized trials. Stat Med. 2002;21(19): 2971–2980. pmid:12325113
38. Elbourne DR, Campbell MK. Extending the CONSORT statement to cluster randomized trials: For discussion. Stat Med. 2001;20(3): 489–496. pmid:11180315
39. Eldridge SM, Ashby D, Feder GS, Rudnicka AR, Ukoumunne OC. Lessons for cluster randomized trials in the twenty-first century: A systematic review of trials in primary care. Clinical Trials. 2004;1(1): 80–90. pmid:16281464
40. Farrin A, Russell I, Torgerson D, Underwood M, Team UBT. Differential recruitment in a cluster randomized trial in primary care: The experience of the UK back pain, exercise, active management and manipulation (UK BEAM) feasibility study. Clinical Trials. 2005;2(2): 119–124. pmid:16279133
41. Harris JE. Macroexperiments versus microexperiments for health policy. Social Experimentation. Chicago: University of Chicago Press; 1985. pp. 145–186.
42. Hayes RJ, Bennett S. Simple sample size calculation for cluster-randomized trials. Int J Epidemiol. 1999;28(2): 319–326. pmid:10342698
43. Ukoumunne OC, Thompson SG. Analysis of cluster randomized trials with repeated cross-sectional binary measurements. Stat Med. 2001;20(3): 417–433. pmid:11180311
44. Körner M, Ehrhardt H, Steger A-K, Bengel J. Interprofessional SDM train-the-trainer program "Fit for SDM": Provider satisfaction and impact on participation. Patient Educ Couns. 2012;89(1): 122–128. pmid:22647558
45. Körner M, Wirtz M, Michaelis M, Ehrhardt H, Steger A-K, Zerpies E, et al. A multicentre cluster-randomized controlled study to evaluate a train-the-trainer programme for implementing internal and external participation in medical rehabilitation. Clin Rehabil. 2014;28(1): 20–35. pmid:23858525
46. Tinsel I, Buchholz A, Vach W, Siegel A, Dürk T, Buchholz A, et al. Shared decision-making in antihypertensive therapy: A cluster randomised controlled trial. BMC Fam Pract. 2013;14(135):
47. Brito JP, Castaneda-Guarderas A, Gionfriddo MR, Singh Ospina NM, Maraka S, Dean D, et al. Development and pilot testing of an encounter tool for shared decision making about the treatment of graves disease. Thyroid. 2015:
48. Hölzel LP, Vollmer M, Kriston L, Siegel A, Härter M. Patientenbeteiligung bei medizinischen Entscheidungen in der integrierten Versorgung Gesundes Kinzigtal: Ergebnisse einer kontrollierten Kohortenstudie. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz. 2012;55(11): 1524–1533.
49. den Ouden H, Vos RC, Reidsma C, Rutten G. Shared decision making in type 2 diabetes with a support decision tool that takes into account clinical factors, the intensity of treatment and patient preferences: Design of a cluster randomised (OPTIMAL) trial. BMC Fam Pract. 2015;16:
50. Drewelow E, Wollny A, Pentzek M, Immecke J, Lambrecht S, Wilm S, et al. Improvement of primary health care of patients with poorly regulated diabetes mellitus type 2 using shared decision-making-The DEBATE trial. BMC Fam Pract. 2012;13:
51. Geiger F, Liethmann K, Hoffmann F, Paschedag J, Kasper J. Investigating a training supporting shared decision making (IT'S SDM 2011): Study protocol for a randomized controlled trial. Trials. 2011;12(232):
52. Löffler C, Drewelow E, Paschka SD, Frankenstein M, Eger J, Jatsch L, et al. Optimizing polypharmacy among elderly hospital patients with chronic diseases—Study protocol of the cluster randomized controlled POLITE-RCT trial. Implement Sci. 2014;9(151):
53. Savelberg W, Moser A, Smidt M, Boersma L, Haekens C, van der Weijden T. Protocol for a pre-implementation and post-implementation study on shared decision-making in the surgical treatment of women with early-stage breast cancer. BMJ Open. 2015;5(3): e007698. pmid:25829374
54. Goss C, Ghilardi A, Deledda G, Buizza C, Bottacini A, Del Piccolo L, et al. INvolvement of breast CAncer patients during oncological consultations: A multicentre randomised controlled trial—The INCA study protocol. BMJ Open. 2013;3(5): e002266. pmid:23645911
55. Pill R, Stott NC, Rollnick SR, Rees M. A randomized controlled trial of an intervention designed to improve the care given in general practice to type II diabetic patients: Patient outcomes and professional ability to change behaviour. Fam Pract. 1998;15(3): 229–235. pmid:9694180
56. Légaré F, O'Connor AM, Graham ID, Saucier D, Cote L, Blais J, et al. Primary health care professionals' views on barriers and facilitators to the implementation of the Ottawa Decision Support Framework in practice. Patient Educ Couns. 2006;63(3): 380–390. pmid:17010555
57. Grimshaw JM, Eccles MP, Walker AE, Thomas RE. Changing physicians' behavior: What works and thoughts on getting more things to work. J Contin Educ Health Prof. 2002;22(4): 237–243. pmid:12613059
58. Stevenson FA. General practitioners’ views on shared decision making: A qualitative analysis. Patient Educ Couns. 2003;50(3): 291–293. pmid:12900102
59. Floer B, Schnee M, Böcken J, Streich W, Kunstmann W, Isfort J, et al. Shared Decision Making. Gemeinsame Entscheidungsfindung aus der ärztlichen Perspektive. Medizinische Klinik. 2004;8(99): 435–440.
60. van der Wouden JC, Blankenstein AH, Huibers MJ, van der Windt DA, Stalman WA, Verhagen AP. Survey among 78 studies showed that Lasagna's law holds in Dutch primary care research. J Clin Epidemiol. 2007;60(8): 819–824. pmid:17606178
61. Simon D, Schorr G, Wirtz M, Vodermaier A, Caspari C, Neuner B, et al. Development and first validation of the Shared Decision-Making Questionnaire (SDM-Q). Patient Educ Couns. 2006;63(3): 319–327. pmid:16872793
62. Scholl I, Kriston L, Dirmaier J, Härter M. Comparing the nine-item Shared Decision-Making Questionnaire to the OPTION Scale—An attempt to establish convergent validity. Health Expect. 2012;18(1).