Research Article | Methodology

Rochester Participatory Decision-Making Scale (RPAD): Reliability and Validity

Cleveland G. Shields, PhD; Peter Franks, MD; Kevin Fiscella, MD, MPH; Sean Meldrum, MA; and Ronald M. Epstein, MD
The Annals of Family Medicine September 2005, 3 (5) 436-442; DOI: https://doi.org/10.1370/afm.305

Abstract

PURPOSE We wanted to develop a reliable and valid objective measure of patient-physician collaborative decision making, the Rochester Participatory Decision-Making Scale (RPAD).

METHODS Based on an informed decision-making model, the RPAD assesses physician behavior that encourages patient participation in decision making. Data were from a study of physician-patient communication involving 100 primary care physicians. Each physician's encounters with 2 standardized patients were audio recorded, resulting in 193 usable recordings. Transcribed recordings were coded both with the RPAD and with the Measure of Patient-Centered Communication (MPCC), which includes a related construct, Finding Common Ground. Two sets of dependent variables were derived from (1) surveys of the standardized patients and (2) surveys of 50 patients of each physician, both of which assessed perceptions of the physician-patient relationship.

RESULTS The RPAD was coded reliably (intraclass correlation coefficient [ICC] = 0.72). RPAD scores correlated with Finding Common Ground (r = 0.19, P <.01) and with the survey measures of standardized patients' perceptions of the physician-patient relationship (r = 0.32 to 0.36, P <.005), but less strongly with the patient survey measures (r = 0.06 to 0.07, P <.005). Multivariate, hierarchical analyses suggested that the RPAD made a more robust contribution to explaining variance in standardized patient perceptions than did MPCC Finding Common Ground.

CONCLUSIONS The RPAD shows promise as a reliable, valid, and easy-to-code objective measure of participatory decision making.

  • Physician-patient relations
  • medical decision making
  • informatics

INTRODUCTION

Participatory decision making has been reported to affect health outcomes, including control of chronic disease1 and functional outcomes.2 Based on those early results and more recent studies showing a lack of patient involvement in decisions,3 physicians have been encouraged to adopt a more participatory style. Some consider participatory decision making a moral imperative in medicine regardless of its impact on outcomes.4 The outcomes of efforts to improve participatory decision making have been mixed; although effects on consultation style and satisfaction have been reported,5,6 effects on control of chronic disease have not been replicated.7 These studies have often relied on patient surveys to assess participatory decision making; a validated observational instrument would provide a more objective description of behaviors and reduce the likelihood of confounding that arises when measures of participatory decision making and reported outcomes come from the same patient survey.

Participatory decision making emerged in the 1970s as an alternative to the more traditional paternalistic model in which physicians made decisions for their patients8–12; initially it was influenced by consumerist models of care, which hold that patients have the right to information and self-determination.13,14 A contractual model elaborated on the consumerist model by emphasizing the importance of taking patients' stated values into account when arriving at decisions.15 Participatory decision making is probably most closely related to a deliberative model in which physicians elicit and respect patients' values but also offer expertise and recommendations, sometimes using persuasion to encourage healthier options when there is no initial consensus.13 Thus, participatory decision making consists of 2 processes: expert problem solving and decision making.16 Problem solving is the province of physicians, whose expertise informs their judgment in determining treatment options. Decision making involves patients working with the physician to determine which treatment options best satisfy the patient's preferences.

Measurement of the process of participatory decision making has been elusive. Patient surveys may not capture the level of detail needed to inform physician training interventions. Current interaction analysis systems, such as the Measure of Patient-Centered Communication (MPCC)17 and the Roter Interaction Analysis System (RIAS),18 capture some key behaviors that may be indicators of participatory decision making (eg, patient question-asking), but not others.19 Braddock et al developed an instrument based on a consensually derived set of behavioral criteria for "informed" decision making.3,20 Using their criteria, informed decision making occurred in only 9% of primary care office visits, raising concern that physicians need to develop better skills in involving patients in their care.3 Despite its usefulness as a descriptive measure that defines the conceptual domains of informed decision making, this instrument has some limitations: there is no overall scale score, and criterion validity has not been reported.

Many of the models described above focus on information sought, offered, and received. But participatory decision making also includes the responsiveness of physicians to a richer range of patient participation in decisions beyond assuring that patients have been informed. Using the Braddock et al scale as a starting point,3 we sought to develop a reliable and valid objective measure of physician behaviors that encourage participatory decision making. We developed new items and a simple method of scoring the scale to construct the Rochester Participatory Decision Making Scale (RPAD). While it is clear that patients also bring attitudes and behaviors that contribute to participatory decision making, our scale was developed to evaluate physician communication behavior and to be used for physician training purposes, rather than as a purely descriptive measure of conversational process. For this reason, we used unannounced and covert standardized patients to reduce patient variability so that we could observe the differences in physician participatory decision-making behavior when confronted with a nearly identical stimulus.

METHODS

The RPAD was developed as part of a larger study that examined the relationship between physicians’ communication behaviors and health care costs. The larger study involved audio recording and coding standardized patient visits to physicians, surveys of standardized patients (measuring their perceptions of the encounter), physician surveys (personality and demographics), patient surveys (measures of the patient-physician relationship, satisfaction, demographics, illness morbidity, physical and mental functioning), and claims data from a large managed care organization.

Research Participants

We had 3 sets of participants in this study: primary care physicians, standardized patients, and real patients. One hundred primary care physicians (internists and family physicians) who were members of the independent practice association of a managed care organization were recruited and enrolled in the study. Standardized patients made 2 unannounced, covert, audio-recorded visits to physicians. The first standardized patient role was constructed to mimic typical patients in primary care with straightforward symptoms of gastroesophageal reflux (GERD case). The second role was designed to simulate patients with medically unexplained symptoms so we could explore how physicians handle situations that involve potential disagreements about the meaning of symptoms, the diagnosis, and its treatment (ambiguous case). Two male and 3 female standardized patients were used. All visits were audio recorded with recorders hidden in purses and backpacks.

The order of standardized patient visits (male or female, role) was randomized for each physician. In the treatment and planning phase of the office visit, standardized patients were instructed to respond to physicians’ questions and to ask clarifying questions, but they were not to challenge directly the physician’s assessment. At one point during each visit, however, standardized patients were instructed to ask whether their symptoms could represent something serious so they could communicate to the physician a moderate level of anxiety. Thus, we sought to create typical patients in current primary care practice. Standardized patients participated in a pilot test to assure they were realistic, and we sought feedback from pilot physicians on whether the standardized patients seemed typical and ordinary.

Physicians completed questionnaires, and 50 visiting patients from each physician's office were also recruited to complete questionnaires. We approached 4,963 eligible patients; 4,746 (95.6%) completed the questionnaire. The reasons for refusal were as follows: 185 patients stated that they disliked questionnaires, 109 refused because of illness, and 52 felt rushed. Demographic information on the physician and patient samples is contained in Tables 1 and 2.

Table 1.

Characteristics of Physicians in Sample

Table 2.

Characteristics of Patients Surveyed

Two days after the visit, a fax was sent to the physician to determine whether, when prompted, the physician could identify the standardized patient. The fax notified the physician that a standardized patient had visited in the past few days; the physicians were asked whether they suspected they had seen a standardized patient and, if so, to describe the patient and indicate how realistic the portrayal was. Forty percent of physicians identified the standardized patients from this prompted recall.

Analysis of Audio-Recorded Encounters

Each standardized patient visit was recorded using a digital audio disk recorder with a high-quality microphone. Visit length was calculated (in minutes), excluding waiting time in the examining room before the visit and any period of more than 1 minute during which the physician left the room.

RPAD Scale Development

The RPAD was developed by incorporating items suggested by Braddock et al3 as indicative of physician behaviors that encourage patient participation in decision making. In developing the RPAD, we observed that some physician behaviors were performed fully, whereas others were completed only partially. This finding led us to create a coding scheme for each item that gave a score of 0 for no evidence of the behavior, ½ for partial presence of the behavior, and 1 for full presence of the behavior (Table 3). We developed a coding manual with descriptions and examples for each 0, ½, and 1 score to guide raters (available from the first author).

Table 3.

Rochester Participatory Decision-Making Scale (RPAD)

We pilot tested the scale on 10 audio-recorded visits. We discontinued items that never received a code. We were left with 4 items; we then developed 5 more items and scoring criteria for each and pilot tested them. The final coding system is shown in Table 3. The 10 visits we used to develop the scale were recoded after all other tapes had been coded and used as data in the analysis. We have included the discarded items in the Supplemental Appendix, available online only at http://www.annfammed.org/cgi/content/full/3/5/436/DC1.

Coders first listened to the entire audio recording and then listened again to code the instances of physician behaviors listed on the RPAD coding sheet. Each time they found an example, they stopped the tape and listened again to that section to determine whether the behavior deserved a score of 0, ½, or 1.
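
For concreteness, a minimal sketch of this scoring scheme follows. The item labels and data layout are illustrative assumptions, not the authors' software or the exact wording of Table 3.

```python
# Minimal sketch of the RPAD scoring scheme described above: each item is
# coded 0 (no evidence), 0.5 (partial), or 1 (full); a visit score is the sum
# across items; and a physician "style" score averages the 2 standardized
# patient visits (averaging two item codes is what yields quarter-point values).
# Item labels below are illustrative placeholders, not the Table 3 wording.

VALID_CODES = {0, 0.5, 1}

def score_visit(item_codes: dict) -> float:
    """Sum the 0 / 0.5 / 1 codes assigned to one audio-recorded visit."""
    for item, code in item_codes.items():
        if code not in VALID_CODES:
            raise ValueError(f"item {item!r} has invalid code {code}")
    return sum(item_codes.values())

def physician_style_score(visit_scores) -> float:
    """Average the visit scores across a physician's standardized patient visits."""
    return sum(visit_scores) / len(visit_scores)

# Hypothetical usage:
# visit = {"clarified_agreement": 1, "discussed_uncertainty": 0.5, "open_questions": 0}
# print(score_visit(visit))                   # 1.5
# print(physician_style_score([6.0, 7.5]))    # 6.75
```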

The MPCC

We also coded using the MPCC,17 a measure of physician responsiveness to patient concerns, including participation in care. See the Supplemental Appendix for information about the MPCC.

Patient Survey

Patient questionnaires that were administered to 50 patients of each physician included 4 scales: the 5-item Health Care Climate Questionnaire (HCCQ),21 the Primary Care Assessment Survey (PCAS) knowledge and trust subscales,22,23 and a single-item satisfaction scale. Details can be found in the Supplemental Appendix.

Patient data for covariate adjustment were also collected, including demographics (age, sex, race/ethnicity, and educational level), health status (mental and physical component summary scores of the SF-12 Health Survey [MCS-12 and PCS-12]),24 the SCL-90 (Symptom Checklist-90) somatization score,25 11 patient-reported morbidities, and the length of the physician-patient relationship.

Standardized Patient Survey

The standardized patients also completed questionnaires after their visits with physicians. The HCCQ21 and the PCAS trust subscale were completed by both patients and standardized patients.22,23,26

Statistical Analysis

We examined the coding reliability of the RPAD by calculating the intraclass correlation coefficient (ICC). We also examined the case-to-case reliability of the RPAD coding of the 2 standardized patient cases as a measure of physician style using the Spearman-Brown prophecy formula, α = n × r / (1 + (n − 1) × r), where n = number of standardized patient cases and r = average correlation between cases. This formula treats the 2 cases as items in a scale assessing the physician's style and calculates a coefficient of reliability. We then examined the relationship of the RPAD with the MPCC total score and its components. We expected the measures to be moderately related, but our primary hypothesis was that the RPAD would correlate with Component 3 (Finding Common Ground), because this component measures physician-patient interaction around the delivery of the diagnosis and treatment plan. Finally, we examined criterion validity by relating the RPAD to patients' and standardized patients' perceptions of their relationships with their physicians using multivariate methods. We were particularly interested in the contribution the RPAD made to patient and standardized patient perceptions independent of the other objective measure of physician-patient interaction (the MPCC). The multivariate analysis methods and results are included in an online Supplemental Appendix.
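
As an illustration of the case-to-case reliability calculation, a minimal sketch is given below. The data layout (one row per physician, with RPAD totals for the 2 cases) and the column and file names are assumptions for illustration, not the study's actual files or software.

```python
# Minimal sketch of the case-to-case reliability calculation described above,
# assuming a pandas DataFrame with one row per physician and the RPAD totals
# for the two standardized patient cases in columns "gerd" and "ambiguous".
import pandas as pd

def spearman_brown(r_avg: float, n: float) -> float:
    """Spearman-Brown prophecy formula: reliability of an n-item composite
    given the average correlation r_avg between the items (here, the cases)."""
    return n * r_avg / (1 + (n - 1) * r_avg)

def physician_style_reliability(df: pd.DataFrame) -> float:
    """Treat the 2 cases as items in a scale assessing physician style."""
    r = df["gerd"].corr(df["ambiguous"])  # Pearson correlation between the 2 cases
    return spearman_brown(r, n=2)

# Hypothetical usage:
# df = pd.read_csv("rpad_totals.csv")   # columns: physician_id, gerd, ambiguous
# print(physician_style_reliability(df))
```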

RESULTS

We analyzed 193 audio recordings from the standardized patient encounters with the 100 physicians. Seven recordings were not available: 3 were lost to equipment failure, and 4 physicians moved their practices before completing the study. We averaged 49.4 (SD = 6) patient questionnaires from each physician's office. Patients reported an average of 1.25 illnesses from a list of 13 commonly treated primary care conditions. (Detailed information on patient illnesses and health status is included in Supplemental Table 1, available online only at http://www.annfammed.org/cgi/content/full/3/5/436/DC1.)

Reliability of the RPAD

The ICC for the RPAD was 0.72. Reliability of the RPAD as a measure of physician style, using the Spearman-Brown prophecy formula based on the 2 standardized patient encounters, was 0.53. Coding an audio-recorded encounter took approximately 50 minutes: roughly 20 minutes to listen to the tape in full and another 30 minutes to code the approximately 20-minute recording.
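
For reference, solving the Spearman-Brown formula from the Methods section for the average inter-case correlation implied by the reported reliability of 0.53 with n = 2 cases gives approximately:

```latex
\alpha = \frac{n r}{1 + (n-1) r}
\;\Longrightarrow\;
r = \frac{\alpha}{n - (n-1)\alpha}
  = \frac{0.53}{2 - 0.53} \approx 0.36
```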

RPAD Distribution and Scoring

Table 4 shows the distribution of scores on the RPAD. Each item was scored 0, ½, or 1, but when averaged over 2 cases, the scores also included ¼ and ¾. Almost 70% of the physicians gave a clear description of the clinical problem, though 53% did not discuss uncertainties in any way. Almost all the physicians attempted to clarify agreement on the diagnosis and treatment plan; 98% had a score of at least ½. Most physicians, 93%, did not discuss barriers to carrying out the treatment plan. The bulk of patients, 92%, were given some opportunity to ask questions. Most of the time, physician language matched the patients'. More than 25% of the time, physicians asked whether patients had any questions. A small percentage of physicians used open-ended questions, and a similarly small percentage checked patients' understanding.

Table 4.

Rochester Participatory Decision-Making Scale (RPAD) Descriptive Statistics

Correlations of RPAD with MPCC, Physician Characteristics, and Patient and Standardized Patient Surveys

Table 5 shows the Pearson correlations between the RPAD and the MPCC total score and components. As hypothesized, the RPAD correlated with Finding Common Ground, MPCC Component 3. The RPAD also correlated with the MPCC total score and with Exploring Disease and Illness, Component 1. The RPAD was not correlated with Understanding the Whole Person, Component 2, nor with physician age, sex, or years in practice. The RPAD was correlated with standardized patient survey findings on the HCCQ and with PCAS-Trust. The RPAD, treated as a physician style measure, was significantly correlated with patient survey findings, though the correlations were much smaller than those of the more proximal standardized patient surveys. We also found that RPAD scores were higher in the unambiguous (GERD) case (6.8, SD = 2.5) than in the ambiguous case (5.7, SD = 2.3) (t = 3.19, P = .002). We found no difference, however, between the RPAD scores of internists (6.4, SD = 2.4) and family physicians (6.2, SD = 2.5) (t = 0.59, P = .55).

Table 5.

Correlation of RPAD Score With Self-Report Measures

Regression of RPAD on Patient Surveys and Standardized Patient Surveys

We conducted multilevel regression analyses examining the regression of patient survey perception measures on the RPAD and MPCC components. For all 4 patient perception measures, the optimal models, based on the Akaike and Bayesian information criteria and the reduction in the physician variance component,27 were those including the RPAD and MPCC Components 1 and 2 but not Component 3 (Supplemental Table 2, available online only at http://www.annfammed.org/cgi/content/full/3/5/436/DC1).
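
A minimal sketch of this kind of model comparison is shown below. The variable names, file name, and use of statsmodels are assumptions for illustration, not the authors' actual analysis code.

```python
# Minimal sketch of the multilevel (random-intercept) regressions described
# above: a patient survey outcome regressed on the RPAD and MPCC components,
# with patients clustered within physicians, and candidate models compared by
# AIC. Column and file names (hccq, rpad, mpcc_c1, ..., physician_id) are
# illustrative assumptions, not the study's actual variables.
import pandas as pd
import statsmodels.formula.api as smf

def fit_random_intercept(data: pd.DataFrame, formula: str):
    """Fit a physician-level random-intercept model by ML (reml=False) so that
    models with different fixed effects have comparable likelihoods and AICs."""
    model = smf.mixedlm(formula, data, groups=data["physician_id"])
    return model.fit(reml=False)

def aic(result) -> float:
    """Akaike information criterion computed from the fitted log-likelihood."""
    return -2.0 * result.llf + 2.0 * result.params.size

# Hypothetical model comparison mirroring the text:
# data    = pd.read_csv("patient_survey.csv")
# full    = fit_random_intercept(data, "hccq ~ rpad + mpcc_c1 + mpcc_c2 + mpcc_c3")
# reduced = fit_random_intercept(data, "hccq ~ rpad + mpcc_c1 + mpcc_c2")
# print(aic(full), aic(reduced))   # the lower AIC indicates the better-supported model
```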

We conducted a similar series of regression analyses of the standardized patient survey measures on the RPAD and MPCC components. Again, the optimal models for each of the survey measures were those including the RPAD and MPCC Components 1 and 2 but not Component 3 (Supplemental Table 3, available online only at http://www.annfammed.org/cgi/content/full/3/5/436/DC1).

Consistent with the univariate Pearson correlations, the parameter estimates for the standardized patient survey measures were much larger than those for the patient measures in terms of standard deviation units on the scales examined. For the standardized patient measures, a 1 SD difference in participatory decision making was associated with a 30.3% SD difference in HCCQ and a 25.6% SD difference in satisfaction, whereas for the patient perception measures, a 1 SD difference in RPAD was associated with only a 4.8%–6.1% SD difference in measures of patient perceptions of autonomy support, physician knowledge of patient, trust, and satisfaction.
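
As a minimal illustration of the "percentage of an SD" interpretation used here (with placeholder numbers, not values from the study):

```python
# Express an unstandardized regression slope b in outcome-SD units per 1-SD
# change in the predictor. The numbers in the usage example are placeholders.
def effect_in_sd_units(b: float, sd_predictor: float, sd_outcome: float) -> float:
    return b * sd_predictor / sd_outcome

# e.g. a hypothetical slope of 0.9 HCCQ points per RPAD point, with
# SD(RPAD) = 2.4 and SD(HCCQ) = 7.1, corresponds to about 0.30 SD (~30% SD):
# print(effect_in_sd_units(0.9, 2.4, 7.1))   # ≈ 0.304
```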

DISCUSSION

We report exploratory data on a new quantitative objective measure of participatory decision making. The RPAD can be coded reliably, correlates with standardized-patient and real-patient measures of constructs related to participatory decision making, and takes only 50 minutes to code 20-minute office visits. Based on the Braddock et al scale and other literature on participatory decision making, the scale items have face validity.28,29 The scale items address behaviors that physicians use to encourage patient participation in decision making. A difference between our scale and the Braddock et al scale is that we set out to capture physician behaviors that might encourage patient participation, whereas the Braddock et al scale focuses on behaviors that should have occurred during informed decision making. Although we developed the measure in conjunction with our use of the MPCC, we think that the RPAD could be used independently of the MPCC.

The use of standardized patients is both a strength and a weakness of the study. We do not know how the RPAD might work with real patients; however, by using standardized patients, we focused on the physician as an agent encouraging participatory decision making rather than on measuring patient participation in decision making. Future studies should examine using RPAD with real patients.

Because there are no other reliable measures of participatory decision making, it was challenging to establish the construct validity of the scale. The closest we came to evidence of construct validity was the correlation of MPCC Finding Common Ground with the RPAD. It is difficult to determine whether the modest correlation reflects poor reliability of the MPCC Finding Common Ground subscale or whether the 2 scales share variance but measure somewhat different constructs.

Interestingly, the RPAD correlated with the MPCC Exploring the Disease and Illness Experience subscale. This finding suggests either that the RPAD is tapping into other communication processes that are important to patient-centered care or that exploring the disease and illness experience is a necessary precursor to participatory decision making. The RPAD includes items that measure physicians' active encouragement of patients to express their ideas and thoughts about the treatment plan. Thus, it includes domains that may not be captured by the MPCC Finding Common Ground subscale, which focuses more on patient question-asking but does not address whether the physician actively encouraged the patient's participation.

The RPAD contributed significantly to the model explaining variance in the degree to which the standardized patients believed that their autonomy was supported by physicians, lending convergent validity. Because no similar relationship was found for the MPCC Finding Common Ground subscale, the RPAD may capture the construct of patient-perceived participatory decision making at least as well as other available objective instruments. Not surprisingly, the RPAD did not account for as much variance in the patient surveys as it did in the standardized patient surveys. Patients' tendency to accommodate to their physician's communication style may have caused them to judge their physicians less critically than standardized patients did, thus muting the association between communication style and patient perceptions of their physicians. In addition, the standardized patients were reporting their perception of the same encounter that was coded with the RPAD, whereas the patients were reporting perceptions of their ongoing relationship with the physician. Finally, patients' perceptions were correlated with a measure of physician style assessed from physician interactions with standardized patients.

It is possible that correlations with real patients' perceptions of their physicians would have been stronger had the coded interactions been with those patients themselves. These preliminary findings suggest that the RPAD offers promise as a reliable, valid, and easy-to-code objective measure of participatory decision making.

Footnotes

  • Conflicts of interest: none reported

  • Funding support: This project was supported by grant No. R01HS10610 from the Agency for Healthcare Research and Quality (Dr. Epstein).

  • Received for publication September 7, 2004.
  • Revision received February 21, 2005.
  • Accepted for publication February 28, 2005.
  • © 2005 Annals of Family Medicine, Inc.

REFERENCES

  1. Kaplan SH, Greenfield S, Ware JE. Assessing the effects of physician-patient interactions on the outcomes of chronic disease. Med Care. 1989;27(3 Suppl):S110–S127.
  2. Greenfield S, Kaplan S, Ware JE Jr. Expanding patient involvement in care. Effects on patient outcomes. Ann Intern Med. 1985;102:520–528.
  3. Braddock CH III, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: time to get back to basics. JAMA. 1999;282:2313–2320.
  4. Guadagnoli E, Ward P. Patient participation in decision-making. Soc Sci Med. 1998;47:329–339.
  5. Griffin SJ, Kinmonth AL, Veltman MWM, Gillard S, Grant J, Stewart M. Effect on health-related outcomes of interventions to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam Med. 2004;2:595–608.
  6. Post DM, Cegala DJ, Miser WF. The other half of the whole: teaching patients to communicate with physicians. Fam Med. 2002;34:344–352.
  7. Williams GC, McGregor H, Zeldman A, Freedman ZR, Deci EL, Elder D. Promoting glycemic control through diabetes self-management: evaluating a patient activation intervention. Patient Educ Couns. 2005;56:28–34.
  8. Szasz TS, Hollender MH. The basic models of the doctor-patient relationship. Arch Intern Med. 1956;97:585–592.
  9. Deber RB. Physicians in health care management: 8. The patient-physician partnership: decision making, problem solving and the desire to participate. CMAJ. 1994;151:423–427.
  10. McKinstry B. Paternalism and the doctor-patient relationship in general practice. Br J Gen Pract. 1992;42:340–342.
  11. Neighbour R. Paternalism or autonomy? Practitioner. 1992;236:860–864.
  12. Stewart M, Brown JB, Weston WW, McWhinney IR, McWilliam CL, Freeman TR. Patient-Centered Medicine: Transforming the Clinical Method. Thousand Oaks, Calif: Sage Publications; 1995.
  13. Emanuel EJ, Emanuel LL. Four models of the physician-patient relationship. JAMA. 1992;267:2221–2226.
  14. Lazare A, Eisenthal S, Wasserman L. The customer approach to patienthood. Attending to patient requests in a walk-in clinic. Arch Gen Psychiatry. 1975;32:553–558.
  15. Quill TE. Partnerships in patient care: a contractual approach. Ann Intern Med. 1983;98:228–234.
  16. Deber RB, Kraetschmer N, Irvine J. What role do patients wish to play in treatment decision making? Arch Intern Med. 1996;156:1414–1420.
  17. Brown JB, Stewart M, Tessier S. Assessing Communication Between Patients and Doctors: A Manual for Scoring Patient-Centred Communication. Working Paper Series #95-2. London, Ontario: Centre for Studies in Family Medicine and Thames Valley Family Practice Research Unit; 1995.
  18. Roter D, Larson S. The Roter interaction analysis system (RIAS): utility and flexibility for analysis of medical interactions. Patient Educ Couns. 2002;46:243–251.
  19. Roter DL. Patient participation in the patient-provider interaction: the effects of patient question asking on the quality of interaction, satisfaction and compliance. Health Educ Monogr. 1977;5:281–315.
  20. Braddock CH, Fihn SD, Levinson W, Jonsen AR, Pearlman RA. How doctors and patients discuss routine clinical decisions. Informed decision making in the outpatient setting. J Gen Intern Med. 1997;12:339–345.
  21. Williams GC, Freedman ZR, Deci EL. Supporting autonomy to motivate patients with diabetes for glucose control. Diabetes Care. 1998;21:1644–1651.
  22. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, Tarlov AR. Linking primary care performance to outcomes of care. J Fam Pract. 1998;47:213–220.
  23. Safran DG, Kosinski M, Tarlov AR, et al. The Primary Care Assessment Survey: tests of data quality and measurement performance. Med Care. 1998;36:728–739.
  24. Ware JE Jr, Kosinski M, Keller SD. A 12-item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity. Med Care. 1996;34:220–233.
  25. Derogatis LR, Lipman RS, Covi L. SCL-90: an outpatient psychiatric rating scale--preliminary report. Psychopharmacol Bull. 1973;9:13–28.
  26. Safran DG, Montgomery JE, Chang H, Murphy J, Rogers WH. Switching doctors: predictors of voluntary disenrollment from a primary physician's practice. J Fam Pract. 2001;50:130–136.
  27. Snijders TAB, Bosker RJ. Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. London, UK: Sage Publications; 1999.
  28. Braddock CH, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: time to get back to basics. JAMA. 1999;282:2313–2320.
  29. Kaplan SH, Greenfield S, Gandek B, Rogers WH, Ware JE Jr. Characteristics of physicians with participatory decision-making styles. Ann Intern Med. 1996;124:497–504.