Information In Practice

Questioning behaviour in general practice: a pragmatic study

BMJ 1997; 315 doi: https://doi.org/10.1136/bmj.315.7121.1512 (Published 06 December 1997) Cite this as: BMJ 1997;315:1512
A Richard Barrie, continuing medical education tutor for general practice (rbarrie{at}pontilen.demon.co.uk)a
Alison M Ward, research fellowb
a Pontilen, Rhewl, Ruthin, Denbighshire LL15 1UL
b Department of General Practice, University of Western Australia, Perth, Western Australia
Correspondence to: Dr Barrie
  • Accepted 23 September 1997

Abstract

Objective: To study the extent to which general practitioners' questioning behaviour in routine practice is likely to encourage the adoption of evidence based medicine.

Design: Self recording of questions by doctors during consultations immediately followed by semistructured interview.

Setting: Urban Australian general practice.

Subjects: Random sample of 27 general practitioners followed over a half day of consultations.

Main outcome measures: Rate of recording of clinical questions about patients' care which doctors would like answered; frequency with which doctors found answers to their questions.

Results: Doctors asked a total of 85 clinical questions, at a rate of 2.4 for every 10 patients seen. They found satisfactory answers to 67 (79%) of these questions. Doctors who worked in small practices (of one or two doctors) had a significantly lower rate of questioning than those in larger practices (1.6 v 3.0 questions per 10 patients seen, P=0.049). No other factors were significantly related to rate of questioning.

Conclusions: These results do not support the view that doctors routinely generate a large number of unanswered clinical questions. It may be necessary to promote questioning behaviour in routine practice if evidence based medicine and other forms of self directed learning are to be successfully introduced.

Key messages

  • Doctors' ability to formulate questions about their need for medical information in routine practice has been little studied and may be poorly developed

  • We found that general practitioners asked clinical questions at a rate of 2.4 questions every 10 consultations and that doctors in small practices (one or two doctors) asked fewer questions

  • The doctors found answers to most of their questions using easily available sources of information

  • If evidence based medicine and other forms of self directed learning are to make substantial contributions to health care, factors which affect doctors' questioning behaviour need to be identified and addressed

Introduction

Evidence based medicine is a style of practice in which doctors manage problems by reference to valid and relevant information. It consists of five steps: formulating answerable questions, tracking down the best evidence to answer them, critically appraising the evidence, applying it in practice, and evaluating performance.1 Much attention has been paid to the last four of these steps, but the first step—asking appropriate questions—has received less attention.

Current understanding of the questioning behaviour of doctors comes from research into information needs.2 3 4 5 6 7 8 9 Most studies of information needs have identified a substantial excess of unanswered questions.2 3 8 However, information needs differ importantly from questioning behaviour. An information need may be recognised or unrecognised and exists independently of the doctor's behaviour: it is a normative concept. Questioning behaviour is an empirical one: it is the process by which a doctor recognises the need for information. Studies focusing on information needs may therefore give an inaccurate picture of questioning behaviour.

The main aim of our study was to test the hypothesis that in routine practice doctors generate a large number of questions, many of which go unanswered. We also examined the possible influence of demographic factors on the questioning behaviour of general practitioners.

Methods

Subjects

The study population was a random sample of 35 of the 128 general practitioners in the Central Coastal Division of General Practice in Perth, Western Australia. We excluded general practitioners with teaching or research contracts at the University of Western Australia because we thought that their behaviour might differ from that of non-academic doctors. Of the 35 doctors selected, two had left the area, leaving an available population of 33, and 27 (82%) of these participated fully in the study.

Study design

The doctors were initially contacted by telephone. Those who agreed to participate were sent a letter asking them to “make a note through one clinical session of any questions which arise regarding the clinical care of each patient, to which you would like an answer” and a form on which to record their questions. The form was piloted with six members of the University Department of General Practice, which resulted in minor changes in wording. The doctors were reminded of their agreement to participate just before the designated session.

After the clinical session the doctors took part in a semistructured interview with ARB. They were asked which questions they had been able to answer during the session and what sources they had used. Questions that remained unanswered at the end of the session were followed up by telephone a week later to see whether answers had been obtained.

The doctors were asked which sources of information they most commonly used in routine practice, and were provided with a prompt list of 10 possible sources, although responses were not confined to these. We later condensed these 10 categories to six when it became apparent that the doctors used some of the categories interchangeably. Doctors twice cited the patient as a source of information, and four times they cited themselves, answering their own questions by recall of information or a trial of management. We included these answers in the “Doctor or patient” category.

We obtained demographic information to investigate whether questioning behaviour varied with doctors' characteristics, size of practice, or workload.

We classified all questions asked into four categories: clinical, organisational, patient data, and ethical dilemmas—corresponding closely to Gorman's classification of medical information.9 Organisational questions concerned local information such as the address of a specialist. Dilemmas included, for example, how best to respond to a request for medical examination as part of an employee's apparently dishonest claim for compensation for occupational illness. We excluded the last three categories—organisational, patient data, and ethical dilemmas—from further analysis because they are unlikely to act as a starting point for practising evidence based medicine.

Statistical analysis

To examine whether certain characteristics were associated with higher or lower rates of questioning, we divided the subjects into a number of dichotomous groups (see Table 1). We analysed the data using SPSS for Windows, version 6.1.3, and used Student's t tests to examine differences.
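
For illustration, the comparison described here can be reproduced outside SPSS. The sketch below uses Python with SciPy rather than the study's original software, and the per-doctor rates shown are hypothetical values, not the study's data.

```python
# Minimal sketch of the dichotomous comparison described above.
# Uses Python/SciPy in place of SPSS; the rates are hypothetical
# per-doctor questioning rates (questions per 10 patients seen).
from scipy import stats

small_practice = [1.2, 0.0, 2.5, 1.8, 1.4, 2.0, 1.5]  # practices of 1-2 doctors (hypothetical)
large_practice = [3.1, 2.8, 4.0, 2.2, 3.5, 2.9, 3.3]  # larger practices (hypothetical)

# Student's t test for two independent groups, as in the study
t_stat, p_value = stats.ttest_ind(small_practice, large_practice)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```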

Table 1

Characteristics of 27 general practitioners, analysed with respect to their average rates of recording of clinical questions during patient consultations (n=85)

Results

Table 1 shows the participating doctors' characteristics. Their average time since qualification was 19.2 years (range 12–38 years); the average number of doctors in their practices was 3.0; 44% of them were women; 44% were part time; and 63% were graduates of the University of Western Australia, the rest being from other schools in Australia (4), the United Kingdom (4), the Republic of Ireland (1), and Sri Lanka (1).

During the study, the doctors saw 376 patients over 95.4 hours of consulting, at an average rate of 16.8 minutes per patient. A total of 119 questions were recorded: 85 were clinical, 28 were organisational, 4 concerned patient data, and 2 were ethical dilemmas. This gives an overall rate of 3.2 questions per 10 consultations. The 85 clinical questions were asked at an average rate of 2.4 every 10 consultations, and 52 (61%) were answered during the consultation. Of the 33 that were not, 15 (18%) had been satisfactorily answered at follow up a week later, leaving 18 (21%) unanswered. Of these unanswered questions, 3 had been pursued unsuccessfully and 15 had not been pursued at all. Thus 70 (82%) of the clinical questions were pursued and 67 (79%) were answered. Doctors in small practices (one or two doctors) asked significantly fewer questions per consultation than those in larger practices (P<0.05); no other significant differences were found (Table 1).
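
The answer percentages above follow directly from these counts; as a quick arithmetic check, here is a small worked sketch (Python, using only the figures in this paragraph):

```python
# Quick arithmetic check of the answer percentages reported above.
clinical = 85        # clinical questions recorded
in_session = 52      # answered during the consulting session
at_follow_up = 15    # answered by one week later
not_pursued = 15     # never pursued

answered = in_session + at_follow_up   # 67 questions
pursued = clinical - not_pursued       # 70 questions

print(f"Answered in session: {100 * in_session / clinical:.0f}%")             # 61%
print(f"Answered overall:    {100 * answered / clinical:.0f}%")               # 79%
print(f"Pursued:             {100 * pursued / clinical:.0f}%")                # 82%
print(f"Unanswered:          {100 * (clinical - answered) / clinical:.0f}%")  # 21%
```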

Table 2 shows the information sources the doctors used to answer their questions. “Desk top” references and human sources provided the answers to 90% of the questions that were answered. Textbooks and journals were little used. The three information sources that the doctors said they used most often were similar to the sources they actually used in the study, except that they said they used general practitioner colleagues more than they actually did. In the interviews after the consulting sessions it became evident that most of the doctors did not feel they had a major need for new information sources: if information was really needed, it was obtainable. When asked what information sources they would like but did not currently have, six doctors (22%) said they wanted none. Of the 11 who mentioned a computer, most did not seem to have a precise idea of how it might be helpful.

Table 2

Sources of information used by 27 general practitioners to pursue answers to clinical questions during and after patients' consultations, and sources stated as most often used (15 of the 85 questions raised were not pursued by the doctors)

Discussion

Questioning behaviour

In this study general practitioners recorded clinical questions at a rate of 2.4 questions every 10 consultations. This is substantially lower than the rates of questioning reported in previous studies, most of which have been concerned primarily with doctors' information needs.2 3 5 6 In our study the doctors recorded questions, as they arose, on a sheet of paper. There was no face to face contact with the observer until after the questions had been identified. Our intention was to influence routine behaviour as little as possible.

Previous studies have used various methods to identify questions, often involving the interrogation of the doctor by the observer. Thus Covell and Gorman interviewed doctors after each consultation,2 3 which may have acted as a stimulus to question forming. Timpka reviewed videotaped consultations with the doctor concerned,5 and Forsythe used ethnographic observation of ward rounds in a teaching hospital to record all verbal and non-verbal questioning.6 The rates of questions per 10 patients seen in these studies were 6.6,2 5.7,3 18.5,5 and 57.7.6 Ely's study of family doctors, which had a far lower rate of questioning (0.7 per 10 patients seen), counted a question only when the doctor was observed seeking information.4

Questioning behaviour probably varies with the clinical setting, but the rate of questioning identified in any one study may also vary substantially depending on the method used to identify questions. Smith recently reviewed this literature and concluded that “when doctors see patients they usually generate at least one question.”10 While it is clear that they could, and perhaps should, be generating questions at this rate, our study suggests that they do not actually do so.

If asking questions is a part of good practice, it is important to identify factors that may enhance or inhibit questioning behaviour. We looked at a number of factors. Working in a small practice was associated with a significantly lower rate of questioning than practising in a larger one. This is consistent with the findings of Ely's study of rural doctors4 and possibly supports concerns about doctors practising in isolation.

Answering of questions

The second part of our hypothesis concerned the success with which doctors are able to answer their questions. In this study the doctors had obtained an answer to most (61%) of their clinical questions by the end of the consulting session. A further 18% were answered within a week of the consultation. In all, 79% were answered by the end of the study. This is similar to Ely's finding of 88%.4 By contrast, Covell found that only 30% of questions were answered at the time of the consultation, although the doctors expected to find an answer to more than half of their questions later.2 Gorman found that only 30% of questions were pursued at all.3 Smith concluded that “most of the questions generated in consultations go unanswered.”10 The excess of unanswered questions in other studies may be partly because the studies themselves stimulated questions that doctors were consequently less motivated to answer.2

Study limitations

Our study was limited by the small number of doctors who participated (27), and further research is needed to confirm these findings. We have not solved the problem of how to identify questions without either stimulating or missing them.9 Relying on the doctors to write down questions unaided may have captured fewer questions than a face to face approach would have. The method is also likely to function as an educational intervention, making repeat testing problematic. In this study we did not attempt repeat testing, but, were the method to be used to evaluate an intervention, a good control group would be essential.

Use of information sources

Australian cities have a relatively high provision of both general practitioners and specialists compared with Britain. General practitioners do not have fixed lists of patients and so have a limited “gatekeeping” role. These factors may partly account for the frequent use of specialists for information. The sources of information used to answer questions follow the pattern described in other studies.2 3 4 7 8 11 12 13 “Desk top” references and human sources were used far more than textbooks, journals, and electronic sources. This is consistent with the belief that accessibility is a major factor in determining doctors' choice of information source.12 13 14 It may also reflect doctors' need for what Forsythe described as “informal information.”6 This concerns “how to apply the rules and about exceptions to them” and requires judgment, which human sources are most likely to offer.

Evidence based medicine advocates formal appraisal of the validity and relevance of evidence.1 Adopting evidence based medicine therefore implies a change towards sources of information whose validity can be formally assessed. Doctors may be reluctant to make this change if they value the human judgment and accessibility of their current sources more highly than the more transparent validity of alternatives.13 14

Factors that might motivate doctors to change the sources of information they use include an excess of unanswered questions and a dissatisfaction with their current sources. We found little evidence of either. Rather, the picture this study presents is of a stable system in which doctors find answers to most of their clinical questions and seem reasonably happy with the sources of information they currently use. Evidence based medicine may therefore make slow progress until doctors become more questioning in their routine practice. The skill of asking the right questions deserves as much attention as the skills of information searching, critical appraisal, and audit have received.

Acknowledgments

We thank the general practitioners of the Perth Central Coastal Division for their willingness to participate in this study. We also thank Max Kamien, Jim Dickinson, and Frank Mansfield of the Department of General Practice, University of Western Australia, and Ian Russell, Clare Wilkinson, and other members of the North Wales General Practice Research Club for their help and support.

Funding: Extended study leave payments from the NHS; Glaxo Medical Fellowship; Travel Grant from Department of Postgraduate Studies, University of Wales College of Medicine; Lilly Pharmaceuticals; and RCGP Travel Scholarship.

Conflict of interest: None.

References

  1.
  2.
  3.
  4.
  5.
  6.
  7.
  8.
  9.
  10.
  11.
  12.
  13.
  14.