Editorials

Medicine based evidence, a prerequisite for evidence based medicine

BMJ 1997; 315 doi: https://doi.org/10.1136/bmj.315.7116.1109 (Published 01 November 1997) Cite this as: BMJ 1997;315:1109

Future research methods must find ways of accommodating clinical reality, not ignoring it

  1. Andre Knottnerus (Andre.Knottnerus@hagunimaas.nl)a,
  2. Geert Jan Dinant, Associate professora
  a Department of General Practice, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands

    Seeking an evidence base for medicine is as old as medicine itself, but in the past decade the concept of evidence based medicine has done a good job in focusing explicit attention on the application of evidence from valid clinical research to clinical practice.1 2 Although current clinical practice is often evidence based,3 4 there is still much to be gained. Important new evidence from research often takes a long time to be implemented in daily care, while established practices persist even if they have been proved to be ineffective or harmful.5 In the meantime, many clinicians struggle to apply the results of studies that do not seem that relevant to their daily practice.

    Evidence based medicine has been defined as the “conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.”2 What can we learn from the limitations of current best evidence for the way that we design future studies?

    We face the problem that criteria for internal and external validity (that is, clinical applicability) may conflict. Clinical studies are usually performed on a homogeneous study population and exclude clinically complex cases for the sake of internal validity. Such selection may not, however, match the type of patients for whom the studied intervention will be considered. Medical practice is often confronted with patients presenting several problems.6 7 Older patients and women are under-represented in clinical trials,8 9 and patients with comorbidity, a common phenomenon at older ages,10 are generally excluded. Evidence from patients selected by referral cannot easily be generalised to patients seen in primary care with less severe or early stage clinical pictures.6 And some important needs for evidence are almost ignored. For instance, while drug trials usually provide evidence about starting drug treatment, doctors are increasingly confronted by patients taking multiple long term medications but have no proper data on evidence based drug cessation.

    Studies on the effectiveness of clinical care may also not easily attain internal validity. An example is the evaluation of interventions that cannot be blinded, such as many non-pharmacological procedures. Then, to avoid methodological calamities such as contamination of trial arms, choices must be made between not evaluating at all and looking for alternative design options such as pre-randomisation.11 In studying the effects of complex clinical guidelines the problems are even greater. In addition, the evaluation of diagnostic procedures struggles with difficulties often not dealt with in methodological textbooks. For instance, in validating diagnostic information on low back pain, chronic fatigue syndrome, and benign prostatic hyperplasia, unequivocal “gold standard” procedures, or even concepts, do not exist. And for symptoms and signs such as chronic abdominal pain or a raised erythrocyte sedimentation rate12 invasive gold standard procedures cannot be routinely carried out. Current best evidence may then come from “delayed type cross sectional studies” that harvest the reference standard information from a thorough clinical follow up. Such solutions may not be ideal, but they are the best achievable, closely connected with the reality of clinical care.
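    To make the “delayed type” design concrete, the accuracy of an index test can be calculated against a reference standard that is established only after thorough clinical follow up. The Python sketch below is purely illustrative: the function name and all counts are ours, invented for the example, and are not drawn from any of the studies cited.

```python
# Minimal sketch: accuracy of an index test when the reference standard is the
# final diagnosis reached after thorough clinical follow up ("delayed type"
# cross sectional design). All counts are hypothetical.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Basic accuracy measures from a 2x2 table (index test v follow up diagnosis)."""
    sensitivity = tp / (tp + fn)   # positives detected among those with the condition
    specificity = tn / (tn + fp)   # negatives among those without the condition
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical example: a raised erythrocyte sedimentation rate as index test,
# serious underlying disease confirmed (or excluded) at follow up as reference.
sens, spec, ppv, npv = diagnostic_accuracy(tp=40, fp=60, fn=10, tn=390)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```

    The arithmetic is the same whatever the index test; what the delayed type design changes is only where the reference column of the table comes from.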

    Thus, in seeking internally valid evidence that is externally valid for clinical practice, we need “medicine based” studies that include, not ignore, clinical reality and its inherent difficulties. Since no individual study can include full clinical reality, meta-analyses of diagnostic and therapeutic studies that include the relevant subgroups (such as elderly patients13 or those with comorbidity) are indispensable. To support individual decision making, these meta-analyses should evaluate effect modification between subgroups rather than seek overall effect measures adjusted for subgroup differences. In national and international collaborations such evidence can be collected prospectively, but many methodological problems remain to be resolved, such as cultural differences in symptom perception and in therapeutic traditions.
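    The case for evaluating effect modification rather than a single adjusted overall effect can be illustrated with a small calculation. The Python sketch below pools hypothetical trial results within two subgroups by inverse variance weighting and then tests whether the pooled effects differ; the subgroup labels, effect sizes, and variances are invented for illustration only.

```python
import math

def pool(log_ors, variances):
    """Fixed effect (inverse variance) pooled log odds ratio and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Hypothetical trial results as (log odds ratio, variance) pairs per subgroup.
younger = [(-0.45, 0.04), (-0.30, 0.06), (-0.50, 0.05)]
older_with_comorbidity = [(-0.10, 0.08), (0.05, 0.10)]

y_est, y_var = pool(*zip(*younger))
o_est, o_var = pool(*zip(*older_with_comorbidity))

# Simple z test for effect modification: do the subgroup effects differ?
z = (y_est - o_est) / math.sqrt(y_var + o_var)
p = math.erfc(abs(z) / math.sqrt(2))  # two sided p value, normal approximation

print(f"younger: OR={math.exp(y_est):.2f}  older with comorbidity: OR={math.exp(o_est):.2f}")
print(f"effect modification: z={z:.2f}, p={p:.3f}")
```

    Reporting the two subgroup estimates and the interaction test, rather than one adjusted average, is what lets a clinician match the evidence to the patient in front of them.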

    In reviewing clinical evidence we must be reluctant to adopt overly detailed criteria for good and bad science and to freeze criteria for validity. Study methods themselves need to evolve. The randomised controlled trial was developed over half a century and refined in the slipstream of important clinical questions, rather than the reverse. At the same time, much knowledge gained before randomised controlled trials came into being survived into the era of the randomised controlled trial. Given the limited coverage of clinical practice by questions susceptible to randomised controlled trials, quasi-experimental methods that respect the principle of comparability may grow more important, for example in comparing procedures that are more or less allocated by chance in daily practice, with negligible confounding by indication. Power requirements for individual studies may become less critical in an era of prospective accumulation of evidence. Databases and practice computer networks will provide a continuum from evidence gathered in individual practices to collaborative sampling frames for clinical research.14 In promoting such processes the clinical community can capitalise on the natural interaction between practice (with learning from informal evidence) and clinical research designs (in order to learn formally) (see box).

    Relation between clinical practice and clinical research designs

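    The point about power can also be made numerically. The sketch below uses the conventional normal approximation for two proportions to estimate how many patients a single trial would need per arm, and how many small practices contributing prospectively could jointly supply them; the event rates and the practice size are hypothetical, chosen only to illustrate the order of magnitude.

```python
import math

def n_per_arm(p1, p2):
    """Approximate patients needed per arm: two sided alpha 0.05, 80% power."""
    z_alpha, z_beta = 1.96, 0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

needed = n_per_arm(0.30, 0.20)   # e.g. lowering an event rate from 30% to 20%
per_practice = 25                # eligible patients per arm per practice (hypothetical)
practices = math.ceil(needed / per_practice)

print(f"patients needed per arm: {needed}")
print(f"practices needed to contribute jointly: {practices}")
```

    No single practice approaches such numbers, but a network of a dozen or so practices pooling data prospectively does, which is why the power of any individual study becomes less decisive.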

    Finally, in using strict criteria in reviewing manuscripts for publication, we should worry about risk avoidance by clinical researchers. They might focus their energies on topics where the methodological criteria of reviewers and editors can most easily be met, rather than on real life clinical problems that pose substantial methodological difficulties. Such “criteria bias” must be prevented, since medicine based evidence is a prerequisite for evidence based medicine.

    References

    1.
    2.
    3.
    4.
    5.
    6.
    7.
    8.
    9.
    10.
    11.
    12.
    13.
    14.