A review of five cardiology journals found that observer variability of measured variables was infrequently reported

J Clin Epidemiol. 2008 Apr;61(4):394-401. doi: 10.1016/j.jclinepi.2007.05.010. Epub 2007 Oct 15.

Abstract

Objective: To investigate the reporting of the analysis of interobserver and intraobserver variability within clinical research studies from five high-impact cardiology journals published in 2005.

Study design and setting: A cross-sectional study using a combined electronic and manual search identified 180 of 511 eligible articles that reported the assessment of observer variability. Sixty of these were randomly selected for detailed review.

Results: The proportions of the 60 studies reporting interobserver variability, intraobserver variability, or both were 27%, 17%, and 53%, respectively. The reported methodological design of interobserver and intraobserver analyses included a specific protocol in 42% and 33%, identified observers as independent in 31% and 17%, as blinded in 50% and 31%, and identified a prior statistical plan in only 33% and 36%, respectively. Pearson correlation was the most frequently reported measure for continuous variables; the methods of Bland and Altman were reported in 15% of interobserver and 14% of intraobserver analyses. For categorical variables, a kappa statistic was reported in 82% and 80%, respectively.
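
For context, the kappa statistic that most of the reviewed studies reported for categorical variables corrects observed agreement for agreement expected by chance. The sketch below is not from the paper; it is a minimal illustration assuming two hypothetical observers classifying the same ten cases, with the contingency table built directly from their ratings.

```python
import numpy as np

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two observers.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from the marginal totals.
    """
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    # Contingency table of paired ratings
    table = np.array([[np.sum((a == i) & (b == j)) for j in labels]
                      for i in labels], dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n                              # observed agreement
    p_e = (table.sum(axis=1) @ table.sum(axis=0)) / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: two observers classifying 10 scans
# as normal (0) or abnormal (1).
obs1 = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1]
obs2 = [0, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(f"kappa = {cohen_kappa(obs1, obs2):.2f}")  # 0.60: raw agreement 0.8, chance 0.5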

Conclusion: Reliability assessment is hampered by unclear and incomplete reporting of interobserver and intraobserver analyses. For continuous variables, the methods most frequently reported were inappropriate for assessing agreement.
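
The reasoning behind this conclusion is that Pearson correlation measures linear association, not agreement: two observers whose readings differ by a constant offset can still correlate almost perfectly. The sketch below is not from the paper; it is a minimal illustration with simulated paired measurements (hypothetical values and observer names) showing how Bland-Altman limits of agreement expose a systematic bias that the correlation coefficient hides.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical paired measurements: observer B reads consistently
# about 10 units higher than observer A on the same 50 scans.
obs_a = rng.normal(loc=100, scale=15, size=50)
obs_b = obs_a + 10 + rng.normal(scale=2, size=50)

# Pearson r stays near 1 despite the constant 10-unit bias.
r, _ = pearsonr(obs_a, obs_b)

# Bland-Altman: the bias (mean difference) and 95% limits of
# agreement make the systematic disagreement visible.
diff = obs_b - obs_a
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)

print(f"Pearson r = {r:.3f}")                    # ~0.99
print(f"Bland-Altman bias = {bias:.1f}")         # ~10
print(f"95% limits of agreement = {bias - half_width:.1f} "
      f"to {bias + half_width:.1f}")
```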

MeSH terms

  • Cardiology / statistics & numerical data*
  • Clinical Trials as Topic / statistics & numerical data
  • Cross-Sectional Studies
  • Humans
  • Information Dissemination / methods*
  • Observer Variation
  • Peer Review, Research*
  • Publishing
  • Research Design