Table 4

Model Agreement and Specialty Match Using 2016 Data

| Specialty | Count | Models Predicting the Same Specialty, % | Specialty Match, %^a | Specialty Mismatch, %^b |
|---|---|---|---|---|
| Allergy/immunology | 1,625 | 97.1 | 89.6 | 7.5 |
| Anesthesiology | 16,110 | 97.9 | 94.3 | 3.6 |
| Cardiology | 11,170 | 96.9 | 90.4 | 6.5 |
| Dermatology | 5,498 | 98.8 | 96.7 | 2.1 |
| Emergency medicine | 18,663 | 98.3 | 87.0 | 11.3 |
| Endocrinology | 2,497 | 95.8 | 83.3 | 12.5 |
| Gastroenterology | 5,960 | 97.2 | 92.4 | 4.8 |
| Hematology-oncology | 5,572 | 94.9 | 84.9 | 10.0 |
| Infectious disease | 2,328 | 91.1 | 61.2 | 29.9 |
| Nephrology | 3,691 | 96.7 | 86.9 | 9.8 |
| Neurology | 6,217 | 94.5 | 83.1 | 11.4 |
| Neurosurgery | 2,008 | 80.6 | 48.3 | 32.3 |
| Obstetrics and gynecology | 11,505 | 96.7 | 90.6 | 6.1 |
| Ophthalmology | 8,755 | 99.1 | 97.9 | 1.2 |
| Orthopedic surgery | 11,095 | 94.6 | 86.1 | 8.5 |
| Otolaryngology | 4,262 | 96.8 | 89.5 | 7.3 |
| Pathology | 4,831 | 99.3 | 97.8 | 1.5 |
| Physical medicine and rehabilitation | 3,438 | 83.2 | 41.6 | 41.6 |
| Plastic surgery | 1,795 | 80.7 | 42.2 | 38.5 |
| Primary care | 101,498 | 98.3 | 92.6 | 5.7 |
| Psychiatry | 14,974 | 97.9 | 92.1 | 5.8 |
| Pulmonology | 5,395 | 96.1 | 83.2 | 12.9 |
| Radiation oncology | 1,903 | 95.9 | 91.0 | 4.9 |
| Radiology | 11,816 | 99.1 | 96.4 | 2.7 |
| Rheumatology | 2,030 | 97.6 | 91.7 | 5.9 |
| Surgery | 13,278 | 91.7 | 77.7 | 14.0 |
| Urology | 4,579 | 97.3 | 94.5 | 2.8 |
| Overall | 282,493 | 97.0^c | 89.4^c | 7.6^c |
  • For this analysis, we applied the 2014, 2015, and 2016 combined random forests to the 2016 test data, for a total of 3 predictions based on prescribing and procedure data from a single year. Model agreement is defined as all 3 models predicting the same specialty; a computational sketch of these definitions follows these notes.

  • ^a All 3 models predicted the self-reported specialty.

  • ^b All 3 models predicted a specialty that differed from the self-reported category.

  • ^c Mean across all specialties, weighted by the number in each specialty.
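As a reading aid only, the sketch below shows how the 3 percentage columns and the count-weighted overall row could be computed from per-physician predictions. It is not code from the study; the DataFrame layout and the column names (specialty for the self-reported label; pred_2014, pred_2015, and pred_2016 for the 3 model outputs) are assumptions for illustration.

```python
import pandas as pd

# Hypothetical per-physician predictions from the 3 yearly models applied to
# the 2016 test data; values and column names are illustrative only.
df = pd.DataFrame({
    "specialty": ["Cardiology", "Cardiology", "Urology"],   # self-reported
    "pred_2014": ["Cardiology", "Neurology",  "Urology"],
    "pred_2015": ["Cardiology", "Cardiology", "Urology"],
    "pred_2016": ["Cardiology", "Cardiology", "Urology"],
})

preds = df[["pred_2014", "pred_2015", "pred_2016"]]

# Model agreement: all 3 models predict the same specialty.
agree = preds.nunique(axis=1).eq(1)

# Specialty match: all 3 models predict the self-reported specialty.
match = agree & preds["pred_2014"].eq(df["specialty"])

# Specialty mismatch: all 3 models agree on a specialty other than the self-reported one.
mismatch = agree & ~match

# Per-specialty percentages, as in the 3 table columns.
by_spec = df.assign(agree=agree, match=match, mismatch=mismatch).groupby("specialty")
table = by_spec[["agree", "match", "mismatch"]].mean().mul(100)
counts = by_spec.size()

# "Overall" row: mean across specialties weighted by the number in each specialty.
overall = table.mul(counts, axis=0).sum() / counts.sum()

print(table.round(1))
print(overall.round(1))
```

Note that a mean across specialties weighted by the count in each specialty is arithmetically the same as the simple mean taken over all individual physicians in the test data.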