Research Article | Original Research

Triaging Patients With Artificial Intelligence for Respiratory Symptoms in Primary Care to Improve Patient Outcomes: A Retrospective Diagnostic Accuracy Study

Steindór Ellertsson, Hlynur D. Hlynsson, Hrafn Loftsson and Emil L. Sigurðsson
The Annals of Family Medicine May 2023, 21 (3) 240-248; DOI: https://doi.org/10.1370/afm.2970
Steindór Ellertsson, MD (Primary Health Care of the Capital Area, Iceland)
Hlynur D. Hlynsson, PhD (Department of Computer Science, Reykjavik University, Reykjavík, Iceland)
Hrafn Loftsson, PhD (Department of Computer Science, Reykjavik University, Reykjavík, Iceland)
Emil L. Sigurðsson, MD, PhD (Primary Health Care of the Capital Area, Iceland; Development Center for Primary Health Care in Iceland, Reykjavík, Iceland; Department of Family Medicine, University of Iceland, Reykjavík, Iceland)
Corresponding author: emilsig@hi.is

Abstract

PURPOSE Respiratory symptoms are the most common presenting complaint in primary care. Often these symptoms are self-resolving, but they can indicate a severe illness. With increasing physician workload and health care costs, triaging patients before in-person consultations would be helpful, possibly offering low-risk patients other means of communication. The objective of this study was to train a machine learning model to triage patients with respiratory symptoms before visiting a primary care clinic and examine patient outcomes in the context of the triage.

METHODS We trained a machine learning model using clinical features that are available only before a medical visit. Clinical text notes were extracted from 1,500 records for patients who received 1 of 7 International Classification of Diseases 10th Revision codes (J00, J10, J11, J15, J20, J44, J45). All primary care clinics in the Reykjavík area of Iceland were included. The model scored patients in 2 extrinsic data sets and divided them into 10 risk groups (higher values indicating greater risk). We analyzed selected outcomes in each group.

RESULTS Compared with risk groups 6 through 10, risk groups 1 through 5 consisted of younger patients and had lower C-reactive protein values, lower re-evaluation rates in primary and emergency care, lower antibiotic prescription rates, fewer chest x-ray (CXR) referrals, and fewer CXRs with signs of pneumonia. Groups 1 through 5 had no CXRs with signs of pneumonia or diagnosis of pneumonia by a physician.

CONCLUSIONS The model triaged patients in line with expected outcomes. The model can reduce the number of CXR referrals by eliminating them in risk groups 1 through 5, thus decreasing clinically insignificant incidentaloma findings without input from clinicians.

Key words:
  • artificial intelligence
  • clinical decision support systems
  • primary care
  • triage
  • respiratory symptoms

INTRODUCTION

Health care costs have steadily increased in recent decades.1 General practitioners face a greater number of patients,2,3 with more comorbidities4 and demands,5 and diagnostic test orders have increased substantially.6 Around 20% of patient visits to general practitioners stem from self-resolving symptoms,7 and up to 72% of patient visits are due to acute respiratory symptoms.8 Overuse and misuse of diagnostic tests is a well-known problem in primary care9,10 that increases incidental findings.11,12 The same applies to antibiotic prescribing,13 especially for respiratory tract infections,14 leading to increased bacterial resistance.15 The causes for clinical resource misapplication are multifactorial, but patient demands, human biases, and time pressure play substantial roles.16,17

Machine learning models (MLMs) are thought to be similar to or better than physicians in multiple clinical tasks.18-27 Patient triage using MLMs is reportedly comparable to triage by physicians.28,29 Research in tertiary settings showed MLMs to be superior to physicians at estimating patient risk when ordering diagnostic tests.30 Clinical guidelines and scoring systems can standardize diagnosis and treatment and improve the quality of care while reducing costs,31-34 but they remain underused.35-37 Guideline applicability, usability, and time scarcity are cited as reasons why.38,39

Structured triage with standardized questionnaires is likely safer than unstructured triage.40 Assistance from a clinical decision support system increases triage quality.41 By design, MLMs use standardized inputs, making them a good fit for integration into a clinical decision support system, and such systems have been shown to reduce health care costs by 14%.42 Triaging patients at the time of appointment scheduling has become even more important since the COVID-19 pandemic. Methods to identify patients well suited to virtual consultations are needed, as such consultations now make up 13% to 17% of consultations across all specialties.43

Clinical text notes (CTNs) are a written record of a physician’s interpretation of the patient’s symptoms and signs, reasons for clinical decisions made during the consultation, and actions taken (eg, imaging referrals, prescriptions written). The objective of this study was to train a patient triage MLM on symptoms and signs (clinical features) of patients with respiratory symptoms, using only features the patient could be asked about in order to mimic previsit triage. We extracted the clinical features from CTNs.

This MLM, which we refer to as a respiratory symptom triage model (RSTM), divides patients into 10 risk groups (with increasing risk from groups 1 to 10) based on a score. We validated the RSTM by examining patient outcomes, stratified by risk group, on intrinsic data and in 2 separate extrinsic (unseen) data sets. Evaluating MLM performance in a medical context is complex, and knowing which benchmarks to use is often unclear. Many reports benchmark MLMs against physicians’ diagnoses, which are affected by human biases and errors.44 Benchmarking the RSTM against multiple patient outcomes likely serves as a better performance metric, and, to our knowledge, no reports have examined MLM triage performance in this way.

METHODS

In this retrospective diagnostic accuracy study, we obtained 44,007 medical records of 23,819 patients from a medical database common to all primary care clinics in the Reykjavík area of Iceland. Each record contained a CTN along with diagnostic referrals and results, diagnoses, and prescriptions written.

The selection criteria were patients over the age of 18 years who were diagnosed by a physician from January 1, 2016 through December 31, 2018 with 1 of 7 International Classification of Diseases 10th Revision (ICD-10) codes: J00 (common cold), J10 and J11 (influenza), J15 (bacterial pneumonia), J20 (acute bronchitis), J44 (chronic obstructive pulmonary disease [COPD]), and J45 (asthma), including subgroups. We removed CTNs containing fewer than 250 characters, resulting in 17,177 CTNs included in this study.
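As an illustration of this record selection step, a minimal sketch in Python with pandas follows; the DataFrame, its column names (age, icd10_code, note_text, visit_date), and the CSV export are assumptions rather than the study's actual database schema.

```python
import pandas as pd

# Hypothetical export of the shared primary care database; column names are assumptions.
records = pd.read_csv("medical_records.csv", parse_dates=["visit_date"])

INCLUDED_CODES = ["J00", "J10", "J11", "J15", "J20", "J44", "J45"]

mask = (
    (records["age"] > 18)                                         # patients over the age of 18 years
    & records["visit_date"].between("2016-01-01", "2018-12-31")   # study period
    & records["icd10_code"].str[:3].isin(INCLUDED_CODES)          # parent codes cover subgroups (eg, J45.9)
    & (records["note_text"].str.len() >= 250)                     # drop CTNs with fewer than 250 characters
)
ctns = records[mask]
print(f"{len(ctns)} CTNs retained")
```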

In our previous work, we trained a deep neural network to extract clinical features from CTNs,45 which we call the clinical feature extraction model. We randomly selected 7,000 CTNs as input to the clinical feature extraction model and discarded CTNs with fewer than 8 clinical features, increasing the odds of having enough clinical features in each CTN for the RSTM. The clinical feature extraction model also extracted presenting complaints, which we used to limit inclusion to patients presenting with acute or subacute respiratory symptoms. The complete list of presenting complaints is in Supplemental Table 1. We removed 95 CTNs from follow-up consultations and 223 CTNs with multiple topics to include only CTNs in which patients presented with a new respiratory complaint as a single complaint. Thus, for patients diagnosed with COPD and asthma, only cases of exacerbation were included.

Applying these filters reduced the set of 7,000 CTNs to 2,942. Of those, 2,000 CTNs were randomly selected and manually annotated by a single physician. As annotating CTNs is costly, the final number of 2,000 CTNs was limited by funding. We split the resulting data set randomly into training (75%, 1,500 CTNs) and test (25%, 500 CTNs) sets. We also annotated an additional 664 CTNs with influenza ICD-10 codes (J10, J11) as a second test set to examine the generalizability of the RSTM further because there were no influenza patients in the training data set.

Subsequently, we trained the RSTM on features that patients can be asked about and measure themselves, imitating a setting where triage takes place before a medical consultation. We chose the input features that a web-based triage system could obtain directly from a patient without other human assistance to ensure the model fits into a clinical workflow. The training objective was to predict the likelihood of a patient having a lower respiratory tract infection. We considered all diagnoses except J00 (common cold) to be a lower respiratory tract infection.

The RSTM had a single output: a score between 0 and 1, where patient scores approaching 1 have an increased probability of a lower respiratory tract infection diagnosis. We performed 25 repeats of a 4-fold stratified nested cross validation for hyperparameter search and intrinsic validation. We then trained the RSTM on the training data set with optimized hyperparameters before splitting patients in the test sets into 10 risk groups based on the score they received. The risk score interval for each group was 0.1, and we refer to groups 1 through 5 as the low-risk groups and 6 through 10 as the high-risk groups.
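A minimal sketch of this score-to-risk-group mapping is shown below; the function name and the example scores are illustrative and not taken from the study code.

```python
import numpy as np

def assign_risk_group(score: float) -> int:
    """Map a model score in [0, 1] to risk groups 1-10, each spanning an interval of 0.1."""
    # A score of exactly 1.0 falls into group 10 rather than an 11th bin.
    return min(int(score * 10) + 1, 10)

scores = np.array([0.03, 0.27, 0.51, 0.94, 1.00])   # illustrative model outputs
groups = [assign_risk_group(s) for s in scores]     # [1, 3, 6, 10, 10]
low_risk = [g <= 5 for g in groups]                 # groups 1-5 are the low-risk groups
```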

Annotation

The annotation method was inspired by researchers who applied similar annotations to medical text.19 The annotations assigned binary and numerical values to clinical features, representing the presence or absence of signs and symptoms as they were described in the CTNs. Together, the annotated features constituted the patient’s health state as described by the physician during the medical consultation when the CTN was written. A detailed description of the annotation process is in the Supplemental Appendix. We gave missing binary features the value of 0. Missing numerical features were replaced by randomly sampling from the normal distribution for that feature, to reduce the odds of the model simply learning where features are missing, which would be more likely for a patient with less severe disease.
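A minimal sketch of this imputation rule follows, assuming the annotations sit in a pandas DataFrame; the column groupings and the use of each feature's observed mean and standard deviation are assumptions.

```python
import numpy as np
import pandas as pd

def impute_features(df: pd.DataFrame, binary_cols, numeric_cols,
                    rng: np.random.Generator) -> pd.DataFrame:
    """Fill missing binary annotations with 0 and draw missing numerical annotations
    from a normal distribution fitted to the observed values of that feature, so that
    missingness itself carries less signal for the model."""
    out = df.copy()
    out[binary_cols] = out[binary_cols].fillna(0)
    for col in numeric_cols:
        observed = out[col].dropna()
        missing = out[col].isna()
        if missing.any():
            out.loc[missing, col] = rng.normal(observed.mean(), observed.std(),
                                               size=missing.sum())
    return out
```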

Model Architecture, Hyperparameter Optimization, and Training

The classifier we used was logistic regression with a Least Absolute Shrinkage and Selection Operator (LASSO) penalty. We used Shapley Additive Explanation46 values to select the 50 most impactful clinical features as input features for the RSTM, reducing the risk of spurious correlations between the input and output data. A full list of the model’s clinical features can be found in Supplemental Table 2. We performed 25 repeats of a 4-fold stratified nested cross validation with grid search on the training set to optimize the hyperparameters of the RSTM. Only the class weight and the regularization strength (C parameter) were optimized, resulting in balanced class weights and a C value of 0.1 during training. We then trained the RSTM on the training data set before running inference on the patients in the test sets.
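The sketch below shows how such a nested cross validation with an L1-penalized (LASSO) logistic regression could be set up in scikit-learn. The synthetic placeholder data, the candidate grid values, and the AUC scoring metric are assumptions, and the SHAP-based selection of the 50 input features is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     StratifiedKFold, cross_val_score)

# Placeholder data standing in for the 1,500 annotated training CTNs x 50 clinical features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1500, 50))
y_train = rng.integers(0, 2, size=1500)   # 1 = lower respiratory tract infection

# L1 ("LASSO") penalized logistic regression; the liblinear solver supports the L1 penalty.
estimator = LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000)
param_grid = {"C": [0.01, 0.1, 1, 10], "class_weight": [None, "balanced"]}

# Inner loop: grid search over the two hyperparameters named in the text.
inner_cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
search = GridSearchCV(estimator, param_grid, cv=inner_cv, scoring="roc_auc")

# Outer loop: 25 repeats of a stratified 4-fold split give the nested performance estimate.
outer_cv = RepeatedStratifiedKFold(n_splits=4, n_repeats=25, random_state=0)
nested_scores = cross_val_score(search, X_train, y_train, cv=outer_cv, scoring="roc_auc")

# Final model refit on the full training set with the reported hyperparameters
# (balanced class weights, C = 0.1); predict_proba yields the 0-to-1 risk score.
final_model = LogisticRegression(penalty="l1", solver="liblinear",
                                 class_weight="balanced", C=0.1, max_iter=1000)
final_model.fit(X_train, y_train)
risk_scores = final_model.predict_proba(X_train)[:, 1]
```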

Outcomes and Statistical Analysis

For each risk group, we examined the following outcomes: mean C-reactive protein (CRP) value, ICD-10 code distribution, the proportion of patients re-evaluated in primary care and emergency departments within 7 days, the proportion of patients referred for a CXR, CXRs with signs of pneumonia and incidentalomas, and the proportion of patients receiving antibiotic prescriptions. CRP values were available only if the physician deemed the test necessary and were extracted from the CTN, because rapid CRP test results in Iceland are saved only in the CTN and not in a structured database. Referrals for CXRs and their results are linked to a CTN, and the radiologist’s textual report was manually annotated in the same manner as the CTNs. Except for incidentalomas and ICD-10 codes, a positive or higher outcome value indicated more severe disease for a given patient. Notes about consolidations, infiltrations, and pneumonia-like signs in the CXR’s text description were considered positive signs of pneumonia. All data came from the patients’ electronic health records. The 95% CIs were calculated by sorting the values for each outcome and calculating the 2.5% and 97.5% percentiles. We used a 2-sided Fisher’s exact test to calculate P values for binary variables and a 2-sided Mann-Whitney U test for continuous variables. We considered P <.05 to be significant. We implemented data analysis in Python (version 3.8) and trained and validated the RSTM with the scikit-learn library (0.22.1).47
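A minimal sketch of these statistical computations with NumPy and SciPy follows; the counts and CRP values below are illustrative only.

```python
import numpy as np
from scipy.stats import fisher_exact, mannwhitneyu

def percentile_ci(values, alpha=0.05):
    """95% interval from the 2.5th and 97.5th percentiles of the observed values."""
    return np.percentile(values, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Binary outcome (eg, antibiotic prescription): 2x2 table of
# [[low-risk yes, low-risk no], [high-risk yes, high-risk no]]
table = [[30, 170], [90, 110]]                     # illustrative counts
_, p_binary = fisher_exact(table, alternative="two-sided")

# Continuous outcome (eg, CRP values) compared between low- and high-risk groups.
crp_low = np.array([5.0, 8.0, 12.0, 6.0, 9.0])     # illustrative values
crp_high = np.array([25.0, 40.0, 18.0, 55.0, 33.0])
_, p_continuous = mannwhitneyu(crp_low, crp_high, alternative="two-sided")

print(percentile_ci(crp_high), p_binary, p_continuous)
```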

RESULTS

A total of 2,000 CTNs from 1,915 patients were included in the final data set. There were 26,971 annotations, for an average of 13.5 annotations per CTN. The flowchart of CTN selection of the first test set is shown in Figure 1. In the second test set, 664 CTNs from 652 patients were included. Table 1 shows the demographics for each data set, ICD-10 code, and mean outcome distribution. The baseline outcome rates are similar to those reported by others.48-51 Patients with pneumonia on CXRs received antibiotic prescriptions in 46% of cases. All incidentalomas were of nodule subtype and none had clinical significance. Table 2 compares the outcome rates in the low-risk and high-risk groups in the test sets with calculated P values. There was a significant difference between the groups in CRP values and antibiotic treatment in test set 1 and only in antibiotic treatment in test set 2. No evaluations in the emergency department resulted in a pneumonia diagnosis, and 83% received the same ICD-10 code as they received in the initial consultation. No primary care re-evaluations resulted in a pneumonia diagnosis, and 80% received the same ICD-10 code they initially received.

Figure 1.

Study flowchart for clinical text note selection.

CFEM = clinical feature extraction model; CTN = clinical text note; PC = primary care.

Table 1.

Demographics, ICD-10 Code Distributions, and Outcomes in the Data Sets

Table 2.

Comparison of Outcome Rates in the Test Sets Between Low-Risk and High-Risk Groups

Outcome distributions stratified by risk group are shown in Figure 2 (training set), Figure 3 (test set 1), and Figure 4 (test set 2). The low-risk groups in the training set (Figure 2) contained no positive CXRs, 52% of the incidentalomas, and 9% of CXR referrals. In the first test set, the low-risk groups included one-third of the patients; these patients were younger and had lower CRP values, lower antibiotic prescription rates, lower re-evaluation rates, no positive CXRs, and 19% of the CXR referrals. In the second test set, 45% of patients and 35% of CXR referrals were in the low-risk groups, which had no CXRs with signs of pneumonia and contained the single incidentaloma found. The outcome trends in Figures 2, 3, and 4 show rising outcome rates with higher risk groups for all outcomes, except for re-evaluation in primary care and CRP values in the second test set.

Figure 2.

The outcome distribution in the cross-validated data set.

CXR = chest x-ray; CRP = C-reactive protein; ICD-10 = International Classification of Diseases, 10th Revision; J00 = common cold; J15 = bacterial pneumonia; J20 = acute bronchitis; J44 = chronic obstructive pulmonary disease; J45 = asthma.

Notes: (A-E) bars represent 95% CIs. (F) shaded area represents 95% CIs.

Figure 3.

The outcome distribution in test set 1.

CXR = chest x-ray; CRP = C-reactive protein; dL = deciliter; ED = emergency department; ICD-10 = International Classification of Diseases, 10th Revision; J00 = common cold; J15 = bacterial pneumonia; J20 = acute bronchitis; J44 = chronic obstructive pulmonary disease; J45 = asthma; mg = milligram; PC = primary care.

Notes: (B) bars represent 95% CIs. (F) shaded area represents 95% CIs.

Figure 4.

The outcome distribution in test set 2.

CXR = chest x-ray; CRP = C-reactive protein; dL = deciliter; ICD-10 = International Classification of Diseases, 10th Revision; mg = milligram; PC = primary care.

Notes: (B) bars represent 95% CIs. (F) shaded area represents 95% CIs. ICD-10 code distribution in test set 2 was not examined for these symptomatically similar patients.

DISCUSSION

In this large retrospective study, we show, for the first time, the results of patient triage by an MLM in primary care, using only data available before a medical consultation, in the context of patient outcomes. The RSTM performs the triage such that patients in high-risk groups have more severe outcomes than those in lower-risk groups. Importantly, no patient in the lowest 5 risk groups had a CXR with signs of pneumonia or a pneumonia ICD-10 code. Despite patients in test set 2 coming from a different population than patients in the training data set, the triage shows an outcome distribution pattern similar to that of test set 1, further validating that the RSTM triages pre-consultation patients consistently. The nested cross validation shows an underlying signal across the whole data set, allowing the RSTM to triage patients in line with expected outcomes regardless of how the data set is split and ordered. The outcome distribution is similar in all data sets, indicating a generally good model fit to the data. Interestingly, the RSTM is ignorant of ICD-10 code subtypes, yet it scores J15 (bacterial pneumonia) patients at an increasing rate in groups 4 through 10, while J00 (common cold) and J20 (acute bronchitis) decrease proportionally. J44 (COPD) was found only in groups 2 through 8, indicating that the model considers patients with pneumonia (J15) and COPD (J44) most likely to have worse outcomes, matching reality.

Findings Compared With Other Studies

We were unable to find similar studies, but multiple studies have attempted to derive a diagnostic rule for pneumonia from patients’ signs and symptoms. All but 1 include features in their rules that make them incomparable to the RSTM. When we compare the scores of the RSTM with the diagnostic rule from the 1 comparable study,52 we see a linear correlation (Supplemental Figure 1). Those authors concluded that using the diagnostic rule in clinical settings would substantially reduce antibiotic use and CXR imaging,52 which coincides with our findings. We also compared the score of the RSTM with the Anthonisen score,53 which recommends antibiotic treatment for an exacerbation of COPD if 2 of 3 cardinal symptoms are present (increased sputum expectoration, increased dyspnea, purulent sputum production). The Anthonisen score coincides well with the risk prediction of the RSTM (Supplemental Figure 2) and suggests that COPD patients in the low-risk groups should not be treated with antibiotics.
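For reference, the Anthonisen rule mentioned above reduces to a simple count of cardinal symptoms; the sketch below is ours (function and argument names are illustrative).

```python
def anthonisen_recommends_antibiotics(increased_dyspnea: bool,
                                      increased_sputum_expectoration: bool,
                                      purulent_sputum: bool) -> bool:
    """Antibiotics are recommended when at least 2 of the 3 cardinal symptoms
    of a COPD exacerbation are present."""
    return sum([increased_dyspnea, increased_sputum_expectoration, purulent_sputum]) >= 2

# Example: dyspnea and purulent sputum present, expectoration unchanged -> treat.
assert anthonisen_recommends_antibiotics(True, False, True)
```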

Clinical Implications

If the RSTM shows similar performance in clinical settings, it could be implemented as a web-based tool, potentially triaging patients online before they make an appointment. The triage could identify patients with a low risk of lower respiratory tract infection who could be attended to without face-to-face consultations. The RSTM could eliminate CXR referrals in groups where the probability of a positive CXR is low or nonexistent, which would remove up to one-third of CXRs and possibly one-half of the incidentalomas without missing a positive CXR. Although all patients in the low-risk groups received diagnoses for which the benefit of antibiotics is debatable, antibiotics were still prescribed substantially. Reducing antibiotic prescriptions in the low-risk groups would increase prescription quality. The RSTM score needs no input from clinicians and can be ready when a patient enters the examination room, resulting in an easy-to-use, unambiguous, applicable score with a meaningful effect. Thus, the RSTM could reduce costs for patients, the health care system, and society.

Strengths and Limitations

The strengths of this study include a large data set of patients with 2 distinct test sets. Using multiple patient outcomes, stratified by risk group, gives more insight into the performance and safety of the triage than using only physicians’ diagnoses as benchmarks. The study is subject to the limitations and biases of a retrospective methodology, and the findings must be validated prospectively. The CTNs are a written record of the physician’s interpretation of patients’ symptoms and signs and contain human errors and biases, which propagate into the RSTM. Removing CTNs with fewer than 8 clinical features creates selection bias, likely toward patients with more severe symptoms. Direct data collection from patients would provide higher quality training data. There is availability bias in the CRP values and CXR outcomes. Performing annotation with multiple physicians would likely result in higher quality annotations.

Footnotes

  • Conflicts of interest: authors report none.

  • Read or post commentaries in response to this article.

  • Funding support: The Scientific fund of the Icelandic College of General Practice funded this research.

  • Ethical approval: The National Bioethics Committee in Iceland authorized this study in November 2020 (reference number VSN-20-198).

  • Supplemental materials

  • Received for publication September 9, 2022.
  • Revision received December 9, 2022.
  • Accepted for publication January 12, 2023.
  • © 2023 Annals of Family Medicine, Inc.

REFERENCES

1. OECD. Health spending. OECD Data. Accessed Mar 21, 2022. https://data.oecd.org/healthres/health-spending.htm
2. Svedahl ER, Pape K, Toch-Marquardt M, et al. Increasing workload in Norwegian general practice - a qualitative study. BMC Fam Pract. 2019;20(1):68. doi:10.1186/s12875-019-0952-5
3. Hobbs FDR, Bankhead C, Mukhtar T, et al; National Institute for Health Research School for Primary Care Research. Clinical workload in UK primary care: a retrospective analysis of 100 million consultations in England, 2007-14. Lancet. 2016;387(10035):2323-2330. doi:10.1016/S0140-6736(16)00620-6
4. NHS. Long Term Conditions Compendium of Information: third edition. Published May 30, 2012. Accessed Mar 21, 2022. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/216528/dh_134486.pdf
5. Deloitte. Under pressure: the funding of patient care in general practice. Published Apr 2, 2014. Accessed Mar 21, 2022. https://www.queensroadpartnership.co.uk/mf.ashx?ID=406a083a-144f-457d-b14b-aad537f67fc9
6. Smith-Bindman R, Miglioretti DL, Larson EB. Rising use of diagnostic medical imaging in a large integrated health system. Health Aff (Millwood). 2008;27(6):1491-1502. doi:10.1377/hlthaff.27.6.1491
7. Sihvonen M, Kekki P. Unnecessary visits to health centres as perceived by the staff. Scand J Prim Health Care. 1990;8(4):233-239. doi:10.3109/02813439008994964
8. Renati S, Linder JA. Necessity of office visits for acute respiratory infections in primary care. Fam Pract. 2016;33(3):312-317. doi:10.1093/fampra/cmw019
9. Simpson GC, Forbes K, Teasdale E, Tyagi A, Santosh C. Impact of GP direct-access computerised tomography for the investigation of chronic daily headache. Br J Gen Pract. 2010;60(581):897-901. doi:10.3399/bjgp10X544069
10. O’Sullivan JW, Albasri A, Nicholson BD, et al. Overtesting and undertesting in primary care: a systematic review and meta-analysis. BMJ Open. 2018;8(2):e018557. doi:10.1136/bmjopen-2017-018557
11. Anjum O, Bleeker H, Ohle R. Computed tomography for suspected pulmonary embolism results in a large number of non-significant incidental findings and follow-up investigations. Emerg Radiol. 2019;26(1):29-35. doi:10.1007/s10140-018-1641-8
12. Waterbrook AL, Manning MA, Dalen JE. The significance of incidental findings on computed tomography of the chest. J Emerg Med. 2018;55(4):503-506. doi:10.1016/j.jemermed.2018.06.001
13. Hawker JI, Smith S, Smith GE, et al. Trends in antibiotic prescribing in primary care for clinical syndromes subject to national recommendations to reduce antibiotic resistance, UK 1995-2011: analysis of a large database of primary care consultations. J Antimicrob Chemother. 2014;69(12):3423-3430. doi:10.1093/jac/dku291
14. Gulliford MC, Dregan A, Moore MV, et al. Continued high rates of antibiotic prescribing to adults with respiratory tract infection: survey of 568 UK general practices. BMJ Open. 2014;4(10):e006245. doi:10.1136/bmjopen-2014-006245
15. Costelloe C, Metcalfe C, Lovering A, Mant D, Hay AD. Effect of antibiotic prescribing in primary care on antimicrobial resistance in individual patients: systematic review and meta-analysis. BMJ. 2010;340:c2096. doi:10.1136/bmj.c2096
16. The ABIM Foundation. Choosing Wisely. DataBrief: findings from a national survey of physicians. Published 2017. Accessed Mar 21, 2022. https://www.choosingwisely.org/wp-content/uploads/2017/10/Summary-Research-Report-Survey-2017.pdf
17. Fletcher-Lartey S, Yee M, Gaarslev C, Khan R. Why do general practitioners prescribe antibiotics for upper respiratory tract infections to meet patient expectations: a mixed methods study. BMJ Open. 2016;6(10):e012244. doi:10.1136/bmjopen-2016-012244
18. Ellertsson S, Loftsson H, Sigurdsson EL. Artificial intelligence in the GPs office: a retrospective study on diagnostic accuracy. Scand J Prim Health Care. 2021;39(4):448-458. doi:10.1080/02813432.2021.1973255
19. Liang H, Tsui BY, Ni H, et al. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nat Med. 2019;25(3):433-438. doi:10.1038/s41591-018-0335-9
20. Ribeiro AH, Ribeiro MH, Paixão GMM, et al. Automatic diagnosis of the 12-lead ECG using a deep neural network. Nat Commun. 2020;11(1):1760. doi:10.1038/s41467-020-15432-4
21. Hannun AY, Rajpurkar P, Haghpanahi M, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med. 2019;25(1):65-69. doi:10.1038/s41591-018-0268-3
22. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410. doi:10.1001/jama.2016.17216
23. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010
24. Tomita N, Cheung YY, Hassanpour S. Deep neural networks for automatic detection of osteoporotic vertebral fractures on CT scans. Comput Biol Med. 2018;98:8-15. doi:10.1016/j.compbiomed.2018.05.011
25. Rajpurkar P, Irvin J, Zhu K, et al. CheXNet: radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv:1711.05225 [cs, stat]. Published online Dec 25, 2017. Accessed Feb 15, 2022. https://arxiv.org/abs/1711.05225
26. Teramoto A, Fujita H, Yamamuro O, Tamaki T. Automated detection of pulmonary nodules in PET/CT images: ensemble false-positive reduction using a convolutional neural network technique. Med Phys. 2016;43(6):2821-2827. doi:10.1118/1.4948498
27. Ardila D, Kiraly AP, Bharadwaj S, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med. 2019;25(6):954-961. doi:10.1038/s41591-019-0447-x
28. Kim CK, Choi JW, Jiao Z, et al. An automated COVID-19 triage pipeline using artificial intelligence based on chest radiographs and clinical data. NPJ Digit Med. 2022;5(1):5. doi:10.1038/s41746-021-00546-w
29. Baker A, Perov Y, Middleton K, et al. A comparison of artificial intelligence and human doctors for the purpose of triage and diagnosis. Frontiers in Artificial Intelligence. Published 2020. Accessed Mar 21, 2022. https://www.frontiersin.org/article/10.3389/frai.2020.543405
30. Mullainathan S, Obermeyer Z. Diagnosing physician error: a machine learning approach to low-value health care. Published Aug 2019. Updated Nov 2021. Accessed Mar 21, 2022. https://click.endnote.com/viewer?doi=10.1093%2Fqje%2Fqjab046&token=WzcyMDUwOSwiMTAuMTA5My9xamUvcWphYjA0NiJd.LTNw5P_WqY_RWsjNaONiygyvKb0
31. Henry KE, Hager DN, Pronovost PJ, Saria S. A targeted real-time early warning score (TREWScore) for septic shock. Sci Transl Med. 2015;7(299):299ra122. doi:10.1126/scitranslmed.aab3719
32. Hall KK, Shoemaker-Hunt S, Hoffman L, et al. Making Healthcare Safer III: A Critical Analysis of Existing and Emerging Patient Safety Practices. Agency for Healthcare Research and Quality; 2020. Accessed Aug 29, 2022. https://www.ncbi.nlm.nih.gov/books/NBK555525/
33. Pestotnik SL, Classen DC, Evans RS, Burke JP. Implementing antibiotic practice guidelines through computer-assisted decision support: clinical and financial outcomes. Ann Intern Med. 1996;124(10):884-890. doi:10.7326/0003-4819-124-10-199605150-00004
34. Podda M, Pisanu A, Sartelli M, et al. Diagnosis of acute appendicitis based on clinical scores: is it a myth or reality? Acta Biomed. 2021;92(4):e2021231. doi:10.23750/abm.v92i4.11666
35. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282(15):1458-1465. doi:10.1001/jama.282.15.1458
36. Logan GS, Dawe RE, Aubrey-Bassler K, et al. Are general practitioners referring patients with low back pain for CTs appropriately according to the guidelines: a retrospective review of 3609 medical records in Newfoundland using routinely collected data. BMC Fam Pract. 2020;21(1):236. doi:10.1186/s12875-020-01308-5
37. Morgan B, Mullick S, Harper WM, Finlay DB. An audit of knee radiographs performed for general practitioners. Br J Radiol. 1997;70(831):256-260. doi:10.1259/bjr.70.831.9166050
38. Carlsen B, Glenton C, Pope C. Thou shalt versus thou shalt not: a meta-synthesis of GPs’ attitudes to clinical practice guidelines. Br J Gen Pract. 2007;57(545):971-978. doi:10.3399/096016407782604820
39. Dambha-Miller H, Everitt H, Little P. Clinical scores in primary care. Br J Gen Pract. 2020;70(693):163. doi:10.3399/bjgp20X708941
40. Khan MNB. Telephone consultations in primary care, how to improve their safety, effectiveness and quality. BMJ Open Quality. 2013;2(1):u202013.w1227. doi:10.1136/bmjquality.u202013.w1227
41. Graversen DS, Christensen MB, Pedersen AF, et al. Safety, efficiency and health-related quality of telephone triage conducted by general practitioners, nurses, or physicians in out-of-hours primary care: a quasi-experimental study using the Assessment of Quality in Telephone Triage (AQTT) to assess audio-recorded telephone calls. BMC Fam Pract. 2020;21(1):84. doi:10.1186/s12875-020-01122-z
42. Tenhunen H, Hirvonen P, Linna M, Halminen O, Hörhammer I. Intelligent patient flow management system at a primary healthcare center - the effect on service use and costs. Stud Health Technol Inform. 2018;255:142-146.
43. McKinsey and Company. Telehealth: a quarter-trillion-dollar post-COVID-19 reality? Published Jul 9, 2021. Accessed Aug 22, 2022. https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/telehealth-a-quarter-trillion-dollar-post-covid-19-reality
44. Balogh EP, Miller BT, Ball JR, et al. Improving Diagnosis in Health Care. National Academies Press; 2015. Accessed Mar 28, 2022. https://www.ncbi.nlm.nih.gov/books/NBK338594/
45. Hlynsson HD, Ellertsson S, Daðason JF, Sigurdsson EL, Loftsson H. Semi-self-supervised automated ICD coding. Published May 20, 2022. Accessed May 23, 2022. https://arxiv.org/abs/2205.10088
46. Lundberg S, Lee SI. A unified approach to interpreting model predictions. Published May 22, 2017. doi:10.48550/arXiv.1705.07874
47. Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825-2830.
48. Speets AM, Hoes AW, van der Graaf Y, Kalmijn S, Sachs APE, Mali WPTM. Chest radiography and pneumonia in primary care: diagnostic yield and consequences for patient management. Eur Respir J. 2006;28(5):933-938. doi:10.1183/09031936.06.00008306
49. van Vugt S, Broekhuizen L, Zuithoff N, et al; GRACE Project Group. Incidental chest radiographic findings in adult patients with acute cough. Ann Fam Med. 2012;10(6):510-515. doi:10.1370/afm.1384
50. Havers FP, Hicks LA, Chung JR, et al. Outpatient antibiotic prescribing for acute respiratory infections during influenza seasons. JAMA Netw Open. 2018;1(2):e180243. doi:10.1001/jamanetworkopen.2018.0243
51. Wood J, Butler CC, Hood K, et al. Antibiotic prescribing for adults with acute cough/lower respiratory tract infection: congruence with guidelines. Eur Respir J. 2011;38(1):112-118. doi:10.1183/09031936.00145810
52. Diehr P, Wood RW, Bushyhead J, Krueger L, Wolcott B, Tompkins RK. Prediction of pneumonia in outpatients with acute cough: a statistical approach. J Chronic Dis. 1984;37(3):215-225. doi:10.1016/0021-9681(84)90149-8
53. Anthonisen NR, Manfreda J, Warren CPW, Hershfield ES, Harding GKM, Nelson NA. Antibiotic therapy in exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 1987;106(2):196-204. doi:10.7326/0003-4819-106-2-196