
Crowdsourcing in health and medical research: a systematic review

Abstract

Background

Crowdsourcing, the process of aggregating crowd wisdom to solve a problem, is used increasingly in health and medical research. The purpose of this systematic review is to summarize quantitative evidence on crowdsourcing to improve health.

Methods

We followed Cochrane systematic review guidance and systematically searched seven databases up to September 4, 2019. Studies were included if they reported on crowdsourcing and related to health or medicine. Studies were excluded if recruitment was the only use of crowdsourcing. We determined the level of evidence associated with review findings using the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach.

Results

We screened 3508 citations, accessed 362 articles, and included 188 studies. Ninety-six studies examined effectiveness, 127 examined feasibility, and 37 examined cost. The most common purposes were to evaluate surgical skills (17 studies), to create sexual health messages (seven studies), and to provide layperson-initiated cardiopulmonary resuscitation (CPR) out of hospital (six studies). Seventeen observational studies used crowdsourcing to evaluate surgical skills, finding that crowdsourced evaluation was as effective as expert evaluation (low quality evidence). Four studies used a challenge contest to solicit human immunodeficiency virus (HIV) testing promotion materials and increase HIV testing rates (moderate quality evidence), and two of the four studies found this approach saved money. Three studies suggested that an interactive technology system increased rates of layperson-initiated CPR out of hospital (moderate quality evidence). However, the studies of crowdsourcing for surgical skill evaluation and layperson-initiated CPR came only from high-income countries. Five studies examined crowdsourcing to inform artificial intelligence projects, most often for annotation of medical data. Crowdsourcing was evaluated using different outcomes, limiting the extent to which studies could be pooled.

Conclusions

Crowdsourcing has been used to improve health in many settings. Although crowdsourcing is effective at improving behavioral outcomes, more research is needed to understand effects on clinical outcomes and costs. More research is needed on crowdsourcing as a tool to develop artificial intelligence systems in medicine.

Trial registration

PROSPERO: CRD42017052835. December 27, 2016.

Background

Conventional, expert-driven solutions to medical problems often fail. Innovative approaches such as crowdsourcing may provide a useful community-based method to improve medical services. Crowdsourcing is the process of aggregating crowd wisdom in order to solve a problem [1]. This involves a group solving a problem and then sharing the solution. For example, the initiation of out-of-hospital cardiopulmonary resuscitation (CPR) is often delayed, leading to considerable morbidity and mortality. To address this problem, several teams organized a crowdsourced solution: training laypeople to administer out-of-hospital CPR [2,3,4,5,6,7]. When emergency medical services received a call, they sent a text message to nearby laypeople, who then provided CPR. This system has been formally evaluated in several studies [3, 4].
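To make the dispatch step concrete, the sketch below illustrates how such a system might select trained laypeople near a cardiac arrest. This is our own illustration: the function names, coordinates, and 500-m alert radius are assumptions, not parameters reported in the cited studies.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_responders(arrest, responders, radius_km=0.5):
    """Return trained laypeople close enough to reach the victim quickly."""
    return [
        r for r in responders
        if r["trained"] and haversine_km(arrest["lat"], arrest["lon"],
                                         r["lat"], r["lon"]) <= radius_km
    ]

# Hypothetical example: the dispatch center alerts everyone within 500 m.
responders = [
    {"id": "A", "lat": 59.334, "lon": 18.063, "trained": True},
    {"id": "B", "lat": 59.360, "lon": 18.100, "trained": True},
]
arrest = {"lat": 59.335, "lon": 18.062}
print([r["id"] for r in select_responders(arrest, responders)])  # ['A']
```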

Crowdsourcing approaches are increasingly used in public health and medicine [8, 9]. Examples include engaging youth in developing HIV services [10], designing a patient-centered mammography report [11], and enhancing cancer research [12]. Some crowdsourcing approaches focus on the process of mass community engagement, obtaining creative input from many individuals [13, 14]. Other work has focused on the collective input of participants to generate a single, high-quality output such as clinical algorithms [15,16,17,18]. The crowd in crowdsourcing may be members of the general public [19] or individuals with specific clinical expertise [20]. Recognizing the growing importance of crowdsourcing, the United Nations Children's Fund (UNICEF)/United Nations Development Programme (UNDP)/World Bank/World Health Organization (WHO) Special Programme for Research and Training in Tropical Diseases (TDR) published a practical guide on crowdsourcing in health and health research [21].

Despite the growth of crowdsourcing in medical settings, few systematic reviews have focused on evaluating crowdsourcing research in medicine [18, 22]. To date, existing reviews have been general [22], have largely ignored crowdsourcing in medicine [9, 18], and have not incorporated the most recent literature [9, 22]. A systematic analysis of the expanding medical literature on crowdsourcing is needed to understand optimal methods. The purpose of this systematic review is to summarize quantitative evidence on crowdsourcing to improve health.

Methods

Search strategy

Based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA, http://www.prisma-statement.org/) checklist and Cochrane guidance, we searched the following seven databases: MEDLINE (via PubMed), Embase, CINAHL, Web of Science, PsycINFO, Cochrane, and ABI/Inform [23, 24]. The search algorithm included elements related to crowdsourcing and to health (Additional file 1: Tables S1–S7). Databases were initially searched on December 7, 2016, and the search was updated on September 4, 2019. Bibliographies of included articles were also hand searched to identify additional relevant studies.
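To illustrate the structure of such a search, a simplified MEDLINE/PubMed query might combine a crowdsourcing block with a health block as below. This is a hypothetical sketch of our own, not the registered algorithm; the actual search strings for each database are reproduced in Additional file 1: Tables S1–S7.

```
("crowdsourcing"[MeSH Terms] OR crowdsourc*[Title/Abstract]
    OR "crowd sourcing"[Title/Abstract] OR "challenge contest"[Title/Abstract])
AND
(health[Title/Abstract] OR medicine[Title/Abstract]
    OR "public health"[MeSH Terms])
```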

Inclusion criteria were defined a priori in a protocol registered on PROSPERO, an international prospective register of systematic reviews (CRD42017052835: https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=52835). Articles were included if they were peer-reviewed, reported on crowdsourcing, and were directly related to health. Studies had to report quantitative data on behavioral outcomes, clinical outcomes, feasibility, or cost. We included peer-reviewed research studies described in abstracts if associated original research manuscripts were not included. Exclusion criteria included: failure to provide sufficient detail of methods, use of crowdsourcing only for participant recruitment, qualitative study, non-English study, or non-empirical study. Studies using crowdsourcing to conduct systematic reviews were not included.

Study selection

After duplicates were removed, screening proceeded in two stages (Fig. 1). First, one individual reviewed the abstract and title of each article according to the criteria mentioned above. A full text review was then conducted with two to four individuals independently evaluating each article. Disagreements on whether to include a full text article were resolved by the senior author. Screening and data extraction occurred once for each selected study.

Fig. 1 Overview of study selection and data abstraction

The following fields underwent dual extraction: citation information (first author, study year, PMID), study setting (nation, city), target health focus/condition, study design, purpose, number of contributions, and study findings. We collected data on effectiveness (focusing on behavioral and clinical outcomes), feasibility, and cost. Effectiveness data came from studies that evaluated a health outcome. Feasibility data examined whether a crowdsourcing approach could be implemented in a health context. Cost data captured the economic or financial costs associated with the crowdsourcing intervention. We pooled applicable data using meta-analysis if studies used a similar intervention and reported similar metrics. We used random effects models, and analyses were undertaken in RevMan 5. Study heterogeneity was assessed by calculating I-squared values. We assessed for small-sample-size effects using funnel plots when more than ten studies were available.
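As a minimal sketch of the pooling step (the analyses themselves were run in RevMan 5), the following Python illustration implements a DerSimonian-Laird random-effects model and the I-squared statistic; the effect estimates and variances are invented for illustration, not data from the included studies.

```python
import math

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with the I-squared statistic."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical log risk ratios and variances from two similar trials.
pooled, ci, i2 = random_effects([0.10, 0.25], [0.04, 0.06])
print(f"pooled = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I2 = {i2:.0f}%")
```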

GRADE evidence profile

For each study, we examined risk of bias, study limitations, consistency, precision, directness, and other factors described in the supplementary tables. We used the GRADE approach to assess the certainty of each summary finding, rating review findings as high, moderate, low, or very low certainty. GRADE evidence profiles were compiled separately for observational studies and randomized controlled trials (RCTs) for surgical skills, sexual health messages, and out-of-hospital CPR.

We used the Cochrane Collaboration's tool to assess risk of bias in RCTs [25]. We used a separate tool to assess the risk of bias of observational studies [26]. Selection bias (development and application of eligibility criteria, control for confounding), detection bias (measurement of exposure and outcome), and attrition bias (follow-up) were assessed for each observational study of surgical skills, sexual health messages, and out-of-hospital CPR.

Results

Description of included studies

The database searches and hand searching of references yielded 2155 unique citations. After screening abstracts, the full texts of 362 articles were reviewed. One hundred and seventy-four articles were excluded during full text screening: 15 were non-research articles; 37 did not use crowdsourcing; 13 contests were each described in two papers, so we retained the paper that described the contest more comprehensively; 68 did not provide enough information; 29 used crowdsourcing only for recruitment; one was not in English; eight were not clinically/medically related; one was a duplicate not previously excluded; one was a systematic review; and one had unclear methodology. One hundred and eighty-eight studies met the inclusion criteria and four studies were pooled (Fig. 1).

Study characteristics

There were 183 observational studies and five RCTs. Nine studies were conducted in multiple countries, 166 studies were in high-income countries, 14 were in middle-income countries, and two were in low-income countries. Overall, 96 studies examined effectiveness, 127 examined feasibility, and 37 examined cost. Among those that examined effectiveness, all reported a behavioral outcome except for two studies that reported clinical outcomes: measures of motor performance [27] and electrodermal activity [28].

Synthesizing evidence

We examined data from studies that evaluated surgical skills (17 studies) [29,30,31,32,33,34,35,36,37,38,39,40,41,42], generated sexual health messages (seven studies) [13, 43,44,45,46,47,48], developed systems for out-of-hospital cardiopulmonary resuscitation (six studies) [2,3,4,5,6,7], quantified malaria parasitemia (two studies) [15, 58], and generated messages for smoking cessation (three studies) [50,51,52].

Of the 17 studies that used crowdsourcing to evaluate surgical skills, 16 found that crowdsourced evaluations were as effective as expert evaluations. Crowdsourced evaluation typically involves videotaping a surgeon performing a skill in the operating theatre and then uploading the video to a platform where online crowd workers evaluate the skill against pre-specified criteria (Fig. 2). All 16 studies paid non-expert, online crowd workers small amounts of money to evaluate surgical skills, comparing these crowdsourced approaches to conventional expert-panel approaches (see Additional file 2: Table S8, Additional file 3: Table S9, Additional file 6: Table S12). Low quality evidence from these studies suggested that crowd evaluation of surgical skill technique correlated with expert evaluation (see Additional file 3: Table S9). Moderate quality evidence suggested that crowdsourced evaluation was faster than expert evaluation (see Additional file 3: Table S9). Because of the heterogeneity of measures, we could pool data from only two of these studies with similar interventions and measures; the results suggested no difference between crowdsourced and expert evaluation (P = 0.29) (see Additional file 4: Figure S10).
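As an illustration of the comparison these studies report, one can correlate mean crowd scores with expert scores for each video. The sketch below uses hypothetical skill ratings of our own invention, not data from the included studies.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical skill scores for five surgical videos: each video's crowd
# score is the mean of several small paid ratings; experts rate once.
crowd_ratings = [[3, 4, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [4, 5, 4]]
expert_scores = [4.0, 2.5, 4.8, 3.2, 4.5]
crowd_scores = [mean(r) for r in crowd_ratings]
print(f"crowd-expert correlation: {pearson(crowd_scores, expert_scores):.2f}")
```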

Fig. 2 Process of using crowdsourcing to evaluate surgical performance

Seven studies evaluated innovation design contests to develop sexual health messages (Fig. 3, Additional file 5: Table S11, Additional file 6: Table S12) [13, 43,44,45,46,47,48]. Six of these studies focused on low- and middle-income countries (LMICs) (Swaziland, Namibia, Kenya, Senegal, Burkina Faso, Nigeria, China) [13, 43, 45,46,47,48] and one was set in a high-income country (United States) [44]. Two quantitative sexual health studies were designed as non-inferiority trials and found crowdsourcing and social marketing approaches to be similarly effective (see Additional file 4: Figure S10) [46, 48]. Both reported substantial cost savings for crowdsourcing compared with a conventional approach [46, 48]. There was moderate quality evidence from four studies (two RCTs, two observational studies) supporting innovation design contests to increase HIV testing (see Additional file 7: Table S13). There was moderate quality evidence from six studies (two RCTs, four observational studies) supporting innovation design contests to increase sexual health communication among youth (see Additional file 7: Table S13).

Fig. 3 Process of using crowdsourcing to increase HIV testing

Six studies evaluated out-of-hospital layperson-facilitated CPR (Fig. 4, see Additional file 8: Table S14, Additional file 9: Table S15, Additional file 10: Table S16) [2,3,4,5,6,7]. Two were RCTs conducted in high-income European countries (Sweden, Germany), which showed that bystander-initiated CPR was more frequent in the intervention group (using the smartphone app) but not necessarily faster [5, 7]. The four observational studies, also conducted in high-income countries (US, Japan, Sweden, Netherlands) [2,3,4, 6], indicated that smartphone apps and SMS can feasibly increase layperson-facilitated CPR. We found moderate evidence supporting smartphone apps and SMS to increase out-of-hospital CPR while emergency responders are en route. The data on using crowdsourced systems to improve time to CPR are mixed; the one RCT that failed to find a difference between a crowdsourced intervention and a control group had potential bias [7].

Fig. 4 Process of using crowdsourcing to facilitate layperson CPR outside of the hospital. CPR: cardiopulmonary resuscitation; SMS: short message service

Five studies used crowdsourcing to develop artificial intelligence projects [53,54,55,56,57]. Four of these studies annotated medical data to train machine learning algorithms [53, 55,56,57]. One study found that a three-phase crowdsourcing challenge contest could be used to develop an artificial intelligence algorithm to segment lung tumors for radiation therapy [54]. The best algorithms developed through this challenge contest were similar in effectiveness to human experts.
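A minimal sketch of the annotation-aggregation step such projects rely on is majority voting over redundant crowd labels, keeping only items where workers agree. The labels, item identifiers, and agreement threshold below are invented for illustration and are not drawn from the cited studies.

```python
from collections import Counter

def aggregate_labels(crowd_labels, min_agreement=0.6):
    """Majority-vote each item's crowd labels; keep only confident items."""
    training_set = {}
    for item_id, labels in crowd_labels.items():
        winner, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            training_set[item_id] = winner  # confident consensus label
    return training_set

# Three crowd workers label each medical image; low-agreement items are
# dropped rather than fed to the learning algorithm.
crowd_labels = {
    "img_001": ["tumor", "tumor", "tumor"],
    "img_002": ["tumor", "normal", "tumor"],
    "img_003": ["normal", "tumor", "artifact"],
}
print(aggregate_labels(crowd_labels))
# {'img_001': 'tumor', 'img_002': 'tumor'}
```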

Among the three studies evaluating crowdsourcing to spur smoking cessation, one found the approach ineffective [50] and one found an increase in smoking cessation after the contest [51]. For quantifying malaria parasitemia, crowdsourcing was found to be effective in both studies [15, 58]. Two studies found that crowdsourcing could effectively identify malaria species [59, 60]. Two studies examined crowdsourcing to enhance identification of seizures, and both found it effective [61, 62].

Discussion

Our systematic review identified crowdsourcing approaches using a variety of techniques and in different medical contexts. These data suggest that crowdsourcing may be a useful tool in many settings. Evidence was most robust on crowdsourcing for evaluating surgical skills, increasing HIV testing, and organizing layperson-assisted out-of-hospital CPR.

Strengths and limitations of study

Strengths of this systematic review include the following: an extensive search algorithm developed by an academic librarian with expertise in this field; duplicate assessment of citations, abstracts, and full texts; inclusion of several outcomes relevant to patients, physicians, and policy makers; and use of the GRADE approach to evaluate the evidence. Limitations of our review largely reflect problems with the individual studies that we included. First, the many differences in crowdsourced interventions and their measurement made it difficult to pool data. Second, because crowdsourcing is an emergent approach to health problems, terminology is not standardized and many potential search terms could identify crowdsourcing research studies, so some relevant studies may have been missed. Third, few studies included data on cost and feasibility as outcomes. Fourth, the data included many observational studies and had other methodological limitations. Fifth, the large majority of studies were conducted in high-income countries, highlighting the need for more research focused on LMIC settings.

In comparison with previous systematic reviews [18, 22], we included many more studies. This reflects the substantial growth in the field of crowdsourcing over the past several years. Our review helps to define this emerging approach, with greater rigor than earlier reviews. We included outcomes (cost, feasibility) that were not examined in other systematic reviews.

Evidence from 17 observational studies examining crowdsourcing to evaluate surgical skills suggests the usefulness of this approach. Evaluating surgical skill is critical for surgeons at all levels of training. However, surgical skill evaluation can take months when relying on video assessment from qualified surgeons [63]. A crowdsourcing approach could increase the efficiency, timeliness, and thoroughness of feedback [33]. Crowdsourcing is now routinely used for surgical skill evaluation by the American Urological Association, BlueCross BlueShield, and over twenty major medical centers [64]. A potential limitation of the evidence is that the data to support this approach have come exclusively from high-income countries. Further research on crowdsourcing for surgical skill evaluation in low- and middle-income countries is needed.

Data from seven studies, including two RCTs, also suggest that crowdsourcing is an effective and cost-saving method for creating sexual health messages. The utility of crowdsourcing in this field may be related to the extent to which social and behavioral norms influence the effectiveness of sexual health interventions. The extensive community engagement involved in crowdsourcing may help to improve the acceptability of the intervention among key affected populations by drawing directly upon community member perspectives [45, 46, 48]. Based on the evidence that crowdsourcing approaches can effectively promote sexual health, several local, regional and global policy-makers have recommended this practice [10, 65]. The UNICEF/UNDP/World Bank/WHO Special Programme for Research and Training in Tropical Diseases has used crowdsourcing in several projects [21, 66].

Six studies evaluated layperson-facilitated out-of-hospital CPR. These included two RCTs and four observational studies, all conducted in high-income countries, which indicate that crowdsourcing approaches to out-of-hospital CPR may increase CPR initiation but may not decrease the time to CPR initiation. A scientific statement from the American Heart Association identified crowdsourcing approaches to increase out-of-hospital CPR as a priority area [67]. These approaches require telecommunication infrastructure and emergency medical services that make LMIC implementation more difficult, although increasing smartphone penetration presents an opportunity for user-friendly apps.

We also found that crowdsourcing may be useful in the development of artificial intelligence projects. Four studies annotated medical data in order to train machine learning algorithms [53, 55,56,57]. Because crowdsourcing solicits input from large numbers of people, the resulting big data may provide a platform for machine learning. In addition, one open challenge was able to effectively develop a machine learning algorithm [54].

Our systematic review has implications for applying crowdsourcing approaches to inform health policy and research. From a policy perspective, the diverse LMIC settings and relatively low cost in the six sexual health message studies suggest that crowdsourcing for developing sexual health messages may be useful in other LMICs. A crowdsourcing approach could also inform the development of public health policy, for example, by developing strategies to scale up hepatitis testing and improve service delivery [68]. From a research perspective, the lack of robust studies suggests the need for more randomized controlled trials with clinical outcomes. This is a major gap in the literature that requires attention. One example of an effective use of crowdsourcing in an RCT design is a recently completed large-scale, eight-city study of crowdsourcing to promote HIV testing [18], which demonstrated the value of crowdsourcing for enhancing public health campaigns. The data from this systematic review can be used to refine and standardize crowdsourcing approaches for specific healthcare contexts.

This systematic review collected evidence from a broad range of topics in health and medicine where crowdsourcing has been implemented and evaluated. Crowdsourcing breaks new ground in health and medical research, introducing the potential for mass community engagement and community-driven interventions.

Conclusions

This systematic review found a wide range of evidence supporting the use of crowdsourcing in medicine. We found more robust research studies evaluating surgical skills, organizing out-of-hospital layperson CPR, and creating sexual health messages. These studies demonstrate a growing base of evidence to inform the use of crowdsourcing in artificial intelligence and related medical research. In addition, these studies suggest that crowdsourcing can broaden public engagement in medical research because members of the public can submit ideas, judge submissions, and serve on organizing committees. Further implementation and evaluation of crowdsourcing approaches are warranted.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CPR: Cardiopulmonary resuscitation
HIV: Human immunodeficiency virus
LMICs: Low- and middle-income countries
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: Randomized controlled trial

References

  1. Tucker JD, Day S, Tang W, Bayus B. Crowdsourcing in medical research: concepts and applications. PeerJ. 2019;7:e6762.


  2. Brooks SC. Community uptake of PulsePoint: Using smartphones to crowdsource basic life support for victims of out-of-hospital cardiac arrest. CJEM. 2013;15(Suppl):73.


  3. Narikawa K, Sakamoto T, Kubota K, Suzukawa M, Yonekawa C, Yamashita K, et al. Predictability of the call triage protocol to detect if dispatchers should activate community first responders. Prehosp Disaster Med. 2014;29:484–8.


  4. Ringh M, Fredman D, Nordberg P, Stark T, Hollenberg J. Mobile phone technology identifies and recruits trained citizens to perform CPR on out-of-hospital cardiac arrest victims prior to ambulance arrival. Resuscitation. 2011;82:1514–8.


  5. Ringh M, Rosenqvist M, Hollenberg J, Jonsson M, Fredman D, Nordberg P, et al. Mobile-phone dispatch of laypersons for CPR in out-of-hospital cardiac arrest. N Engl J Med. 2015;372:2316–25.


  6. Scholten AC, van Manen JG, van der Worp WE, Ijzerman MJ, Doggen CJ. Early cardiopulmonary resuscitation and use of automated external defibrillators by laypersons in out-of-hospital cardiac arrest using an SMS alert service. Resuscitation. 2011;82:1273–8.


  7. Zanner R, Wilhelm D, Feussner H, Schneider G. Evaluation of M-AID, a first aid application for mobile phones. Resuscitation. 2007;74:487–94.


  8. Budge EJ, Tsoti SM, Howgate DJ, Sivakumar S, Jalali M. Collective intelligence for translational medicine: crowdsourcing insights and innovation from an interdisciplinary biomedical research community. Ann Med. 2015;47:570–5.


  9. Brabham DC, Ribisl KM, Kirchner TR, Bernhardt JM. Crowdsourcing applications for public health. Am J Prev Med. 2014;46:179–87.


  10. Hildebrand M, Ahumada C, Watson S. CrowdOutAIDS: crowdsourcing youth perspectives for action. Reprod Health Matters. 2013;21:57–68.


  11. Short RG, Middleton D, Befera NT, Gondalia R, Tailor TD. Patient-centered radiology reporting: using online crowdsourcing to assess the effectiveness of a web-based interactive radiology report. J Am Coll Radiol. 2017;14:1489–97.


  12. Lee YJ, Arida JA, Donovan HS. The application of crowdsourcing approaches to cancer research: a systematic review. Cancer Med. 2017;6:2595–605.


  13. Beres LK, Winskell K, Neri EM, Mbakwem B, Obyerodhyambo O. Making sense of HIV testing: social representations in young Africans’ HIV-related narratives from six countries. Glob Public Health. 2013;8:890–903.


  14. Winskell K, Beres LK, Hill E, Mbakwem BC, Obyerodhyambo O. Making sense of abstinence: social representations in young Africans’ HIV-related narratives from six countries. Cult Health Sex. 2011;13:945–59.


  15. Feng S, Woo MJ, Kim H, Kim E, Ki S, Shao L, et al. A game-based crowdsourcing platform for rapidly training middle and high school students to perform biomedical image analysis. In: Optics and Biophotonics in Low-Resource Settings II. SPIE; 2016. https://doi.org/10.1117/12.2212310.

  16. Chen C, White L, Kowalewski T, Aggarwal R, Lintott C, Comstock B, et al. Crowd-sourced assessment of technical skills: a novel method to evaluate surgical performance. J Surg Res. 2014;187:65–71.


  17. Ong JJ, Bilardi JE, Tucker JD. Wisdom of the crowds: crowd-based development of a logo for a conference using a crowdsourcing contest. Sex Transm Dis. 2017;44:630–6.


  18. Pan SW, Stein G, Bayus B, Tang W, Mathews A, Wang C, et al. Systematic review of innovation design contests for health: spurring innovation and mass engagement. BMJ Innovations. 2017;3:227–37.


  19. Mathews A, Farley S, Blumberg M, Knight K, Hightow-Weidman L, Muessig K, et al. HIV cure research community engagement in North Carolina: a mixed-methods evaluation of a crowdsourcing contest. J Virus Erad. 2017;3:223–8.


  20. Arora NK, Mohapatra A, Gopalan HS, Wazny K, Thavaraj V, Rasaily R, et al. Setting research priorities for maternal, newborn, child health and nutrition in India by engaging experts from 256 indigenous institutions contributing over 4000 research ideas: a CHNRI exercise by ICMR and INCLEN. J Glob Health. 2017;7:011003.


  21. WHO/TDR/SESH/SIHI. Crowdsourcing in Health and Health Research: A Practical Guide. Geneva: WHO/TDR; 2018. Available at: https://www.who.int/tdr/publications/year/2018/crowdsourcing-practical-guide/en/


  22. Ranard BL, Ha YP, Meisel ZF, Asch DA, Hill SS, Becker LB, et al. Crowdsourcing--harnessing the masses to advance health and medicine, a systematic review. J Gen Intern Med. 2014;29:187–203.


  23. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg. 2010;8:336–41.


  24. Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0. The Cochrane Collaboration; 2011.


  25. Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.


  26. Schünemann H, Brożek J, Guyatt G, Oxman A, editors. Section 5.2.1: Study limitations. In: GRADE Handbook; 2013.


  27. Barak Ventura R, Nakayama S, Raghavan P, Nov O, Porfiri M. The role of social interactions in motor performance: feasibility study toward enhanced motivation in telerehabilitation. J Med Internet Res. 2019;21:e12708.


  28. Chrisinger BW, King AC. Stress experiences in neighborhood and social environments (SENSE): a pilot study to integrate the quantified self with citizen science to improve the built environment and health. Int J Health Geogr. 2018;17:17.


  29. Kowalewski TM, Comstock B, Sweet R, Schaffhausen C, Menhadji A, Averch T, et al. Crowd-sourced assessment of technical skills for validation of basic laparoscopic urologic skills tasks. J Urol. 2016;195:1859–65.


  30. Maier-Hein L, Mersmann S, Kondermann D, Bodenstedt S, Sanchez A, Stock C, et al. Can masses of non-experts train highly accurate image classifiers? A crowdsourcing approach to instrument segmentation in laparoscopic images. Med Image Comput Comput Assist Interv. 2014;17:438–45.


  31. Malpani A, Vedula SS, Chen CC, Hager GD. A study of crowdsourced segment-level surgical skill assessment using pairwise rankings. Int J Comput Assist Radiol Surg. 2015;10:1435–47.


  32. Peabody J, Miller D, Lane B, Sarle R, Brachulis A, Linsell S, et al. PD30-05 wisdom of the crowds: use of crowdsourcing to assess surgical skill of robot-assisted radical prostatectomy in a statewide surgical collaborative. J Urol. 2015;193:e655–e6.


  33. Polin MR, Siddiqui NY, Comstock BA, Hesham H, Brown C, Lendvay TS, et al. Crowdsourcing: a valid alternative to expert evaluation of robotic surgery skills. Am J Obstet Gynecol. 2016;215:644.e1–7.


  34. Powers MK, Boonjindasup A, Pinsky M, Dorsey P, Maddox M, Su LM, et al. Crowdsourcing assessment of surgeon dissection of renal artery and vein during robotic partial nephrectomy: a novel approach for quantitative assessment of aurgical performance. J Endourol. 2016;30:447–52.


  35. Vernez SL, Huynh V, Osann K, Okhunov Z, Landman J, Clayman RV. C-SATS: assessing surgical skills among urology residency applicants. J Endourol. 2017;31:S95–s100.


  36. White LW, Lendvay TS, Holst D, Borbely Y, Bekele A, Wright A. Using crowd-assessment to support surgical training in the developing world. J Am Coll Surg. 2014;219:e40.


  37. Brady CJ, Villanti AC, Pearson JL, Kirchner TR, Gupta OP, Shah CP. Rapid grading of fundus photographs for diabetic retinopathy using crowdsourcing. J Med Internet Res. 2014;16:e233.


  38. Deal SB, Lendvay TS, Haque MI, Brand T, Comstock B, Warren J, et al. Crowd-sourced assessment of technical skills: an opportunity for improvement in the assessment of laparoscopic surgical skills. Am J Surg. 2016;211:398–404.


  39. Ghani KR, Miller DC, Linsell S, Brachulis A, Lane B, Sarle R, et al. Measuring to improve: peer and crowd-sourced assessments of technical skill with robot-assisted radical prostatectomy. Eur Urol. 2016;69:547–50.


  40. Holst D, Kowalewski TM, White LW, Brand TC, Harper JD, Sorensen MD, et al. Crowd-sourced assessment of technical skills: differentiating animate surgical skill through the wisdom of crowds. J Endourol. 2015;29:1183–8.


  41. Holst D, Kowalewski TM, White LW, Brand TC, Harper JD, Sorenson MD, et al. Crowd-sourced assessment of technical skills: an adjunct to urology resident surgical simulation training. J Endourol. 2015;29:604–9.


  42. Aghdasi N, Bly R, White LW, Hannaford B, Moe K, Lendvay TS. Crowd-sourced assessment of surgical skills in cricothyrotomy procedure. J Surg Res. 2015;196:302–6.


  43. Keller S. Media can contribute to better health. Network. 1997;17:29–31.


  44. Catallozzi M, Ebel SC, Chavez NR, Shearer LS, Mindel A, Rosenthal SL. Understanding perceptions of genital herpes disclosure through analysis of an online video contest. Sex Transm Infect. 2013;89:650–2.


  45. Zhang Y, Kim JA, Liu F, Tso LS, Tang W, Wei C, et al. Creative contributory contests to spur innovation in sexual health: 2 cases and a guide for implementation. Sex Transm Dis. 2015;42:625–8.


  46. Tang W, Mao J, Liu C, Mollan K, Zhang Y, Tang S, et al. Reimagining health communication: a noninferiority randomized controlled trial of crowdsourced intervention in China. Sex Transm Dis. 2019;46:172–8.


  47. Tang W, Wei C, Cao B, Wu D, Li KT, Lu H, et al. Crowdsourcing to expand HIV testing among men who have sex with men in China: a closed cohort stepped wedge cluster randomized controlled trial. PLoS Med. 2018;15:e1002645.


  48. Tang W, Han L, Best J, Zhang Y, Mollan K, Kim J, et al. Crowdsourcing HIV testing: a pragmatic, non-inferiority randomized controlled trial in China. Clin Infect Dis. 2016;62:1436–42.


  49. Morris RR, Schueller SM, Picard RW. Efficacy of a web-based, crowdsourced peer-to-peer cognitive reappraisal platform for depression: randomized controlled trial. J Med Internet Res. 2015;17:e72.


  50. Cella DF, Tulsky DS, Sarafian B, Thomas CR Jr, Thomas CR Sr. Culturally relevant smoking prevention for minority youth. J Sch Health. 1992;62:377–80.


  51. Croghan IT, Campbell HM, Patten CA, Croghan GA, Schroeder DR, Novotny PJ. A contest to create media messages aimed at recruiting adolescents for stop smoking programs. J Sch Health. 2004;74:325–8.


  52. Davis RM. Kids campaign against tobacco. Tob Control. 2003;12:243–4.


  53. Kalantarian H, Jedoui K, Washington P, Tariq Q, Dunlap K, Schwartz J, et al. Labeling images with facial emotion and the potential for pediatric healthcare. Artif Intell Med. 2019;98:77–86.


  54. Mak RH, Endres MG, Paik JH, Sergeev RA, Aerts H, Williams CL, et al. Use of crowd innovation to develop an artificial intelligence–based solution for radiation therapy targeting. JAMA Oncol. 2019;5:654–61.


  55. Cocos A, Qian T, Callison-Burch C, Masino AJ. Crowd control: effectively utilizing unscreened crowd workers for biomedical data annotation. J Biomed Inform. 2017;69:86–92.


  56. Heim E, Ross T, Seitel A, Marz K, Stieltjes B, Eisenmann M, et al. Large-scale medical image annotation with crowd-powered algorithms. J Med Imaging (Bellingham). 2018;5:034002.


  57. Lossio-Ventura JA, Hogan W, Modave F, Guo Y, He Z, Yang X, et al. OC-2-KB: integrating crowdsourcing into an obesity and cancer knowledge base curation system. BMC Med Inform Decis Mak. 2018;18:55.


  58. Luengo-Oroz MA, Arranz A, Frean J. Crowdsourcing malaria parasite quantification: an online game for analyzing images of infected thick blood smears. J Med Internet Res. 2012;14:e167.


  59. Linares M, Postigo M, Cuadrado D, Ortiz-Ruiz A, Gil-Casanova S, Vladimirov A, et al. Collaborative intelligence and gamification for on-line malaria species differentiation. Malar J. 2019;18:21.


  60. Ortiz-Ruiz A, Postigo M, Gil-Casanova S, Cuadrado D, Bautista JM, Rubio JM, et al. Plasmodium species differentiation by non-expert on-line volunteers for remote malaria field diagnosis. Malar J. 2018;17:54.


  61. Baldassano SN, Brinkmann BH, Ung H, Blevins T, Conrad EC, Leyde K, et al. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings. Brain. 2017;140:1680–91.


  62. Kuhlmann L, Karoly P, Freestone DR, Brinkmann BH, Temko A, Barachant A, et al. Epilepsyecosystem.org: crowd-sourcing reproducible seizure prediction with long-term human intracranial EEG. Brain. 2018;141:2619–30.


  63. Mitry D, Peto T, Hayat S, Morgan JE, Khaw KT, Foster PJ. Crowdsourcing as a novel technique for retinal fundus photography classification: analysis of images in the EPIC Norfolk cohort on behalf of the UK biobank eye and vision consortium. PLoS One. 2013;8:e71154.


  64. CSATS 2017 Website. Available at: http://www.csats.com/customers-main.

  65. WHO-WPR Regional Office. Biregional expert consultation on advancing implementation science on HIV/AIDS in Asia. Manila: WHO Asia Pacific Regional Office; 2015. https://iris.wpro.who.int/bitstream/handle/10665.1/13240/RS_2015_GE_62_JPN_eng.pdf.

  66. Liu E, Iwelunmor J, Gabagaya G, Anyasi H, Leyton A, Goraleski KA, et al. Women's global health leadership in LMICs. Lancet Glob Health. 2019;7:e1172–e3.


  67. Rumsfeld JS, Brooks SC, Aufderheide TP, Leary M, Bradley SM, Nkonde-Price C, et al. Use of Mobile devices, social media, and crowdsourcing as digital strategies to improve emergency cardiovascular care: a scientific statement from the American Heart Association. Circulation. 2016;134:e87–e108.


  68. Tucker JD, Meyers K, Best J, Kaplan K, Pendse R, Fenton KA, et al. The HepTestContest: a global innovation contest to identify approaches to hepatitis B and C testing. BMC Infect Dis. 2017;17:701.



Acknowledgements

We would like to thank the Social Entrepreneurship to Spur Health (SESH) team in Guangzhou for administrative support. We would also like to thank the TDR Social Innovation in Health Initiative (SIHI) partners for helpful discussions and guidance. The SIHI network is supported by TDR, the Special Programme for Research and Training in Tropical Diseases, co-sponsored by UNDP, UNICEF, the World Bank and WHO. TDR is able to conduct its work thanks to the commitment and support from a variety of funders. For the full list of TDR donors, please see: https://www.who.int/tdr/about/funding/en/. TDR receives additional funding from Sida, the Swedish International Development Cooperation Agency, to support SIHI.

Funding

This study was commissioned by the UNICEF/UNDP/World Bank/WHO Special Programme for Research and Training in Tropical Diseases. This project was also supported by NICHD UG3HD096929 and NIAID K24AI143471. The funder of the study had no role in the study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.

Author information

Authors and Affiliations

Authors

Contributions

CW, LH, GS, and JT contributed equally to this work. CW, GS, BB, and JT conceived of the idea for this study. JT, SD, CB-D, AM, and BB coordinated the systematic review. CW, LH, GS, and JT wrote the first draft of the manuscript. JW, GS, and CW designed the search strategy. CW, LH, GS, SD, and JT screened abstracts and full texts. LH, CB-D, AM, and JO acquired the data and judged risk of bias. PZ undertook the pooling and helped to develop the statistical plan. RC developed the risk of bias plan and advised on study methodology. SW, AL, and AC developed the infographics and revised tables. JT organized the project and is the guarantor. All authors interpreted the data analysis and critically revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Joseph D. Tucker.

Ethics declarations

Ethics approval and consent to participate

Not required.

Consent for publication

Not Applicable.

Competing interests

The authors declare that they have no competing interests.

Supplementary information

Additional file 1: Tables S1-S7.

Search algorithms for PubMed, Embase, CINAHL, Web of Science, PsycINFO, Cochrane Library, and ABI/Inform.

Additional file 2: Table S8.

Bias assessment of 17 studies examining a crowdsourcing approach to surgical technique evaluation.

Additional file 3: Table S9.

GRADE evidence profile for assessment of surgical performance.

Additional file 4: Figure S10.

Forest plots for pooled RCT data examining smoking cessation studies (top panel), depression studies (middle panel), and sexual health studies (bottom panel).

Additional file 5: Table S11.

Bias assessment of four non-RCT studies evaluating innovation design contests to develop sexual health messages.

Additional file 6: Table S12.

Bias assessment of two RCT studies evaluating innovation design contests to develop sexual health messages.

Additional file 7: Table S13.

GRADE evidence profile for studies evaluating innovation design contests to develop sexual health messages.

Additional file 8: Table S14.

Bias assessment of non-RCT studies exploring out-of-hospital CPR.

Additional file 9: Table S15.

Bias assessment of RCT studies exploring out-of-hospital CPR.

Additional file 10: Table S16.

GRADE evidence profile for studies exploring out-of-hospital CPR.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Wang, C., Han, L., Stein, G. et al. Crowdsourcing in health and medical research: a systematic review. Infect Dis Poverty 9, 8 (2020). https://doi.org/10.1186/s40249-020-0622-9
