
Challenges to evaluating complex interventions: a content analysis of published papers

Abstract

Background

There is continuing interest among practitioners, policymakers and researchers in the evaluation of complex interventions, stemming from the need to further develop the evidence base on the effectiveness of healthcare and public health interventions, and from an awareness that evaluation becomes more challenging when interventions are complex.

We undertook an analysis of published journal articles in order to identify aspects of complexity described by writers, the fields in which complex interventions are being evaluated and the challenges experienced in design, implementation and evaluation. This paper outlines the findings of this documentary analysis.

Methods

The PubMed electronic database was searched for the ten-year period January 2002 to December 2011, using the term “complex intervention*” in the title and/or abstract of a paper. We extracted text from the papers into a table and carried out a thematic analysis to identify authors’ descriptions of challenges faced in developing, implementing and evaluating complex interventions.

Results

The search resulted in a sample of 221 papers, of which the full text of 216 was obtained and 207 were included in the analysis. The 207 papers broadly cover clinical, public health and methodological topics. Challenges described included the content and standardisation of interventions, the impact of the people involved (staff and patients), the organisational context of implementation, the development of outcome measures, and evaluation.

Conclusions

Our analysis of these papers suggests that more detailed reporting of information on outcomes, context and intervention is required for complex interventions. Future revisions to reporting guidelines for both primary and secondary research may need to take aspects of complexity into account to enhance their value to both researchers and users of research.


Background

There is continuing interest among practitioners, policymakers and researchers in the evaluation of complex interventions. This interest stems from the need to further develop the evidence base on the effectiveness of healthcare and public health interventions, and an awareness that evaluation becomes more challenging as interventions move along the spectrum from ‘simple’ towards more complex interventions [1]. This focus on complexity is also driven by ongoing debate about the most appropriate methods for evaluating health systems, and the recognition that it is important to know not just whether health system interventions ‘work’, but also about when, why, how and in what circumstances such interventions work well [2, 3].

A further stimulus has been the Medical Research Council’s (MRC) ‘A framework for development and evaluation of RCTs for complex interventions to improve health’, originally published in 2000 [4] and revised and extended in 2008 [5]. This guidance was published in response to the difficulties faced by those attempting to develop complex interventions and evaluate their impact. It describes complex interventions as being ‘built up from a number of components, which may act both independently and inter-dependently’ [4]. These components include behaviours, behaviour parameters and methods of organising those behaviours, and they may have an effect at individual patient level, organisational or service level or population level (or all of these in some cases). The MRC’s 2008 guidance also emphasises the numbers of components and their interactions, behaviours, organisational levels and outcomes, and goes further than the 2000 framework in outlining the variability of desired outcomes and the degree to which flexibility or tailoring of the intervention is permitted. Both documents highlight the importance of establishing both whether an intervention is effective and how it works.

The term ‘complex intervention’ is now used extensively in the academic health literature to describe both health service and public health interventions. Complex interventions have been the topic of numerous conferences and meetings, the focus of funding calls, and will be the subject of a new chapter in the Cochrane Handbook for Systematic Reviews of Interventions [6]. The common usage of the term indicates increasing recognition of complexity and its implications for the development and evaluation of interventions. It may also be the case that, as the term has achieved wider application, it has come to be used strategically by researchers to add authority and currency to funding proposals and academic articles. However, it is not always clear that ‘complexity’ is being used to refer to the same things, nor what measures researchers are taking to evaluate it. It has been suggested, for example, that what is described as ‘complexity’ is actually just ‘complicatedness’ – a very different concept [7].

We undertook an analysis of published journal articles in the field of health in which complexity was an important element. Our aim was to identify the aspects of complexity described by writers; the fields in which complex interventions are being evaluated; and to describe challenges experienced due to the complexity of interventions and how authors dealt with these. This paper outlines the findings of this documentary analysis focusing, in particular, on the challenges of designing, implementing and evaluating complex interventions described by authors.

Methods

Search strategy

The PubMed electronic database was searched for journal articles published in the ten-year period January 2002 to December 2011. The start date was chosen to allow enough time for papers referring to the MRC guidance (2000) to have been published. The search identified papers with the term “complex intervention*” in the title and/or abstract; papers not written in English were excluded. Research reports, trial protocols, systematic reviews, meta-analyses, discussion pieces, published oral presentations and letters were included. We then undertook a content analysis of the papers to identify authors’ descriptions of challenges faced in developing, implementing and evaluating complex interventions. The search was undertaken systematically as described above, but we did not conduct a critical analysis of each paper because our principal aim was to provide a snapshot of current practice rather than a comprehensive review.
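
As a minimal sketch of how this search might be reproduced programmatically, the snippet below runs the query through Biopython’s Entrez wrapper. The authors do not state what tooling they used, so the library choice, the placeholder email address and the retmax value are assumptions; only the search term and the date range come from the text above.

```python
# Sketch only: the paper does not describe how the search was executed,
# so Biopython, the contact address and retmax are illustrative assumptions.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address; placeholder

# "complex intervention*" in title or abstract, English-language papers only.
query = '"complex intervention*"[Title/Abstract] AND English[Language]'

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",        # filter on publication date
    mindate="2002/01/01",   # January 2002 ...
    maxdate="2011/12/31",   # ... to December 2011
    retmax=500,             # comfortably above the 221 hits reported below
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} papers found; first PMIDs: {record['IdList'][:5]}")
```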

Analysis

Having read the papers, we extracted text from each into a table. Columns included title, author, study topic (e.g. clinical, public health), definitions of complex interventions used by authors, problems identified by authors (using the search terms ‘challenge’, ‘barrier’, ‘difficult’ and ‘limit’ to identify the difficulties described), and cited literature on complex interventions.
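
To make the extraction and term-searching steps concrete, the sketch below shows one possible implementation. The column names, the CSV format and the stem-based matching are our assumptions; it illustrates, rather than reproduces, the procedure described above.

```python
# Illustrative sketch only: the paper names the extraction columns and the
# search terms (challenge, barrier, difficult, limit) but not the software
# used, so every concrete detail here is an assumption.
import csv
import re

FIELDS = [
    "title",
    "authors",
    "study_topic",                 # e.g. clinical, public health, methodological
    "definition_of_complex_intervention",
    "problems_identified",         # sentences flagged by the term search below
    "cited_ci_literature",         # cited literature on complex interventions
]

# Stem matching so that e.g. "difficulties" and "limitations" are also caught.
CHALLENGE_TERMS = re.compile(r"\b(challeng|barrier|difficult|limit)\w*", re.IGNORECASE)

def flag_difficulty_sentences(full_text: str) -> list[str]:
    """Return the sentences of a paper that mention any challenge term."""
    sentences = re.split(r"(?<=[.!?])\s+", full_text)
    return [s for s in sentences if CHALLENGE_TERMS.search(s)]

sample_text = ("Recruitment proved difficult in both arms. The team met weekly. "
               "Organisational barriers limited delivery of the intervention.")

row = {
    "title": "Example paper title",
    "authors": "Smith J; Jones A",
    "study_topic": "clinical",
    "definition_of_complex_intervention": "components acting independently and inter-dependently",
    "problems_identified": " | ".join(flag_difficulty_sentences(sample_text)),
    "cited_ci_literature": "MRC 2000; Craig 2008",
}

with open("extraction_table.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(row)
```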

The process of analysing the papers’ content and identifying challenges described by authors produced a number of themes which we used to structure the results section. These are: intervention design (descriptions of challenges derived from the nature or content of the intervention); intervention implementation (challenges in implementing complex interventions); contextual characteristics (aspects of context that may influence implementation or evaluation of complex interventions); outcomes (reflecting the difficulties posed by the outcomes of complex interventions) and evaluation (describing challenges to evaluation). We are not suggesting that these themes are mutually exclusive. The design and content of an intervention, for example, are influenced by the context in which it will be implemented and the methodology used to evaluate it. Quotes from papers were selected to illustrate issues raised by authors.

Results

The search resulted in a sample of 221 papers, of which the full text of 216 (98%) was obtained and 207 were included in the analysis. Nine papers were excluded because their subject matter was not relevant for our purposes. A small number of the included papers were published online in 2011 but in print in 2012.

The 207 papers broadly cover clinical (45%), methodological (27%), health promotion (23%) and public health (3%) topics with a small number of ‘others’ (1%). All those included in the analysis are listed in Table 1. Some papers focus on particular health conditions, such as cancer, diabetes, HIV and mental illness; some on health and social interventions, including palliative care services, complementary therapies and decision aids; and others on methodological and theoretical issues such as causality, the use of normalisation process theory, and approaches to health promotion.

Table 1 Papers analysed by year of publication

Use of MRC guidance

As noted above, MRC guidance on the development and evaluation of complex interventions [4, 5] has been available since 2000. Without making assumptions about whether or not citation of these guidance documents or other key references reflects the content or quality of the papers listed, we report the proportions that cited them. In all, 31% (n = 64) of papers cited the 2000 guidance, the 2008 guidance or both. Nearly a quarter (23%) cited Campbell et al.’s paper, ‘Framework for design and evaluation of complex interventions to improve health’ [8], which accompanied the 2000 guidance, and 12.5% (n = 26) cited Craig et al.’s paper [1], which accompanied the launch of the 2008 MRC guidance. Other highly cited papers were: Campbell et al.’s 2007 paper ‘Designing and evaluating complex interventions to improve health care’ [9] (n = 16), Hawe, Shiell and Riley’s 2004 ‘Complex interventions: how “out of control” can a randomised controlled trial be?’ [10] (n = 15) and Oakley et al.’s 2006 ‘Process evaluation in randomised controlled trials of complex interventions’ [11] (n = 12). Ninety papers (43%) cited neither the guidance documents nor any of the above key references.

Intervention design

Here we outline the challenges described by writers in deciding upon and standardising the content of interventions which may include a number of components.

The value of a theoretical understanding

The MRC guidance advises that intervention design should be based on a theoretical understanding of how an intervention causes change. Some papers focused on the development of an explanatory framework or rationale to inform intervention design and evaluation. These included, for example, a paper aimed at identifying and differentiating the components of two approaches to acupuncture (biomedical and traditional). Its authors describe using a ‘realist review’ approach to develop an analytical framework for their review:

‘Its first step is to uncover or identify the essential or implicit theory or theories that underlie an intervention, that is, how the intervention is thought or meant to work and its expected impacts.’ [12]

Another research team described the process of developing an optimal complex physical therapy intervention for patients with hip osteoarthritis ‘in light of current knowledge and expert opinion’ given the lack of understanding about how individual components of the therapy affect the disease process [13]. In this case, the development of a theoretical framework meant collecting evidence to help understand the aetiopathogenesis and physical impairments associated with the condition.

Others explained how they used existing models and theories to inform interventions and evaluation design. Drawing on previous studies, Borglin, Gustafsson and Krona [14] describe using the Theory of Planned Behaviour to develop a series of workshops for nurses to improve pain management for cancer patients. The Normalisation Process Model [15] was used as a theoretical framework in two RCTs in maternity care and was reported to be of value in understanding organisational contexts into which new models of care are introduced [16].

‘ …the use of this theoretical model will deepen our understanding of which factors contribute to the legitimacy of an intervention and thus the likelihood that it will be sustainable.’ [16]

Even if evidence is available, it may not be possible to predict which elements of an intervention will be acceptable to health care staff and patients, have the desired effect and be sustainable. It may also be difficult to define exactly what will constitute the intervention:

‘…developing precise inclusion criteria for such complex interventions is more problematic, because by definition it is not clear a priori which mechanisms have to be in place in order to define an intervention as “collaborative care”.’ [17]

Nonetheless, the authors report that developing a theoretical framework early in a study enables attention to be focused on what needs to be done to plan, implement and sustain an intervention and what is less important.

Standardisation and treatment fidelity

Implementing any intervention in a standard format across sites is not straightforward, but standardising complex, multiple-treatment interventions, which may have a number of interacting components, is harder still; some researchers argue that standardising the form of an intervention rather than its function may not be appropriate [18]. Two main challenges to standardisation were identified in these papers: on the supply side, the likelihood of variation in the delivery of services (e.g. [19]), and, on the demand side, the wide range of patients’ diagnoses, stages of disease, needs and preferences (e.g. [20]).

‘Because of heterogeneity regarding settings, experiences, training, etc. and lack of standardisation, it is very difficult to compare different HPCTs [hospital palliative care teams]; hence the need for careful definition.’ [21]

‘In the example given […] of a Computer Decision Support System, is the intervention the software or the combination of the software and the staff working in the call centre?’ [15]

Attempting to standardise an intervention to meet the needs of researchers may lead to perverse outcomes:

‘The advantage of standardisation [in acupuncture interventions] must be offset against the disadvantage that such treatments, when obviously inadequate or inappropriate, cannot be modified, as would normally occur in routine clinical practice.’ [22]

A degree of flexibility in the design and implementation of interventions was advocated by a number of writers with the aim of ensuring that interventions could be adapted to both local circumstances and to patients’ needs.

‘…it is important to retain some flexibility, allowing adaptation of the intervention to the local context and ensuring the intervention can be tailored for individual OHC [oral health care] needs.’ [23]

As well as disparity in delivery, differences in the frequency of interventions and lack of a precise definition of the start of treatment were described (e.g. [24]). The MRC guidance [5] asserts that ‘any variation in the intervention needs recording, whether or not it is intended, so that fidelity can be assessed in relation to the degree of standardisation required by the study protocol’. Replicability would be compromised by undocumented variation.

In order to record how implementation is carried out on the ground, the authors of one paper (on the topic of secondary prevention of heart disease in general practice) suggest using a range of treatment fidelity procedures to monitor the intervention and to capture the processes involved. These procedures enhance validity and reliability with the aim of ‘reducing errors in the interpretation of study outcomes and attributing outcomes directly to the effect of the intervention’ [25]. Examples described include standardised training sessions, project manager observation, quality assurance visits to practices during intervention implementation, use of a structured recall system, research nurse observation of general practitioners and practice nurses during intervention consultations and use of practice and patient care plans to document the process of intervention delivery.

Intervention implementation

To implement an intervention one must think at an early stage about who will be responsible for what and in what setting [5]. In the case of complex interventions, there may be a number of individuals, institutions or agencies involved across several sites. After an intervention has been trialled or evaluated (and depending on the outcome), consideration should be given to its sustainability and the ease with which it can be integrated into usual service. In this section, we consider the challenges - ranging from the philosophical to the practical - identified by writers in implementing interventions.

‘Even when the concept of RRS [rapid response systems] is believed to be advantageous, the actual implementation entails overcoming a myriad of barriers: political, financial, educational, cultural, logistic, anthropological, and emotional.’ [26]

Structural and logistical obstacles may have an impact on effective implementation in the ‘real world’ where it is not always possible to control activities and outcomes.

‘Campus Watch has undergone many changes, both structural and functional, since it was introduced in 2007; its evolution has not been guided by an overarching design and modifications have occurred for reasons that have not always been well documented.’ [27]

Staffing issues

Those at the front line of ‘delivering’ an intervention may face time and resource difficulties or a lack of buy-in to the aims of the intervention, while there may be political and/or financial considerations further up the organisational hierarchy. The replication, regulation and sustainability of new practices in diverse teams across a number of sites can make heavy demands on staff, who may experience competing priorities if they are also involved in data collection for evaluation purposes:

‘There was no systematic exploration of midwives’ views of working in the models post RCT, or of the views of other stakeholders such as non-team midwives, managers and obstetric staff during or after completion of the team RCT, nor during the subsequent iterations of the team model. Therefore it is not possible to draw conclusions about why the original evaluated model was not sustained.’ [16]

‘In …the area where breastfeeding rates did not improve, health professional support for the project was weaker and relationships between midwives and health visitors were problematic.’ [28]

‘It is clear that teachers found it difficult to deliver the programme for a variety of logistic reasons (low morale, lack of support and competing priorities at school) and contextual reasons (difficulty teaching about sensitive issues, switching from their traditional teacher role, and lack of trust between pupils and teachers).’ [29]

Implementing an intervention uniformly may create difficulties for clinicians whose first aim is to provide the most effective care to patients. The papers present examples of treatment that deviated from the protocol because of decisions made by staff:

‘At least two control patients are known to have received more intensive physical therapy, i.e. muscle-strength training, than they would have otherwise. We believe that once the surgeons sensed that patients receiving intensive physical therapy were responding well, the surgeons were likely to have encouraged their patients to get more physical therapy, thus further diluting the impact of the intervention.’ [30]

Patient issues

A number of issues relating to patients were raised by authors. These included patients’ preferences, patient/staff interaction, and recruitment and retention to trials. Studies about the treatment of chronic illness, for example, emphasised the role of patients (and carers) in active management of health conditions [31, 32]. Less positively, one paper reported that, for a number of reasons, ‘despite initial willingness, after a few weeks some patients [suffering from psychosis] no longer wanted to receive therapy’ [33]. A review on the topic of patients with medically unexplained symptoms reported that patients distrusted doctors regarding emotional aspects of their problems while doctors were concerned about encouraging patient dependence [34].

Those conducting trials reported that recruitment and retention of participants may be negatively affected if the intervention targets patients who are severely ill or who are hard to reach. Examples reported included patients with advanced dementia and their carers [35], those receiving palliative care [21] and young drug users [36]. In the first example, unbiased comparisons could not be made between intervention and control groups because of sample attrition [35]. In their consideration of the strengths and weaknesses of a before-after study design, Simon and Higginson [21] offer suggestions for strategies (including inclusion of a control group in research design, time series approaches, and more robust outcome measures) to control and limit secular trends, bias and confounders. Garfein and colleagues [36] describe one method used to retain participants:

‘Given the anticipated difficulty in retaining young IDUs [intravenous drug users] for a longitudinal study, follow-up window periods were designed such that the need for high retention was balanced with the need for uniform intervals between the intervention and follow-up assessments.’

In evaluating an intervention aimed at high-utilising patients with medically unexplained symptoms, Lyles and colleagues [37] describe how they achieved their impressive retention rate of 98%:

‘Remunerating participants in recognition of their time commitment helped to maintain interest. However, consistent, clear communication from project staff and persistence in contacting participants were also important factors in enrolling and retaining subjects. We maintained a communication link with participants at intervals throughout the project.’

Contextual characteristics

Complex interventions, by their nature, are more likely than simpler ones to depend for their success on the context in which they are implemented [38]. Authors described the impact of structural, capacity, professional and political factors on their introduction. The most commonly cited contextual barrier to implementation was the organisational context. As one author put it:

‘The findings concur with previous studies, which suggest that organisational environment and culture, and client factors may influence occupational therapy practice.’ [39]

Organisational context encompassed a wide range of elements from the parochial to the regional or national level and included organisational cultures, such as hierarchies and professional boundaries, staffing arrangements, social, geographical and environmental barriers, and the impact of other simultaneous organisational changes. The organisational context could either help or hinder the implementation of an intervention – or do both at the same time.

‘More attention should be given to the systems into which policies and complex interventions intervene. Particularly how the negative consequences of the environment, resource shortages, organisational change, competing demands and leadership affect an organisation’s ability to effectively deliver an intervention.’ [40]

‘The difficulties of delivering complex interventions in inner city areas are well known to clinicians, and might be attributed variously to low levels of social support, high levels of deprivation, and relative residential instability. Such contextual disadvantages remain a therapeutic challenge.’ [33]

‘Although the changing of long-term entrenched practices of physicians and other professionals is known to be a difficult task, problem solving in expanding cycles was able to affect such a change and produce an effective cervical cancer screening programme with no increase in financial resources.’ [41]

Another example of an organisational barrier to implementation was lack of support for what were seen as demanding projects. GP practice staff, for example, were thought to have few incentives for engaging in thinking through and developing complex new service arrangements:

‘Furthermore the external environment was not a sufficiently supportive context for the scope of the proposed shared care developments: it was seen as “a big project”.’ [42]

Outcomes

Having established what outcome(s) an intervention is aiming to achieve, researchers face challenges in designing tools to effectively measure outcomes, understanding ‘the length and complexity of the causal chains linking intervention with outcome’ [5], explaining discrepancies between expected and observed outcomes, and capturing the long term characteristics of outcomes after a trial or study is concluded.

Multidimensional outcome measures

Outcomes are likely to be plural and multi-dimensional, spanning ‘the spectrum from mortality, morbidity, disability, to satisfaction and cost’ [43] as ‘restricting the success indicator to one single health or behavioural outcome leads to many unsolved questions about the success factors for, and barriers to, the effectiveness of the intervention’ [44]. Clinical pathways are aspects of complex interventions that may demand outcomes be measured across many domains including clinical, service, team, process and cost [38]. As well as breadth, outcome measures must take time into account and may be designed for the short, medium or long term or all three.

‘Given this degree of complexity identifying a single primary outcome measure to capture the impact of an OHC [oral health care] intervention is problematic. We would anticipate that a multifaceted OHC intervention would impact upon a range of components including for example dental referrals, staff knowledge and patients’ oral health.’ [23]

‘It is therefore critical that the impact of new models of care are rigorously evaluated, considering outcomes for women and infants as well as outcomes for midwives and other maternity care providers.’ [16]

Assessing outcomes

Apart from the difficulties in deciding upon measurable outcomes imposed by the complexity of interventions, writers noted that there is now an expectation that the bio-psycho-social aspects of interventions be measured as well as the clinical ones [45]. In palliative care, for example, patient experience is the primary outcome [46]. In general, it is argued that patient-centred outcomes, such as quality of life, as well as the views and experiences of staff, should be taken into account. Some authors suggested that methods of measuring outcomes did not always capture the positive impact of an intervention and, in some cases, described their use of qualitative data to measure patient experience (e.g. [47]).

‘The lack of an objective outcome was in contrast to subjective feedback from the study participants who felt that the intervention had produced a change in practice.’ [48]

‘Reliance on empirical and societal defined outcomes often hides success in terms of participant defined outcomes.’ [19]

Establishing ‘hard’ outcome measures was seen to be difficult in particular fields where the success of an intervention does not necessarily equate with patient improvement or survival.

‘The holistic approach of palliative care and its services causes some problems in defining clear outcomes and finding valid measurements.’ [21]

‘There is a lack of an accepted primary outcome regarding the use of decision aids. Possible categories to classify measures of effectiveness are knowledge, decision process (e.g., satisfaction and participation preference), decision outcomes (e.g., has a treatment decision been made, adherence), health status, and economic measures.’ [49]

Some writers admitted that it was not possible to attribute the ‘active ingredient’ [4] of a complex intervention to a particular component of its design:

‘If this complex intervention does reduce mortality the relative contributions of education, PEWS [paediatric early warning system] and MET [medical emergency team] to clinical effectiveness is unknown.’ [50]

‘In many cases, the effectiveness of training is more difficult to measure because a wide range of variables unrelated to the training intervention can mediate both the training process and the outcome. These variables need to be considered if it is to be established whether an outcome is due to the training intervention or other unrelated factors. For instance, variables related to the individual have been shown to mediate impact on outcomes like stress and burnout levels, and staff satisfaction.’ [51]

Evaluation

The process of evaluating health service interventions occurs before, during and after implementation. In this section, we highlight some important issues raised by authors but do not systematically describe the many research designs which are the subject of the papers themselves.

Formative and process evaluation

To assess the feasibility of an intervention, the 2008 MRC guidance suggests that ‘A mixture of qualitative and quantitative methods is likely to be needed, for example to understand barriers to participation and to estimate response rates’ [5]. As noted above, qualitative data are increasingly recognised as ‘an essential component of health services research’ [52], providing insights into the acceptability of interventions and their social consequences which cannot be measured by quantitative approaches. Formative evaluation – conducted to aid intervention design – can offer insights into the views and priorities of both patients and practitioners.

‘The key to the successful development of the complex intervention was the use of qualitative research that ensured that the intervention was based on data from interactions in ongoing trial recruitment appointments. Exploratory qualitative research of recruitment appointments in the Protect feasibility study showed that improvements to the presentation of study information increased rates of randomization from 30% to over 65%.’ [53]

Process evaluation is particularly important in multisite trials, ‘where the “same” intervention may be implemented and received in different ways’ [11].

‘Neither quantitative nor qualitative approaches alone would provide an adequate insight into the implementation of the intervention across all three levels of care, from the perspective of all involved and capture the information needed in relation to both effectiveness and feasibility issues.’ [23]

Discussion

Limitations

The number and range of papers discussed here are not comprehensive, given the search terms used and the database searched, and selection bias is therefore possible. However, we feel that enough papers were included for our purposes. We conducted a content analysis rather than a systematic review, which supported our aim of identifying aspects of complexity in health interventions, the fields in which they are implemented and the challenges experienced by researchers.

Summary of results

The literature on complex interventions is thick with descriptions of complex, challenging interventions, but thin on practical advice on how these challenges should be dealt with. In the papers we surveyed, authors pointed to the practical value of theory in determining which features of an intervention and its context are likely to be important in influencing outcomes and determining sustainability. They caution against defining and standardising the intervention too narrowly, following Hawe and colleagues’ lead (standardising on ‘function’ rather than ‘form’) [10]. This also means having procedures in place to document what is actually done under the heading of ‘the intervention’.

The interaction between intervention and context is frequently emphasised, and one aspect of context highlighted in several papers is the people involved, including staff and patients themselves. The MRC guidance notes that complexity may derive from interaction between patient or recipient and provider. The implication for implementation and evaluation is that (in the case of healthcare interventions) barriers at both levels should be considered and mitigated and, in the case of evaluation, relevant data collected. These barriers could also be built into the initial logic model driving the evaluation [54]. The papers also point to the wide range of contexts considered relevant, including professional boundaries and hierarchies, which do not often feature in descriptions of context but are clearly relevant in some of these examples. In one study the specific recommendation is made that attention should be given to the systems into which complex interventions are placed [40]. In practical terms this may mean describing those systems in detail and at different levels, and theorising on how they may affect the effectiveness of implementation.

Several studies point to a multiplicity of health and non-health outcomes as a source of complexity. In many of the papers which raise this as an issue, there is an implicit need for outcome measures – or a range of outcome approaches – capable of capturing outcomes across different dimensions and time scales. This may imply a move away from a focus on primary outcomes and a small number of secondary outcomes towards a much more multi-criteria form of assessment which acknowledges the multiple objectives of many complex interventions.

Conclusions

Implications

The above comments may have implications for reporting of studies of complex interventions. The quotes suggest that more detailed reporting of information on outcomes, context and intervention is required for complex interventions. However, reporting guidelines for quantitative studies may require further adaptation to enable adequate explanation of complex interventions, and the contexts within which they were implemented. Defining and describing context, for example, may prove particularly challenging and, given the inherent flexibility in complex interventions themselves, even defining the intervention may be difficult. Future revisions to reporting guidelines for both primary and secondary research may need to take aspects of complexity into account to enhance their value to both researchers and users of research.

References

  1. Craig P, Dieppe P, Macintyre S, Ritchie S, Nazareth I, Petticrew M: Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008, 337: a1655-10.1136/bmj.a1655.

  2. Treweek S: Complex interventions and the chamber of secrets: understanding why they work and why they do not. J R Soc Med. 2005, 98: 553-10.1258/jrsm.98.12.553.

  3. Mills A, Gilson L, Hanson K, Palmer N, Lagarde M: What do we mean by rigorous health-systems research?. Lancet. 2008, 372: 1527-1529. 10.1016/S0140-6736(08)61633-5.

  4. Medical Research Council: A framework for development and evaluation of RCTs for complex interventions to improve health. 2000, London: MRC

  5. Medical Research Council: Developing and evaluating complex interventions: new guidance. 2008, London: MRC

  6. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. Edited by: Higgins JPT, Green S. 2011, London: The Cochrane Collaboration

  7. Shiell A, Hawe P, Gold L: Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008, 336: 1281-10.1136/bmj.39569.510521.AD.

  8. Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D, Tyrer P: Framework for design and evaluation of complex interventions to improve health. BMJ. 2000, 321: 694-10.1136/bmj.321.7262.694.

  9. Campbell NC, Murray E, Darbyshire J, Emery J, Farmer A, Griffiths F, Guthrie B, Lester H, Wilson P, Kinmonth AL: Designing and evaluating complex interventions to improve health care. BMJ. 2007, 334: 455-10.1136/bmj.39108.379965.BE.

  10. Hawe P, Shiell A, Riley T: Complex interventions: how “out of control” can a randomised controlled trial be?. BMJ. 2004, 328: 1561-10.1136/bmj.328.7455.1561.

  11. Oakley A, Strange V, Bonell C, Allen E, Stephenson J: Process evaluation in randomised controlled trials of complex interventions. BMJ. 2006, 332: 413-10.1136/bmj.332.7538.413.

  12. Price S, Long AF, Godfrey M, Thomas KJ: Getting inside acupuncture trials -exploring intervention theory and rationale. BMC Complement Altern Med. 2011, 11: 22-10.1186/1472-6882-11-22.

  13. Bennell KL, Egerton T, Pua YH, Abbott JH, Sims K, Buchbinder R: Building the rationale and structure for a complex physical therapy intervention within the context of a clinical trial: a multimodal individualised treatment for patients with hip osteoarthritis. Phys Ther. 2011, 91: 1-17.

  14. Borglin G, Gustafsson M, Krona H: A theory-based educational intervention targeting nurses’ attitudes and knowledge concerning cancer-related pain management: a study protocol of a quasi-experimental design. BMC Health Serv Res. 2011, 11: 233-10.1186/1472-6963-11-233.

  15. Murray E, Treweek S, Pope C, MacFarlane A, Ballini L, Dowrick C, Finch T, Kennedy A, Mair F, O’Donnell C, Ong BN, Rapley T, Rogers A, May C: Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med. 2010, 8: 63-10.1186/1741-7015-8-63.

  16. Forster DA, Newton M, McLachlan HL, Willis K: Exploring implementation and sustainability of models of care: can theory help?. BMC Publ Health. 2011, 11 (Suppl 5): S8-10.1186/1471-2458-11-S5-S8.

  17. Bower P, Gilbody S, Richards D, Fletcher J, Sutton A: Collaborative care for depression in primary care: making sense of a complex intervention: systematic review and meta-regression. Br J Psychiatry. 2006, 189: 484-493. 10.1192/bjp.bp.106.023655.

  18. Hawe P, Shiell A, Riley T: In response to Spillane V., Byrne M.C., Byrne M., Leathem C.S., O’Malley M. & Cupples M.E. (2007) Monitoring treatment fidelity in a randomized trial of a complex intervention. Journal of Advanced Nursing 60(3), 343–352. J Adv Nurs. 2008, 62: 267-10.1111/j.1365-2648.2008.04686.x.

  19. Wilson PM: The UK Expert Patients Programme: lessons learned and implications for cancer survivors' self-care support programmes. J Cancer Surviv. 2008, 2: 45-52. 10.1007/s11764-007-0040-z.

  20. Bird L, Arthur A, Cox K: ‘Did the trial kill the intervention?’ experiences from the development, implementation and evaluation of a complex intervention. BMC Med Res Methodol. 2011, 11: 24-10.1186/1471-2288-11-24.

  21. Simon S, Higginson IJ: Evaluation of hospital palliative care teams: strengths and weaknesses of the before-after study design and strategies to improve it. Palliat Med. 2009, 23: 23-28.

  22. Schroer S, Adamson J: Acupuncture for depression: a critique of the evidence base. CNS Neurosci Ther. 2011, 17: 398-410. 10.1111/j.1755-5949.2010.00159.x.

  23. Brady MC, Stott DJ, Norrie J, Chalmers C, St George B, Sweeney PM, Langhorne P: Developing and evaluating the implementation of a complex intervention: using mixed methods to inform the design of a randomised controlled trial of an oral healthcare intervention after stroke. Trials. 2011, 12: 168-10.1186/1745-6215-12-168.

  24. Turner DE, Helliwell PS, Woodburn J: Methodological considerations for a randomised controlled trial of podiatry care in rheumatoid arthritis: lessons from an exploratory trial. BMC Musculoskelet Disord. 2007, 8: 109-10.1186/1471-2474-8-109.

  25. Spillane V, Byrne MC, Byrne M, Leathem CS, O’Malley M, Cupples ME: Monitoring treatment fidelity in a randomized controlled trial of a complex intervention. J Adv Nurs. 2007, 60: 343-352. 10.1111/j.1365-2648.2007.04386.x.

  26. Tee A, Calzavacca P, Licari E, Goldsmith D, Bellomo R: Bench-to-bedside review: the MET syndrome – the challenges of researching and adopting medical emergency teams. Crit Care. 2008, 12: 205-10.1186/cc6426.

  27. Cousins K, Connor JL, Kypri K: Reducing alcohol-related harm and social disorder in a university community: a framework for evaluation. Inj Prev. 2010, 16: e1.

  28. Hoddinott P, Pill R, Chalmers M: Health professionals, implementation and outcomes: reflections on a complex intervention to improve breastfeeding rates in primary care. Fam Pract. 2007, 24: 84-91.

  29. Power R, Langhaug L, Cowan F: ‘But there are no snakes in the wood’: risk mapping as an outcome measure in evaluating complex interventions. Sex Transm Infect. 2007, 83: 232-236.

  30. Allegrante JP, Peterson MGE, Cornell CN, MacKenzie CR: Methodological challenges of multiple-component intervention: lessons learned from a randomized controlled trial of functional recovery after hip fracture. HSS J. 2007, 3: 63-70. 10.1007/s11420-006-9036-x.

  31. Schiapparelli P, Allais G, Rolando S, Airola G, Borgogno P, Terzi MG, Benedetto C: Acupuncture in primary headache treatment. Neurol Sci. 2011, 32 (Suppl 1): S15-S18.

  32. Farquhar M, Higginson IJ, Fagan P, Booth S: Results of a pilot investigation into a complex intervention for breathlessness in advanced chronic obstructive pulmonary disease (COPD): brief report. Palliat Support Care. 2010, 8: 143-149. 10.1017/S1478951509990897.

  33. Dunn G, Fowler D, Rollinson R, Freeman D, Kuipers E, Smith B, Steel C, Onwumere J, Jolley S, Garety P, Bebbington P: Effective elements of cognitive behaviour therapy for psychosis: results of a novel type of subgroup analysis based on principal stratification. Psychol Med. 2011, 42: 1057-1068.

  34. Gask L, Dowrick C, Salmon P, Peters S, Morriss R: Reattribution reconsidered: narrative review and reflections on an educational intervention for medically unexplained symptoms in primary care settings. J Psychosom Res. 2011, 71: 325-334. 10.1016/j.jpsychores.2011.05.008.

  35. Sampson EL, Jones L, Thuné-Boyle ICV, Kukkastenvehmas R, King M, Leurent B, Tookman A, Blanchard MR: Palliative assessment and advance care planning in severe dementia: an exploratory randomized controlled trial of a complex intervention. Palliat Med. 2011, 25: 197-209. 10.1177/0269216310391691.

  36. Garfein RS, Swartzendruber A, Ouellet LJ, Kapadia F, Hudson SM, Thiede H, Strathdee SA, Williams IT, Bailey SL, Hagan H, Golub ET, Kerndt P, Hanson DL, Latka MH, for the DUIT Study Team: Methods to recruit and retain a cohort of young-adult injection drug users for the Third Collaborative Injection Drug Users Study/Drug Users Intervention Trial (CIDUS III/DUIT). Drug Alcohol Depend. 2007, 91 (Suppl 1): S4-S17.

  37. Lyles JS, Hodges A, Collins C, Lein C, Given CW, Given B, D’Mello D, Osborn GG, Goddeeris J, Gardiner JC, Smith RC: Using nurse practitioners to implement an intervention in primary care for high-utilizing patients with medically unexplained symptoms. Gen Hosp Psychiatry. 2003, 25: 63-73. 10.1016/S0163-8343(02)00288-8.

  38. Van Herck P, Vanhaecht K, Deneckere S, Bellemans J, Panella M, Barbieri A, Sermeus W: Key interventions and outcomes in joint arthroplasty clinical pathways: a systematic review. J Eval Clin Pract. 2010, 16: 39-49. 10.1111/j.1365-2753.2008.01111.x.

  39. Gustafsson L, Nugent N, Biron L: Occupational therapy practice in hospital-based stroke rehabilitation?. Scand J Occup Ther. 2012, 19: 132-139. 10.3109/11038128.2011.562915.

  40. Hoddinott P, Britten J, Pill R: Why do interventions work in some places and not others: a breastfeeding support group trial. Soc Sci Med. 2010, 70: 769-778. 10.1016/j.socscimed.2009.10.067.

  41. Salas I: Methodology for reorganization of the cervical cancer program in Chile. Cancer Detect Prev. 2006, 30: 38-43. 10.1016/j.cdp.2005.11.003.

  42. Byng R, Norman I, Redfern S, Jones R: Exposing the key functions of a complex intervention for shared care in mental health: case study of a process evaluation. BMC Health Serv Res. 2008, 8: 274-10.1186/1472-6963-8-274.

  43. Mayo NE, Scott S: Evaluating a complex intervention with a single outcome may not be a good idea: an example from a randomised trial of stroke case management. Age Ageing. 2011, 40: 718-724. 10.1093/ageing/afr061.

  44. de Vlaming R, Haveman-Nies A, de Groot LCPGM, van’t Veer P: Evaluation design for a complex intervention program targeting loneliness in non-institutionalized elderly Dutch people. BMC Publ Health. 2010, 10: 552-10.1186/1471-2458-10-552.

  45. Klinkhammer-Schalke M, Koller M, Ehret C, Steinger B, Ernst B, Wyatt JC, Hofstädter F, Lorenz W, for the Regensburg QoL Study Group: Implementing a system of quality-of-life diagnosis and therapy for breast cancer patients: results of an exploratory trial as a prerequisite for a subsequent RCT. Br J Cancer. 2008, 99: 415-422. 10.1038/sj.bjc.6604505.

  46. Farquhar MC, Ewing G, Booth S: Using mixed methods to develop and evaluate complex interventions in palliative care research. Palliat Med. 2011, 25: 748-757. 10.1177/0269216311417919.

  47. Paterson C, Britten N: Acupuncture as a complex intervention: a holistic model. J Altern Complement Med. 2004, 10: 791-801.

  48. Rowlands G, Sims J, Kerry S: A lesson learnt: the importance of modelling in randomized controlled trials for complex interventions in primary care. Fam Pract. 2005, 22: 132-139.

  49. Hirsch O, Keller H, Krones T, Donner-Banzhoff N: Acceptance of shared decision making with reference to an electronic library of decision aids (arriba-lib) and its association to decision making in patients: an evaluation study. Implement Sci. 2011, 6: 70-10.1186/1748-5908-6-70.

  50. Edwards ED, Mason BW, Oliver A, Powell CVE: Cohort study to test the predictability of the Melbourne criteria for activation of the medical emergency team. Arch Dis Child. 2011, 96: 174-179. 10.1136/adc.2010.187617.

  51. Drescher U, Warren F, Norton K: Towards evidence-based practice in medical training: making evaluations more meaningful. Med Educ. 2004, 38: 1288-1294. 10.1111/j.1365-2929.2004.02021.x.

  52. Pope C, Mays N: Qualitative research: reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ. 1995, 311: 42-10.1136/bmj.311.6996.42.

  53. Donovan JL, Lane JA, Peters TJ, Brindle L, Salter E, Gillatt D, Powell P, Bollina P, Neal DE, Hamdy FC: Development of a complex intervention improved randomisation and informed consent in a randomised controlled trial. J Clin Epidemiol. 2009, 62: 29-36. 10.1016/j.jclinepi.2008.02.010.

  54. Anderson L, Petticrew M, Rehfuess E, Armstrong R, Ueffing E, Baker P, Francis D, Tugwell P: Using logic models to capture complexity in systematic reviews. Res Synth Methods. 2011, 2: 33-42. 10.1002/jrsm.32.


Acknowledgements

Funding statement: This project was supported by the International Collaboration on Complex Interventions (ICCI). ICCI was funded by the Canadian Institutes of Health Research.


Corresponding author

Correspondence to Jessica Datta.


Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

MP conceived the idea for the paper. MP and JD developed the methodology. JD carried out the search and content analysis and wrote the methodology and results section. The introduction was written by MP and JD and the discussion and conclusions by MP. Both authors read and approved the final manuscript.

Jessica Datta and Mark Petticrew contributed equally to this work.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Datta, J., Petticrew, M. Challenges to evaluating complex interventions: a content analysis of published papers. BMC Public Health 13, 568 (2013). https://doi.org/10.1186/1471-2458-13-568
