Designing process evaluations using case study to explore the context of complex interventions evaluated in trials

Aileen Grant

1 School of Nursing, Midwifery and Paramedic Practice, Robert Gordon University, Garthdee Road, Aberdeen, AB10 7QB UK

Carol Bugge

2 Faculty of Health Sciences and Sport, University of Stirling, Pathfoot Building, Stirling, FK9 4LA UK

Mary Wells

3 Department of Surgery and Cancer, Imperial College London, Charing Cross Campus, London, W6 8RP UK

Associated Data

No data and materials were used.

Background

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail, and whether they can be transferred to other settings and populations. However, historically, context has not been sufficiently explored and reported resulting in the poor uptake of trial results. Therefore, suitable methodologies are needed to guide the investigation of context. Case study is one appropriate methodology, but there is little guidance about what case study design can offer the study of context in trials. We address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design.

In this paper, we define context, the relationship between complex interventions and context, and describe case study design methodology. A well-designed process evaluation using case study should consider the following core components: the purpose; definition of the intervention; the trial design, the case, the theories or logic models underpinning the intervention, the sampling approach and the conceptual or theoretical framework. We describe each of these in detail and highlight with examples from recently published process evaluations.

Conclusions

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation. We provide a comprehensive overview of the issues for process evaluation design to consider when using a case study design.

Trial registration

DQIP: ClinicalTrials.gov NCT01425502. OPAL: ISRCTN57746448.

Contribution to the literature

  • We illustrate how case study methodology can explore the complex, dynamic and uncertain relationship between context and interventions within trials.
  • We depict different case study designs and illustrate that there is no single formula; the design needs to be tailored to the context and the trial design.
  • Case study can support comparisons between intervention and control arms and between cases within arms to uncover and explain differences in detail.
  • We argue that case study can illustrate how components have evolved and been redefined through implementation.
  • Key issues for consideration in case study design within process evaluations are presented and illustrated with examples.

Background

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail and whether they can be transferred to other settings and populations. However, historically, not all trials have had a process evaluation component, nor have they sufficiently reported aspects of context, resulting in poor uptake of trial findings [ 1 ]. Considerations of context are often absent from published process evaluations, with few studies acknowledging, taking account of or describing context during implementation, or assessing the impact of context on implementation [ 2 , 3 ]. At present, evidence from trials is not being used in a timely manner [ 4 , 5 ], and this can negatively impact on patient benefit and experience [ 6 ]. It takes on average 17 years for knowledge from research to be implemented into practice [ 7 ]. Suitable methodologies are therefore needed that allow for context to be exposed; one appropriate methodological approach is case study [ 8 , 9 ].

In 2015, the Medical Research Council (MRC) published guidance for process evaluations [ 10 ]. This was a key milestone in legitimising as well as providing tools, methods and a framework for conducting process evaluations. Nevertheless, as with all guidance, there is a need for reflection, challenge and refinement. There have been a number of critiques of the MRC guidance, including that interventions should be considered as events in systems [ 11 – 14 ]; a need for better use, critique and development of theories [ 15 – 17 ]; and a need for more guidance on integrating qualitative and quantitative data [ 18 , 19 ]. Although the MRC process evaluation guidance does consider appropriate qualitative and quantitative methods, it does not mention case study design and what it can offer the study of context in trials.

The case study methodology is ideally suited to real-world, sustainable intervention development and evaluation because it can explore and examine contemporary complex phenomena, in depth, in numerous contexts and using multiple sources of data [ 8 ]. Case study design can capture the complexity of the case, the relationship between the intervention and the context, and how the intervention worked (or not) [ 8 ]. There are a number of case study textbooks within the social sciences [ 8 , 9 , 20 ], but there are no equivalent textbooks, and a paucity of useful texts, on how to design, conduct and report case study within the health arena. Few examples exist within the trial design and evaluation literature [ 3 , 21 ]. Therefore, guidance to enable well-designed process evaluations using case study methodology is required.

We aim to address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design. First, we define context and describe the relationship between complex health interventions and context.

What is context?

While there is growing recognition that context interacts with the intervention to impact on the intervention’s effectiveness [ 22 ], context is still poorly defined and conceptualised. There are a number of different definitions in the literature, but as Bate et al. explained, ‘almost universally, we find context to be an overworked word in everyday dialogue but a massively understudied and misunderstood concept’ [ 23 ]. Ovretveit defines context as ‘everything the intervention is not’ [ 24 ]. The latter definition is used by the MRC framework for process evaluations [ 25 ]; however, the problem with this definition is that it is highly dependent on how the intervention is defined. We have found Pfadenhauer et al.’s definition useful:

Context is conceptualised as a set of characteristics and circumstances that consist of active and unique factors that surround the implementation. As such it is not a backdrop for implementation but interacts, influences, modifies and facilitates or constrains the intervention and its implementation. Context is usually considered in relation to an intervention or object, with which it actively interacts. A boundary between the concepts of context and setting is discernible: setting refers to the physical, specific location in which the intervention is put into practice. Context is much more versatile, embracing not only the setting but also roles, interactions and relationships [ 22 ].

Traditionally, context has been conceptualised in terms of barriers and facilitators, but what is a barrier in one context may be a facilitator in another, so it is the relationship and dynamics between the intervention and context which are the most important [ 26 ]. There is a need for empirical research to really understand how different contextual factors relate to each other and to the intervention. At present, research studies often list common contextual factors, but without a depth of meaning and understanding, such as government or health board policies, organisational structures, professional and patient attitudes, behaviours and beliefs [ 27 ]. The case study methodology is well placed to understand the relationship between context and intervention where these boundaries may not be clearly evident. It offers a means of unpicking the contextual conditions which are pertinent to effective implementation.

The relationship between complex health interventions and context

Health interventions are generally made up of a number of different components and are considered complex due to the influence of context on their implementation and outcomes [ 3 , 28 ]. Complex interventions are often reliant on the engagement of practitioners and patients, so their attitudes, behaviours, beliefs and cultures influence whether and how an intervention is effective or not. Interventions are context-sensitive; they interact with the environment in which they are implemented. In fact, many argue that interventions are a product of their context, and indeed, outcomes are likely to be a product of the intervention and its context [ 3 , 29 ]. Within a trial, there is also the influence of the research context: the observed outcome could be due to the intervention alone, elements of the context within which the intervention is being delivered, elements of the research process or a combination of all three. Therefore, it can be difficult and unhelpful to separate the intervention from the context within which it was evaluated, because the intervention and context are likely to have evolved together over time. As a result, the same intervention can look and behave differently in different contexts, so it is important this is known, understood and reported [ 3 ]. Finally, the intervention context is dynamic; the people, organisations and systems change over time [ 3 ], which requires practitioners and patients to respond, and they may do this by adapting the intervention or contextual factors. So, to enable researchers to replicate successful interventions, or to explain why an intervention was not successful, it is not enough to describe the components of the intervention; they need to be described in relation to their context and resources [ 3 , 28 ].

What is a case study?

Case study methodology aims to provide an in-depth, holistic, balanced, detailed and complete picture of complex contemporary phenomena in their natural context [ 8 , 9 , 20 ]. In this case, the phenomenon of interest is the implementation of complex interventions in a trial. Case study methodology takes the view that phenomena can be more than the sum of their parts and have to be understood as a whole [ 30 ]. It is differentiated from a clinical case study by its analytical focus [ 20 ].

The methodology is particularly useful when linked to trials because some of the features of the design naturally fill the gaps in knowledge generated by trials. Given the methodological focus on understanding phenomena in the round, case study methodology is typified by the use of multiple sources of data, which are more commonly qualitatively guided [ 31 ]. The case study methodology is not epistemologically specific, like realist evaluation, and can be used with different epistemologies [ 32 ], and with different theories, such as Normalisation Process Theory (which explores how staff work together to implement a new intervention) or the Consolidated Framework for Implementation Research (which provides a menu of constructs associated with effective implementation) [ 33 – 35 ]. Realist evaluation can be used to explore the relationship between context, mechanism and outcome, but case study differs from realist evaluation by its focus on a holistic and in-depth understanding of the relationship between an intervention and the contemporary context in which it was implemented [ 36 ]. Case study enables researchers to choose epistemologies and theories which suit the nature of the enquiry and their theoretical preferences.

Designing a process evaluation using case study

An important part of any study is the research design. Due to their varied philosophical positions, the seminal authors in the field of case study have different epistemic views as to how a case study should be conducted [ 8 , 9 ]. Stake takes an interpretative approach (interested in how people make sense of their world), and Yin has more positivistic leanings, arguing for objectivity, validity and generalisability [ 8 , 9 ].

Regardless of the philosophical background, a well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention, the trial design, the case, and the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework [ 8 , 9 , 20 , 31 , 33 ]. We now discuss these critical components in turn, with reference to two process evaluations that used case study design, the DQIP and OPAL studies [ 21 , 37 – 41 ].

The purpose of a process evaluation is to evaluate and explain the relationship between the intervention, its components, the context and the outcomes. It can help inform judgements about validity by exploring the intervention components and their relationship with one another (construct validity), the connections between intervention and outcomes (internal validity) and the relationship between intervention and context (external validity). It can also distinguish between implementation failure (where the intervention is poorly delivered) and intervention failure (where the intervention design is flawed) [ 42 , 43 ]. By using a case study to explicitly understand the relationship between context and the intervention during implementation, the process evaluation can explain the intervention effects and the potential for generalisability and optimisation into routine practice [ 44 ].

The DQIP process evaluation aimed to qualitatively explore how patients and GP practices responded to an intervention designed to reduce high-risk prescribing of nonsteroidal anti-inflammatory drugs (NSAIDs) and/or antiplatelet agents (see Table  1 ) and quantitatively examine how change in high-risk prescribing was associated with practice characteristics and implementation processes. The OPAL process evaluation (see Table  2 ) aimed to quantitatively understand the factors which influenced the effectiveness of a pelvic floor muscle training intervention for women with urinary incontinence and qualitatively explore the participants’ experiences of treatment and adherence.

Table 1. Data-driven Quality Improvement in Primary Care (DQIP)

Table 2. Optimising Pelvic Floor Exercises to Achieve Long-term benefits (OPAL)

Defining the intervention and exploring the theories or assumptions underpinning the intervention design

Process evaluations should also explore the utility of the theories or assumptions underpinning intervention design [ 49 ]. Not all interventions are based on a formal theory, but all are based on assumptions as to how the intervention is expected to work. These can be depicted as a logic model or theory of change [ 25 ]. Capturing how the intervention and context evolve requires the intervention and its expected mechanisms to be clearly defined at the outset [ 50 ]. Hawe and colleagues recommend defining interventions by function (what processes make the intervention work) rather than form (what is delivered) [ 51 ]. However, in some cases, it may be useful to know whether some of the components are redundant in certain contexts or whether there is a synergistic effect between all the intervention components.

The DQIP trial delivered two interventions: one was delivered to professionals with high fidelity, and professionals then delivered the other to patients by function rather than form, allowing adaptations to the local context as appropriate. The assumptions underpinning intervention delivery were prespecified in a logic model published in the process evaluation protocol [ 52 ].

Case study is well placed to challenge or reinforce the theoretical assumptions, or to redefine these based on the relationship between the intervention and context. Yin advocates the use of theoretical propositions; these direct attention to specific aspects of the study for investigation [ 8 ], can be based on the underlying assumptions and can be tested during the course of the process evaluation. In case studies, using an epistemic position more aligned with Yin can enable research questions to be designed which seek to expose patterns of unanticipated as well as expected relationships [ 9 ]. The OPAL trial was more closely aligned with Yin: the research team predefined some of their theoretical assumptions, based on how the intervention was expected to work. The relevant parts of the data analysis then drew on data to support or refute the theoretical propositions. This was particularly useful for the trial, as the prespecified theoretical propositions were linked to the mechanisms of action through which the intervention was anticipated to have an effect (or not).

Tailoring to the trial design

Process evaluations need to be tailored to the trial, the intervention and the outcomes being measured [ 45 ]. For example, in a stepped wedge design (where the intervention is delivered in a phased manner), researchers should try to ensure process data are captured at relevant time points; in a two-arm or multiple-arm trial, they should ensure data are collected from the control group(s) as well as the intervention group(s). In the DQIP trial, a stepped wedge trial, at least one process evaluation case was sampled per cohort. Trials often continue to measure outcomes after delivery of the intervention has ceased, so researchers should also consider capturing ‘follow-up’ data on contextual factors, which may continue to influence the outcome measure. The OPAL trial had two active treatment arms, so process data were collected from both arms. In addition, as the trial was interested in long-term adherence, the trial and the process evaluation collected data from participants for 2 years after the intervention was initially delivered, providing 24 months of follow-up data, in line with the primary outcome for the trial.

Defining the case

Case studies can include single or multiple cases in their design. Single case studies usually sample typical or unique cases; their advantage is the depth and richness that can be achieved over a long period of time. The advantage of a multiple case study design is that cases can be compared to generate a greater depth of analysis. Multiple case study sampling may be carried out in order to test for replication or contradiction [ 8 ]. Given that trials are often conducted over a number of sites, a multiple case study design is more sensible for process evaluations, as there is likely to be variation in implementation between sites. Case definition may occur at a variety of levels but is most appropriate if it reflects the trial design. For example, a case in an individual patient level trial is likely to be defined as a person or patient (e.g. a woman with urinary incontinence, as in the OPAL trial), whereas in a cluster trial, a case is likely to be a cluster, such as an organisation (e.g. a general practice, as in the DQIP trial). Of course, the process evaluation could explore cases with less distinct boundaries, such as communities or relationships; however, the clarity with which these cases are defined is important, in order to scope the nature of the data that will be generated.

Carefully sampled cases are critical to a good case study, as sampling helps inform the quality of the inferences that can be made from the data [ 53 ]. In both qualitative and quantitative research, how and how many participants to sample must be decided when planning the study. Quantitative sampling techniques generally aim to achieve a random sample. Qualitative research generally uses purposive samples to achieve data saturation, which occurs when the incoming data produce little or no new information to address the research questions. The term data saturation has evolved from theoretical saturation in conventional grounded theory studies; however, its relevance to other types of studies is contentious, as the term seems to be widely used but poorly justified [ 54 ]. Empirical evidence suggests that for in-depth interview studies, thematic saturation occurs at around 12 interviews, but typically more are needed for a heterogeneous sample or for higher degrees of saturation [ 55 , 56 ]. Both the DQIP and OPAL case studies were large: OPAL was designed to interview each of the 40 individual cases four times, and DQIP was designed to interview the lead DQIP general practitioner (GP) twice (to capture change over time), plus another GP and the practice manager, from each of the 10 organisational cases. Despite the plethora of mixed methods research textbooks, there is very little about sampling, as discussions typically link to method (e.g. interviews) rather than paradigm (e.g. case study).

Purposive sampling can improve the generalisability of the process evaluation by sampling for greater contextual diversity. The typical or average case is often not the richest source of information. Outliers can often reveal more important insights, because they may reflect the implementation of the intervention using different processes. Cases can be selected from a number of criteria, which are not mutually exclusive, to enable a rich and detailed picture to be built across sites [ 53 ]. To avoid the Hawthorne effect, it is recommended that process evaluations sample from both intervention and control sites, which enables comparison and explanation. There is always a trade-off between breadth and depth in sampling, so it is important to note that often quantity does not mean quality and that carefully sampled cases can provide powerful illustrative examples of how the intervention worked in practice, the relationship between the intervention and context and how and why they evolved together. The qualitative components of both DQIP and OPAL process evaluations aimed for maximum variation sampling. Please see Table  1 for further information on how DQIP’s sampling frame was important for providing contextual information on processes influencing effective implementation of the intervention.

Conceptual and theoretical framework

A conceptual or theoretical framework helps to frame data collection and analysis [ 57 ]. Theories can also underpin propositions, which can be tested in the process evaluation. Process evaluations produce intervention-dependent knowledge, and theories help make the research findings more generalisable by providing a common language [ 16 ]. There are a number of mid-range theories which have been designed to be used with process evaluation [ 34 , 35 , 58 ]. The choice of the appropriate conceptual or theoretical framework is, however, dependent on the philosophical and professional background of the researchers. The two examples within this paper used our own framework for the design of process evaluations, which proposes a number of candidate processes which can be explored, for example, recruitment, delivery, response, maintenance and context [ 45 ]. This framework was published before the MRC guidance on process evaluations, and both the DQIP and OPAL process evaluations were designed before the MRC guidance was published. The DQIP process evaluation explored all candidates in the framework, whereas the OPAL process evaluation selected four candidates, illustrating that process evaluations can be selective in what they explore based on the purpose, research questions and resources. Furthermore, as Kislov and colleagues argue, we also have a responsibility to critique the theoretical framework underpinning the evaluation and refine theories to advance knowledge [ 59 ].

Data collection

An important consideration is what data to collect or measure and when. Case study methodology supports a range of data collection methods, both qualitative and quantitative, to best answer the research questions. As the aim of the case study is to gain an in-depth understanding of phenomena in context, methods are more commonly qualitative or mixed method in nature. Qualitative methods such as interviews, focus groups and observation offer rich descriptions of the setting, delivery of the intervention in each site and arm, how the intervention was perceived by the professionals delivering the intervention and the patients receiving the intervention. Quantitative methods can measure recruitment, fidelity and dose and establish which characteristics are associated with adoption, delivery and effectiveness. To ensure an understanding of the complexity of the relationship between the intervention and context, the case study should rely on multiple sources of data and triangulate these to confirm and corroborate the findings [ 8 ]. Process evaluations might consider using routine data collected in the trial across all sites and additional qualitative data across carefully sampled sites for a more nuanced picture within reasonable resource constraints. Mixed methods allow researchers to ask more complex questions and collect richer data than can be collected by one method alone [ 60 ]. The use of multiple sources of data allows data triangulation, which increases a study’s internal validity but also provides a more in-depth and holistic depiction of the case [ 20 ]. For example, in the DQIP process evaluation, the quantitative component used routinely collected data from all sites participating in the trial and purposively sampled cases for a more in-depth qualitative exploration [ 21 , 38 , 39 ].

The timing of data collection is crucial to study design, especially within a process evaluation where data collection can potentially influence the trial outcome. Process evaluations are generally in parallel or retrospective to the trial. The advantage of a retrospective design is that the evaluation itself is less likely to influence the trial outcome. However, the disadvantages include recall bias, lack of sensitivity to nuances and an inability to iteratively explore the relationship between intervention and outcome as it develops. To capture the dynamic relationship between intervention and context, the process evaluation needs to be parallel and longitudinal to the trial. Longitudinal methodological design is rare, but it is needed to capture the dynamic nature of implementation [ 40 ]. How the intervention is delivered is likely to change over time as it interacts with context. For example, as professionals deliver the intervention, they become more familiar with it, and it becomes more embedded into systems. The OPAL process evaluation was a longitudinal, mixed methods process evaluation where the quantitative component had been predefined and built into trial data collection systems. Data collection in both the qualitative and quantitative components mirrored the trial data collection points, which were longitudinal to capture adherence and contextual changes over time.

There is a lot of attention in the recent literature towards a systems approach to understanding interventions in context, which suggests interventions are ‘events within systems’ [ 61 , 62 ]. This framing highlights the dynamic nature of context, suggesting that interventions are an attempt to change systems dynamics. This conceptualisation would suggest that the study design should collect contextual data before and after implementation to assess the effect of the intervention on the context and vice versa.

Data analysis

Designing a rigorous analysis plan is particularly important for multiple case studies, where researchers must decide whether their approach to analysis is case based or variable based. Case-based analysis is the most common, and analytic strategies must be clearly articulated for both within-case and across-case analysis. A multiple case study design can consist of multiple cases, where each case is analysed at the case level, or of multiple embedded cases, where data from all the cases are pulled together for analysis at some level. For example, OPAL analysis was at the case level, but all the cases for the intervention and control arms were pulled together at the arm level for more in-depth analysis and comparison. For Yin, analytical strategies rely on theoretical propositions, but for Stake, analysis works from the data to develop theory. In OPAL and DQIP, case summaries were written to summarise the cases and detail within-case analysis. Each of the studies structured these differently based on the phenomena of interest and the analytic technique. DQIP applied an approach more akin to Stake [ 9 ], with the cases summarised around inductive themes, whereas OPAL applied a Yin [ 8 ] type approach using theoretical propositions around which the case summaries were structured. As the data for each case had been collected through longitudinal interviews, the case summaries were able to capture changes over time. It is beyond the scope of this paper to discuss different analytic techniques; however, to ensure the holistic examination of the intervention(s) in context, it is important to clearly articulate and demonstrate how data are integrated and synthesised [ 31 ].

Conclusions

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation [ 38 ]. Case study can enable comparisons within and across intervention and control arms and enable the evolving relationship between intervention and context to be captured holistically rather than considering processes in isolation. Utilising a longitudinal design can enable the dynamic relationship between context and intervention to be captured in real time. This information is fundamental to holistically explaining what intervention was implemented, understanding how and why the intervention worked or not and informing the transferability of the intervention into routine clinical practice.

Case study designs are not prescriptive, but process evaluations using case study should consider the purpose, trial design, the theories or assumptions underpinning the intervention, and the conceptual and theoretical frameworks informing the evaluation. We have discussed each of these considerations in turn, providing a comprehensive overview of issues for process evaluations using a case study design. There is no single or best way to conduct a process evaluation or a case study, but researchers need to make informed choices about the process evaluation design. Although this paper focuses on process evaluations, we recognise that case study design could also be useful during intervention development and feasibility trials. Elements of this paper are also applicable to other study designs involving trials.

Acknowledgements

We would like to thank Professor Shaun Treweek for the discussions about context in trials.

Authors’ contributions

AG, CB and MW conceptualised the study. AG wrote the paper. CB and MW commented on the drafts. All authors have approved the final manuscript.

Funding

No funding was received for this work.

Ethics approval and consent to participate

Ethics approval and consent to participate were not required as no participants were included.

Consent for publication

Consent for publication is not required as no participants were included.

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Aileen Grant, Email: [email protected] .

Carol Bugge, Email: [email protected] .

Mary Wells, Email: [email protected] .

Open access | Published: 27 November 2020


Trials, volume 21, Article number: 982 (2020)



Conclusions

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation. We provide a comprehensive overview of the issues for process evaluation design to consider when using a case study design.

Trial registration

DQIP: ClinicalTrials.gov number NCT01425502; OPAL: ISRCTN57746448.


Contribution to the literature

We illustrate how case study methodology can explore the complex, dynamic and uncertain relationship between context and interventions within trials.

We depict different case study designs and illustrate that there is no single formula and that the design needs to be tailored to the context and trial design.

Case study can support comparisons between intervention and control arms and between cases within arms to uncover and explain differences in detail.

We argue that case study can illustrate how components have evolved and been redefined through implementation.

Key issues for consideration in case study design within process evaluations are presented and illustrated with examples.

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail and whether they can be transferred to other settings and populations. However, historically, not all trials have had a process evaluation component, nor have they sufficiently reported aspects of context, resulting in poor uptake of trial findings [ 1 ]. Considerations of context are often absent from published process evaluations, with few studies acknowledging, taking account of or describing context during implementation, or assessing the impact of context on implementation [ 2 , 3 ]. At present, evidence from trials is not being used in a timely manner [ 4 , 5 ], and this can negatively impact on patient benefit and experience [ 6 ]. It takes on average 17 years for knowledge from research to be implemented into practice [ 7 ]. Suitable methodologies are therefore needed that allow for context to be exposed; one appropriate methodological approach is case study [ 8 , 9 ].

In 2015, the Medical Research Council (MRC) published guidance for process evaluations [ 10 ]. This was a key milestone in legitimising process evaluations, as well as in providing tools, methods and a framework for conducting them. Nevertheless, as with all guidance, there is a need for reflection, challenge and refinement. There have been a number of critiques of the MRC guidance, including that interventions should be considered as events in systems [ 11 , 12 , 13 , 14 ]; that there is a need for better use, critique and development of theories [ 15 , 16 , 17 ]; and that there is a need for more guidance on integrating qualitative and quantitative data [ 18 , 19 ]. Although the MRC process evaluation guidance does consider appropriate qualitative and quantitative methods, it does not mention case study design and what it can offer the study of context in trials.

The case study methodology is ideally suited to real-world, sustainable intervention development and evaluation because it can explore and examine contemporary complex phenomena, in depth, in numerous contexts and using multiple sources of data [ 8 ]. Case study design can capture the complexity of the case, the relationship between the intervention and the context and how the intervention worked (or not) [ 8 ]. There are a number of case study textbooks within the social science fields [ 8 , 9 , 20 ], but within the health arena there are no such textbooks and a paucity of useful texts on how to design, conduct and report case studies. Few examples exist within the trial design and evaluation literature [ 3 , 21 ]. Therefore, guidance to enable well-designed process evaluations using case study methodology is required.

We aim to address the gap in the literature by presenting a number of important considerations for process evaluation using a case study design. First, we define context and describe the relationship between complex health interventions and context.

What is context?

While there is growing recognition that context interacts with the intervention to impact on the intervention’s effectiveness [ 22 ], context is still poorly defined and conceptualised. There are a number of different definitions in the literature, but as Bate et al. explained, ‘almost universally, we find context to be an overworked word in everyday dialogue but a massively understudied and misunderstood concept’ [ 23 ]. Ovretveit defines context as ‘everything the intervention is not’ [ 24 ]. This last definition is used by the MRC framework for process evaluations [ 25 ]; however, the problem with this definition is that it is highly dependent on how the intervention is defined. We have found Pfadenhauer et al.’s definition useful:

Context is conceptualised as a set of characteristics and circumstances that consist of active and unique factors that surround the implementation. As such it is not a backdrop for implementation but interacts, influences, modifies and facilitates or constrains the intervention and its implementation. Context is usually considered in relation to an intervention or object, with which it actively interacts. A boundary between the concepts of context and setting is discernible: setting refers to the physical, specific location in which the intervention is put into practice. Context is much more versatile, embracing not only the setting but also roles, interactions and relationships [ 22 ].

Traditionally, context has been conceptualised in terms of barriers and facilitators, but what is a barrier in one context may be a facilitator in another, so it is the relationship and dynamics between the intervention and context which are the most important [ 26 ]. There is a need for empirical research to really understand how different contextual factors relate to each other and to the intervention. At present, research studies often list common contextual factors, but without a depth of meaning and understanding, such as government or health board policies, organisational structures, professional and patient attitudes, behaviours and beliefs [ 27 ]. The case study methodology is well placed to understand the relationship between context and intervention where these boundaries may not be clearly evident. It offers a means of unpicking the contextual conditions which are pertinent to effective implementation.

The relationship between complex health interventions and context

Health interventions are generally made up of a number of different components and are considered complex due to the influence of context on their implementation and outcomes [ 3 , 28 ]. Complex interventions are often reliant on the engagement of practitioners and patients, so their attitudes, behaviours, beliefs and cultures influence whether and how an intervention is effective or not. Interventions are context-sensitive; they interact with the environment in which they are implemented. In fact, many argue that interventions are a product of their context, and indeed, outcomes are likely to be a product of the intervention and its context [ 3 , 29 ]. Within a trial, there is also the influence of the research context, so the observed outcome could be due to the intervention alone, elements of the context within which the intervention is being delivered, elements of the research process or a combination of all three. Therefore, it can be difficult and unhelpful to separate the intervention from the context within which it was evaluated because the intervention and context are likely to have evolved together over time. As a result, the same intervention can look and behave differently in different contexts, so it is important that this is known, understood and reported [ 3 ]. Finally, the intervention context is dynamic; the people, organisations and systems change over time [ 3 ], which requires practitioners and patients to respond, and they may do this by adapting the intervention or contextual factors. So, to enable researchers to replicate successful interventions, or to explain why the intervention was not successful, it is not enough to describe the components of the intervention; they need to be described in relation to their context and resources [ 3 , 28 ].

What is a case study?

Case study methodology aims to provide an in-depth, holistic, balanced, detailed and complete picture of complex contemporary phenomena in their natural context [ 8 , 9 , 20 ]. In this case, the phenomenon is the implementation of complex interventions in a trial. Case study methodology takes the view that phenomena can be more than the sum of their parts and have to be understood as a whole [ 30 ]. It is differentiated from a clinical case study by its analytical focus [ 20 ].

The methodology is particularly useful when linked to trials because some of the features of the design naturally fill the gaps in knowledge generated by trials. Given the methodological focus on understanding phenomena in the round, case study methodology is typified by the use of multiple sources of data, which are more commonly qualitatively guided [ 31 ]. Unlike realist evaluation, case study methodology is not epistemologically specific and can be used with different epistemologies [ 32 ] and with different theories, such as Normalisation Process Theory (which explores how staff work together to implement a new intervention) or the Consolidated Framework for Implementation Research (which provides a menu of constructs associated with effective implementation) [ 33 , 34 , 35 ]. Realist evaluation can be used to explore the relationship between context, mechanism and outcome, but case study differs from realist evaluation in its focus on a holistic and in-depth understanding of the relationship between an intervention and the contemporary context in which it was implemented [ 36 ]. Case study enables researchers to choose epistemologies and theories which suit the nature of the enquiry and their theoretical preferences.

Designing a process evaluation using case study

An important part of any study is the research design. Due to their varied philosophical positions, the seminal authors in the field of case study have different epistemic views as to how a case study should be conducted [ 8 , 9 ]. Stake takes an interpretative approach (interested in how people make sense of their world), and Yin has more positivistic leanings, arguing for objectivity, validity and generalisability [ 8 , 9 ].

Regardless of the philosophical background, a well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention, the trial design, the case, and the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework [ 8 , 9 , 20 , 31 , 33 ]. We now discuss these critical components in turn, with reference to two process evaluations that used case study design, the DQIP and OPAL studies [ 21 , 37 , 38 , 39 , 40 , 41 ].

The purpose of a process evaluation is to evaluate and explain the relationship between the intervention and its components, and the context and outcome. It can help inform judgements about validity (by exploring the intervention components and their relationship with one another (construct validity), the connections between intervention and outcomes (internal validity) and the relationship between intervention and context (external validity)). It can also distinguish between implementation failure (where the intervention is poorly delivered) and intervention failure (where the intervention design is flawed) [ 42 , 43 ]. By using a case study to explicitly understand the relationship between context and the intervention during implementation, the process evaluation can explain the intervention effects and the potential generalisability and optimisation into routine practice [ 44 ].

The DQIP process evaluation aimed to qualitatively explore how patients and GP practices responded to an intervention designed to reduce high-risk prescribing of nonsteroidal anti-inflammatory drugs (NSAIDs) and/or antiplatelet agents (see Table  1 ) and quantitatively examine how change in high-risk prescribing was associated with practice characteristics and implementation processes. The OPAL process evaluation (see Table  2 ) aimed to quantitatively understand the factors which influenced the effectiveness of a pelvic floor muscle training intervention for women with urinary incontinence and qualitatively explore the participants’ experiences of treatment and adherence.

Defining the intervention and exploring the theories or assumptions underpinning the intervention design

Process evaluations should also explore the utility of the theories or assumptions underpinning intervention design [ 49 ]. Not all interventions are underpinned by a formal theory, but all are based on assumptions as to how the intervention is expected to work. These can be depicted as a logic model or theory of change [ 25 ]. To capture how the intervention and context evolve requires the intervention and its expected mechanisms to be clearly defined at the outset [ 50 ]. Hawe and colleagues recommend defining interventions by function (what processes make the intervention work) rather than form (what is delivered) [ 51 ]. However, in some cases, it may be useful to know if some of the components are redundant in certain contexts or if there is a synergistic effect between all the intervention components.

The DQIP trial delivered two interventions: one intervention was delivered to professionals with high fidelity, and professionals then delivered the other intervention to patients by function rather than form, allowing adaptations to the local context as appropriate. The assumptions underpinning intervention delivery were prespecified in a logic model published in the process evaluation protocol [ 52 ].

Case study is well placed to challenge or reinforce the theoretical assumptions, or to redefine these based on the relationship between the intervention and context. Yin advocates the use of theoretical propositions; these direct attention to specific aspects of the study for investigation [ 8 ], can be based on the underlying assumptions and can be tested during the course of the process evaluation. In case studies, using an epistemic position more aligned with Yin can enable research questions to be designed which seek to expose patterns of unanticipated as well as expected relationships [ 9 ]. The OPAL trial was more closely aligned with Yin, where the research team predefined some of their theoretical assumptions based on how the intervention was expected to work. The relevant parts of the data analysis then drew on data to support or refute the theoretical propositions. This was particularly useful for the trial as the prespecified theoretical propositions linked to the mechanisms of action on which the intervention was anticipated to have an effect (or not).

Tailoring to the trial design

Process evaluations need to be tailored to the trial, the intervention and the outcomes being measured [ 45 ]. For example, in a stepped wedge design (where the intervention is delivered in a phased manner), researchers should try to ensure process data are captured at relevant time points, while in a two-arm or multiple-arm trial, they should ensure data are collected from the control group(s) as well as the intervention group(s). In the DQIP trial, a stepped wedge trial, at least one process evaluation case was sampled per cohort. Trials often continue to measure outcomes after delivery of the intervention has ceased, so researchers should also consider capturing ‘follow-up’ data on contextual factors, which may continue to influence the outcome measure. The OPAL trial had two active treatment arms, so process data were collected from both arms. In addition, as the trial was interested in long-term adherence, the trial and the process evaluation collected data from participants for 2 years after the intervention was initially delivered, providing 24 months of follow-up data, in line with the primary outcome for the trial.

Defining the case

Case studies can include single or multiple cases in their design. Single case studies usually sample typical or unique cases; their advantage is the depth and richness that can be achieved over a long period of time. The advantage of a multiple case study design is that cases can be compared to generate a greater depth of analysis. Multiple case study sampling may be carried out in order to test for replication or contradiction [ 8 ]. Given that trials are often conducted over a number of sites, a multiple case study design is more sensible for process evaluations, as there is likely to be variation in implementation between sites. Case definition may occur at a variety of levels but is most appropriate if it reflects the trial design. For example, a case in an individual patient-level trial is likely to be defined as a person/patient (e.g. a woman with urinary incontinence in the OPAL trial), whereas in a cluster trial, a case is likely to be a cluster, such as an organisation (e.g. a general practice in the DQIP trial). Of course, the process evaluation could explore cases with less distinct boundaries, such as communities or relationships; however, the clarity with which these cases are defined is important, in order to scope the nature of the data that will be generated.

Carefully sampled cases are critical to a good case study, as sampling helps inform the quality of the inferences that can be made from the data [ 53 ]. In both qualitative and quantitative research, how and how many participants to sample must be decided when planning the study. Quantitative sampling techniques generally aim to achieve a random sample. Qualitative research generally uses purposive samples to achieve data saturation, which occurs when the incoming data produce little or no new information to address the research questions. The term data saturation has evolved from theoretical saturation in conventional grounded theory studies; however, its relevance to other types of studies is contentious, as the term saturation seems to be widely used but poorly justified [ 54 ]. Empirical evidence suggests that for in-depth interview studies, thematic saturation occurs at around 12 interviews, but typically more interviews would be needed for a heterogeneous sample or for higher degrees of saturation [ 55 , 56 ]. Both the DQIP and OPAL case studies were large: OPAL was designed to interview each of the 40 individual cases four times, and DQIP was designed to interview the lead DQIP general practitioner (GP) twice (to capture change over time), plus another GP and the practice manager from each of the 10 organisational cases. Despite the plethora of mixed methods research textbooks, there is very little about sampling, as discussions typically link to method (e.g. interviews) rather than paradigm (e.g. case study).

Purposive sampling can improve the generalisability of the process evaluation by sampling for greater contextual diversity. The typical or average case is often not the richest source of information. Outliers can often reveal more important insights, because they may reflect the implementation of the intervention using different processes. Cases can be selected from a number of criteria, which are not mutually exclusive, to enable a rich and detailed picture to be built across sites [ 53 ]. To avoid the Hawthorne effect, it is recommended that process evaluations sample from both intervention and control sites, which enables comparison and explanation. There is always a trade-off between breadth and depth in sampling, so it is important to note that often quantity does not mean quality and that carefully sampled cases can provide powerful illustrative examples of how the intervention worked in practice, the relationship between the intervention and context and how and why they evolved together. The qualitative components of both DQIP and OPAL process evaluations aimed for maximum variation sampling. Please see Table  1 for further information on how DQIP’s sampling frame was important for providing contextual information on processes influencing effective implementation of the intervention.

Conceptual and theoretical framework

A conceptual or theoretical framework helps to frame data collection and analysis [ 57 ]. Theories can also underpin propositions, which can be tested in the process evaluation. Process evaluations produce intervention-dependent knowledge, and theories help make the research findings more generalisable by providing a common language [ 16 ]. There are a number of mid-range theories which have been designed to be used with process evaluation [ 34 , 35 , 58 ]. The choice of the appropriate conceptual or theoretical framework is, however, dependent on the philosophical and professional background of the researcher. The two examples within this paper used our own framework for the design of process evaluations, which proposes a number of candidate processes which can be explored, for example, recruitment, delivery, response, maintenance and context [ 45 ]. This framework was published before the MRC guidance on process evaluations, and both the DQIP and OPAL process evaluations were designed before the MRC guidance was published. The DQIP process evaluation explored all candidates in the framework, whereas the OPAL process evaluation selected four candidates, illustrating that process evaluations can be selective in what they explore based on the purpose, research questions and resources. Furthermore, as Kislov and colleagues argue, we also have a responsibility to critique the theoretical framework underpinning the evaluation and refine theories to advance knowledge [ 59 ].

Data collection

An important consideration is what data to collect or measure and when. Case study methodology supports a range of data collection methods, both qualitative and quantitative, to best answer the research questions. As the aim of the case study is to gain an in-depth understanding of phenomena in context, methods are more commonly qualitative or mixed method in nature. Qualitative methods such as interviews, focus groups and observation offer rich descriptions of the setting, delivery of the intervention in each site and arm, how the intervention was perceived by the professionals delivering the intervention and the patients receiving the intervention. Quantitative methods can measure recruitment, fidelity and dose and establish which characteristics are associated with adoption, delivery and effectiveness. To ensure an understanding of the complexity of the relationship between the intervention and context, the case study should rely on multiple sources of data and triangulate these to confirm and corroborate the findings [ 8 ]. Process evaluations might consider using routine data collected in the trial across all sites and additional qualitative data across carefully sampled sites for a more nuanced picture within reasonable resource constraints. Mixed methods allow researchers to ask more complex questions and collect richer data than can be collected by one method alone [ 60 ]. The use of multiple sources of data allows data triangulation, which increases a study’s internal validity but also provides a more in-depth and holistic depiction of the case [ 20 ]. For example, in the DQIP process evaluation, the quantitative component used routinely collected data from all sites participating in the trial and purposively sampled cases for a more in-depth qualitative exploration [ 21 , 38 , 39 ].

The timing of data collection is crucial to study design, especially within a process evaluation where data collection can potentially influence the trial outcome. Process evaluations generally run in parallel with, or retrospective to, the trial. The advantage of a retrospective design is that the evaluation itself is less likely to influence the trial outcome. However, the disadvantages include recall bias, a lack of sensitivity to nuances and an inability to iteratively explore the relationship between intervention and outcome as it develops. To capture the dynamic relationship between intervention and context, the process evaluation needs to be parallel and longitudinal to the trial. Longitudinal methodological design is rare, but it is needed to capture the dynamic nature of implementation [ 40 ]. How the intervention is delivered is likely to change over time as it interacts with context. For example, as professionals deliver the intervention, they become more familiar with it, and it becomes more embedded into systems. The OPAL process evaluation was a longitudinal, mixed methods process evaluation where the quantitative component had been predefined and built into trial data collection systems. Data collection in both the qualitative and quantitative components mirrored the trial data collection points, which were longitudinal to capture adherence and contextual changes over time.

There is a lot of attention in the recent literature towards a systems approach to understanding interventions in context, which suggests interventions are ‘events within systems’ [ 61 , 62 ]. This framing highlights the dynamic nature of context, suggesting that interventions are an attempt to change systems dynamics. This conceptualisation would suggest that the study design should collect contextual data before and after implementation to assess the effect of the intervention on the context and vice versa.

Data analysis

Designing a rigorous analysis plan is particularly important for multiple case studies, where researchers must decide whether their approach to analysis is case or variable based. Case-based analysis is the most common, and analytic strategies must be clearly articulated for within and across case analysis. A multiple case study design can consist of multiple cases, where each case is analysed at the case level, or of multiple embedded cases, where data from all the cases are pulled together for analysis at some level. For example, OPAL analysis was at the case level, but all the cases for the intervention and control arms were pulled together at the arm level for more in-depth analysis and comparison. For Yin, analytical strategies rely on theoretical propositions, but for Stake, analysis works from the data to develop theory. In OPAL and DQIP, case summaries were written to summarise the cases and detail within-case analysis. Each of the studies structured these differently based on the phenomena of interest and the analytic technique. DQIP applied an approach more akin to Stake [ 9 ], with the cases summarised around inductive themes whereas OPAL applied a Yin [ 8 ] type approach using theoretical propositions around which the case summaries were structured. As the data for each case had been collected through longitudinal interviews, the case summaries were able to capture changes over time. It is beyond the scope of this paper to discuss different analytic techniques; however, to ensure the holistic examination of the intervention(s) in context, it is important to clearly articulate and demonstrate how data is integrated and synthesised [ 31 ].

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation [ 38 ]. Case study can enable comparisons within and across intervention and control arms and enable the evolving relationship between intervention and context to be captured holistically rather than considering processes in isolation. Utilising a longitudinal design can enable the dynamic relationship between context and intervention to be captured in real time. This information is fundamental to holistically explaining what intervention was implemented, understanding how and why the intervention worked or not and informing the transferability of the intervention into routine clinical practice.

Case study designs are not prescriptive, but process evaluations using case study should consider the purpose, trial design, the theories or assumptions underpinning the intervention, and the conceptual and theoretical frameworks informing the evaluation. We have discussed each of these considerations in turn, providing a comprehensive overview of issues for process evaluations using a case study design. There is no single or best way to conduct a process evaluation or a case study, but researchers need to make informed choices about the process evaluation design. Although this paper focuses on process evaluations, we recognise that case study design could also be useful during intervention development and feasibility trials. Elements of this paper are also applicable to other study designs involving trials.

Availability of data and materials

No data and materials were used.

Abbreviations

DQIP: Data-driven Quality Improvement in Primary Care

MRC: Medical Research Council

NSAID: Nonsteroidal anti-inflammatory drugs

OPAL: Optimizing Pelvic Floor Muscle Exercises to Achieve Long-term benefits

References

Blencowe NB. Systematic review of intervention design and delivery in pragmatic and explanatory surgical randomized clinical trials. Br J Surg. 2015;102:1037–47.


Dixon-Woods M. The problem of context in quality improvement. In: Foundation TH, editor. Perspectives on context: The Health Foundation; 2014.

Wells M, Williams B, Treweek S, Coyle J, Taylor J. Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials. 2012;13(1):95.


Grant A, Sullivan F, Dowell J. An ethnographic exploration of influences on prescribing in general practice: why is there variation in prescribing practices? Implement Sci. 2013;8(1):72.

Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med. 2007;49(3):355–63.


Ward V, House AF, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy. 2009;14(3):156–64.

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

Yin R. Case study research and applications: design and methods. Los Angeles: Sage Publications Inc; 2018.


Stake R. The art of case study research. Thousand Oaks, California: Sage Publications Ltd; 1995.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D, et al. Process evaluation of complex interventions: Medical Research Council guidance. Br Med J. 2015;350.

Hawe P. Minimal, negligible and negligent interventions. Soc Sci Med. 2015;138:265–8.

Moore GF, Evans RE, Hawkins J, Littlecott H, Melendez-Torres GJ, Bonell C, Murphy S. From complex social interventions to interventions in complex social systems: future directions and unresolved questions for intervention development and evaluation. Evaluation. 2018;25(1):23–45.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, Greaves F, Harper L, Hawe P, Moore L, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4.

Moore G, Cambon L, Michie S, Arwidson P, Ninot G, Ferron C, Potvin L, Kellou N, Charlesworth J, Alla F, et al. Population health intervention research: the place of theories. Trials. 2019;20(1):285.

Kislov R. Engaging with theory: from theoretically informed to theoretically informative improvement research. BMJ Qual Saf. 2019;28(3):177–9.

Boulton R, Sandall J, Sevdalis N. The cultural politics of ‘Implementation Science’. J Med Human. 2020;41(3):379–94. https://doi.org/10.1007/s10912-020-09607-9 .

Cheng KKF, Metcalfe A. Qualitative methods and process evaluation in clinical trials context: where to head to? Int J Qual Methods. 2018;17(1):1609406918774212.


Richards DA, Bazeley P, Borglin G, Craig P, Emsley R, Frost J, Hill J, Horwood J, Hutchings HA, Jinks C, et al. Integrating quantitative and qualitative data and findings when undertaking randomised controlled trials. BMJ Open. 2019;9(11):e032081.

Thomas G. How to do your case study. 2nd ed. London: Sage Publications Ltd; 2016.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: case study evaluation of adoption and maintenance of a complex intervention to reduce high-risk primary care prescribing. BMJ Open. 2017;7(3).

Pfadenhauer L, Rohwer A, Burns J, Booth A, Lysdahl KB, Hofmann B, Gerhardus A, Mozygemba K, Tummers M, Wahlster P, et al. Guidance for the assessment of context and implementation in health technology assessments (HTA) and systematic reviews of complex interventions: the Context and Implementation of Complex Interventions (CICI) framework: Integrate-HTA; 2016.

Bate P, Robert G, Fulop N, Ovretveit J, Dixon-Woods M. Perspectives on context. London: The Health Foundation; 2014.

Ovretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf. 2011;20.

Medical Research Council: Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance. 2015.

May CR, Johnson M, Finch T. Implementation, context and complexity. Implement Sci. 2016;11(1):141.

Bate P. Context is everything. In: Perpesctives on Context. The Health Foundation 2014.

Horton TJ, Illingworth JH, Warburton WHP. Overcoming challenges in codifying and replicating complex health care interventions. Health Aff. 2018;37(2):191–7.

O'Connor AM, Tugwell P, Wells GA, Elmslie T, Jolly E, Hollingworth G, McPherson R, Bunn H, Graham I, Drake E. A decision aid for women considering hormone therapy after menopause: decision support framework and evaluation. Patient Educ Couns. 1998;33:267–79.

Creswell J, Poth C. Qualitative inquiry and research design. 4th ed. Thousand Oaks, California: Sage Publications; 2018.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Takahashi ARW, Araujo L. Case study research: opening up research opportunities. RAUSP Manage J. 2020;55(1):100–11.

Tight M. Understanding case study research, small-scale research with meaning. London: Sage Publications; 2017.

May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalisation process theory. Sociology. 2009;43:535.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice. A consolidated framework for advancing implementation science. Implement Sci. 2009;4.

Pawson R, Tilley N. Realist evaluation. London: Sage; 1997.

Dreischulte T, Donnan P, Grant A, Hapca A, McCowan C, Guthrie B. Safer prescribing - a trial of education, informatics & financial incentives. N Engl J Med. 2016;374:1053–64.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: active and less active ingredients of a multi-component complex intervention to reduce high-risk primary care prescribing. Implement Sci. 2017;12(1):4.

Dreischulte T, Grant A, Hapca A, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: quantitative examination of variation between practices in recruitment, implementation and effectiveness. BMJ Open. 2018;8(1):e017133.

Grant A, Dean S, Hay-Smith J, Hagen S, McClurg D, Taylor A, Kovandzic M, Bugge C. Effectiveness and cost-effectiveness randomised controlled trial of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL (Optimising Pelvic Floor Exercises to Achieve Long-term benefits) trial mixed methods longitudinal qualitative case study and process evaluation. BMJ Open. 2019;9(2):e024152.

Hagen S, McClurg D, Bugge C, Hay-Smith J, Dean SG, Elders A, Glazener C, Abdel-fattah M, Agur WI, Booth J, et al. Effectiveness and cost-effectiveness of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL randomised trial. BMJ Open. 2019;9(2):e024153.

Steckler A, Linnan L. Process evaluation for public health interventions and research; 2002.

Durlak JA. Why programme implementation is so important. J Prev Intervent Commun. 1998;17(2):5–18.

Bonell C, Oakley A, Hargreaves J, Strange V, Rees R. Assessment of generalisability in trials of health interventions: suggested framework and systematic review. Br Med J. 2006;333(7563):346–9.


Grant A, Treweek S, Dreischulte T, Foy R, Guthrie B. Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials. 2013;14(1):15.

Yin R. Case study research: design and methods. London: Sage Publications; 2003.

Bugge C, Hay-Smith J, Grant A, Taylor A, Hagen S, McClurg D, Dean S. A 24 month longitudinal qualitative study of women’s experience of electromyography biofeedback pelvic floor muscle training (PFMT) and PFMT alone for urinary incontinence: adherence, outcome and context. ICS Gothenburg 2019. https://www.ics.org/2019/abstract/473 . Accessed 10 Sept 2020.

Hagen S, Elders A, Stratton S, Sergenson N, Bugge C, Dean S, Hay-Smith J, Kilonzo M, Dimitrova M, Abdel-Fattah M, Agur W, Booth J, Glazener C, Guerrero K, McDonald A, Norrie J, Williams LR, McClurg D. Effectiveness of pelvic floor muscle training with and without electromyographic biofeedback for urinary incontinence in women: multicentre randomised controlled trial. BMJ. 2020;371:m3719. https://doi.org/10.1136/bmj.m3719 .

Cook TD. Emergent principles for the design, implementation, and analysis of cluster-based experiments in social science. Ann Am Acad Pol Soc Sci. 2005;599(1):176–98.

Hoffmann T, Glasziou P, Boutron I, Milne R, Perera R, Moher D. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. Br Med J. 2014;348.

Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? Br Med J. 2004;328(7455):1561–3.

Grant A, Dreischulte T, Treweek S, Guthrie B. Study protocol of a mixed-methods evaluation of a cluster randomised trial to improve the safety of NSAID and antiplatelet prescribing: Data-driven Quality Improvement in Primary Care. Trials. 2012;13:154.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12(2):219–45.

Thorne S. The great saturation debate: what the “S word” means and doesn’t mean in qualitative research reporting. Can J Nurs Res. 2020;52(1):3–5.

Guest G, Bunce A, Johnson L. How many interviews are enough?: an experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Guest G, Namey E, Chen M. A simple method to assess and report thematic saturation in qualitative research. PLoS One. 2020;15(5):e0232076.


Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38.

Rycroft-Malone J. The PARIHS framework: a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19(4):297–304.

Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):103.

Cresswell JW, Plano Clark VL. Designing and conducting mixed methods research. Thousand Oaks: Sage Publications Ltd; 2007.

Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

Craig P, Ruggiero E, Frohlich KL, Mykhalovskiy E, White M. Taking account of context in population health intervention research: guidance for producers, users and funders of research: National Institute for Health Research; 2018. https://www.ncbi.nlm.nih.gov/books/NBK498645/pdf/Bookshelf_NBK498645.pdf .


Acknowledgements

We would like to thank Professor Shaun Treweek for the discussions about context in trials.

Funding

No funding was received for this work.

Author information

Authors and Affiliations

School of Nursing, Midwifery and Paramedic Practice, Robert Gordon University, Garthdee Road, Aberdeen, AB10 7QB, UK

Aileen Grant

Faculty of Health Sciences and Sport, University of Stirling, Pathfoot Building, Stirling, FK9 4LA, UK

Carol Bugge

Department of Surgery and Cancer, Imperial College London, Charing Cross Campus, London, W6 8RP, UK

Mary Wells

Contributions

AG, CB and MW conceptualised the study. AG wrote the paper. CB and MW commented on the drafts. All authors have approved the final manuscript.

Corresponding author

Correspondence to Aileen Grant .

Ethics declarations

Ethics approval and consent to participate

Ethics approval and consent to participate are not applicable as no participants were included.

Consent for publication

Consent for publication is not required as no participants were included.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Grant, A., Bugge, C. & Wells, M. Designing process evaluations using case study to explore the context of complex interventions evaluated in trials. Trials 21 , 982 (2020). https://doi.org/10.1186/s13063-020-04880-4

Download citation

Received : 09 April 2020

Accepted : 06 November 2020

Published : 27 November 2020

DOI : https://doi.org/10.1186/s13063-020-04880-4

Share this article


Keywords

  • Process evaluation
  • Case study design

ISSN: 1745-6215



  • Open access
  • Published: 10 November 2020

Case study research for better evaluations of complex interventions: rationale and challenges

  • Sara Paparini   ORCID: orcid.org/0000-0002-1909-2481 1 ,
  • Judith Green 2 ,
  • Chrysanthi Papoutsi 1 ,
  • Jamie Murdoch 3 ,
  • Mark Petticrew 4 ,
  • Trish Greenhalgh 1 ,
  • Benjamin Hanckel 5 &
  • Sara Shaw 1  

BMC Medicine volume  18 , Article number:  301 ( 2020 )


The need for better methods for evaluation in health research has been widely recognised. The ‘complexity turn’ has drawn attention to the limitations of relying on causal inference from randomised controlled trials alone for understanding whether, and under which conditions, interventions in complex systems improve health services or the public health, and what mechanisms might link interventions and outcomes. We argue that case study research—currently denigrated as poor evidence—is an under-utilised resource for not only providing evidence about context and transferability, but also for helping strengthen causal inferences when pathways between intervention and effects are likely to be non-linear.

Case study research, as an overall approach, is based on in-depth explorations of complex phenomena in their natural, or real-life, settings. Empirical case studies typically enable dynamic understanding of complex challenges and provide evidence about causal mechanisms and the necessary and sufficient conditions (contexts) for intervention implementation and effects. This is essential evidence not just for researchers concerned about internal and external validity, but also research users in policy and practice who need to know what the likely effects of complex programmes or interventions will be in their settings. The health sciences have much to learn from scholarship on case study methodology in the social sciences. However, there are multiple challenges in fully exploiting the potential learning from case study research. First are misconceptions that case study research can only provide exploratory or descriptive evidence. Second, there is little consensus about what a case study is, and considerable diversity in how empirical case studies are conducted and reported. Finally, as case study researchers typically (and appropriately) focus on thick description (that captures contextual detail), it can be challenging to identify the key messages related to intervention evaluation from case study reports.

Whilst the diversity of published case studies in health services and public health research is rich and productive, we recommend further clarity and specific methodological guidance for those reporting case study research for evaluation audiences.


The need for methodological development to address the most urgent challenges in health research has been well-documented. Many of the most pressing questions for public health research, where the focus is on system-level determinants [ 1 , 2 ], and for health services research, where provisions typically vary across sites and are provided through interlocking networks of services [ 3 ], require methodological approaches that can attend to complexity. The need for methodological advance has arisen, in part, as a result of the diminishing returns from randomised controlled trials (RCTs) where they have been used to answer questions about the effects of interventions in complex systems [ 4 , 5 , 6 ]. In conditions of complexity, there is limited value in maintaining the current orientation to experimental trial designs in the health sciences as providing ‘gold standard’ evidence of effect.

There are increasing calls for methodological pluralism [ 7 , 8 ], with the recognition that complex intervention and context are not easily or usefully separated (as is often the situation when using trial design), and that system interruptions may have effects that are not reducible to linear causal pathways between intervention and outcome. These calls are reflected in a shifting and contested discourse of trial design, seen with the emergence of realist [ 9 ], adaptive and hybrid (types 1, 2 and 3) [ 10 , 11 ] trials that blend studies of effectiveness with a close consideration of the contexts of implementation. Similarly, process evaluation has now become a core component of complex healthcare intervention trials, reflected in MRC guidance on how to explore implementation, causal mechanisms and context [ 12 ].

Evidence about the context of an intervention is crucial for questions of external validity. As Woolcock [ 4 ] notes, even if RCT designs are accepted as robust for maximising internal validity, questions of transferability (how well the intervention works in different contexts) and generalisability (how well the intervention can be scaled up) remain unanswered [ 5 , 13 ]. For research evidence to have impact on policy and systems organisation, and thus to improve population and patient health, there is an urgent need for better methods for strengthening external validity, including a better understanding of the relationship between intervention and context [ 14 ].

Policymakers, healthcare commissioners and other research users require credible evidence of relevance to their settings and populations [ 15 ], to perform what Rosengarten and Savransky [ 16 ] call ‘careful abstraction’ to the locales that matter for them. They also require robust evidence for understanding complex causal pathways. Case study research, currently under-utilised in public health and health services evaluation, can offer considerable potential for strengthening faith in both external and internal validity. For example, in an empirical case study of how the policy of free bus travel had specific health effects in London, UK, a quasi-experimental evaluation (led by JG) identified how important aspects of context (a good public transport system) and intervention (that it was universal) were necessary conditions for the observed effects, thus providing useful, actionable evidence for decision-makers in other contexts [ 17 ].

The overall approach of case study research is based on the in-depth exploration of complex phenomena in their natural, or ‘real-life’, settings. Empirical case studies typically enable dynamic understanding of complex challenges rather than restricting the focus on narrow problem delineations and simple fixes. Case study research is a diverse and somewhat contested field, with multiple definitions and perspectives grounded in different ways of viewing the world, and involving different combinations of methods. In this paper, we raise awareness of such plurality and highlight the contribution that case study research can make to the evaluation of complex system-level interventions. We review some of the challenges in exploiting the current evidence base from empirical case studies and conclude by recommending that further guidance and minimum reporting criteria for evaluation using case studies, appropriate for audiences in the health sciences, can enhance the take-up of evidence from case study research.

Case study research offers evidence about context, causal inference in complex systems and implementation

Well-conducted and described empirical case studies provide evidence on context, complexity and mechanisms for understanding how, where and why interventions have their observed effects. Recognition of the importance of context for understanding the relationships between interventions and outcomes is hardly new. In 1943, Canguilhem berated an over-reliance on experimental designs for determining universal physiological laws: ‘As if one could determine a phenomenon’s essence apart from its conditions! As if conditions were a mask or frame which changed neither the face nor the picture!’ ([ 18 ] p126). More recently, a concern with context has been expressed in health systems and public health research as part of what has been called the ‘complexity turn’ [ 1 ]: a recognition that many of the most enduring challenges for developing an evidence base require a consideration of system-level effects [ 1 ] and the conceptualisation of interventions as interruptions in systems [ 19 ].

The case study approach is widely recognised as offering an invaluable resource for understanding the dynamic and evolving influence of context on complex, system-level interventions [ 20 , 21 , 22 , 23 ]. Empirically, case studies can directly inform assessments of where, when, how and for whom interventions might be successfully implemented, by helping to specify the necessary and sufficient conditions under which interventions might have effects and to consolidate learning on how interdependencies, emergence and unpredictability can be managed to achieve and sustain desired effects. Case study research has the potential to address four objectives for improving research and reporting of context recently set out by guidance on taking account of context in population health research [ 24 ], that is to (1) improve the appropriateness of intervention development for specific contexts, (2) improve understanding of ‘how’ interventions work, (3) better understand how and why impacts vary across contexts and (4) ensure reports of intervention studies are most useful for decision-makers and researchers.

However, evaluations of complex healthcare interventions have arguably not exploited the full potential of case study research and can learn much from other disciplines. For evaluative research, exploratory case studies have had a traditional role of providing data on ‘process’, or initial ‘hypothesis-generating’ scoping, but might also have an increasing salience for explanatory aims. Across the social and political sciences, different kinds of case studies are undertaken to meet diverse aims (description, exploration or explanation) and across different scales (from small N qualitative studies that aim to elucidate processes, or provide thick description, to more systematic techniques designed for medium-to-large N cases).

Case studies with explanatory aims vary in terms of their positioning within mixed-methods projects, with designs including (but not restricted to) (1) single N of 1 studies of interventions in specific contexts, where the overall design is a case study that may incorporate one or more (randomised or not) comparisons over time and between variables within the case; (2) a series of cases conducted or synthesised to provide explanation from variations between cases; and (3) case studies of particular settings within RCT or quasi-experimental designs to explore variation in effects or implementation.

Detailed qualitative research (typically done as ‘case studies’ within process evaluations) provides evidence for the plausibility of mechanisms [ 25 ], offering theoretical generalisations for how interventions may function under different conditions. Although RCT designs reduce many threats to internal validity, the mechanisms of effect remain opaque, particularly when the causal pathways between ‘intervention’ and ‘effect’ are long and potentially non-linear: case study research has a more fundamental role here, in providing detailed observational evidence for causal claims [ 26 ] as well as producing a rich, nuanced picture of tensions and multiple perspectives [ 8 ].

Longitudinal or cross-case analysis may be best suited for evidence generation in system-level evaluative research. Turner [ 27 ], for instance, reflecting on the complex processes in major system change, has argued for the need for methods that integrate learning across cases, to develop theoretical knowledge that would enable inferences beyond the single case, and to develop generalisable theory about organisational and structural change in health systems. Qualitative Comparative Analysis (QCA) [ 28 ] is one such formal method for deriving causal claims, using set theory mathematics to integrate data from empirical case studies to answer questions about the configurations of causal pathways linking conditions to outcomes [ 29 , 30 ].
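The set-theoretic logic underpinning QCA can be illustrated with a minimal crisp-set sketch. The cases, condition names and outcomes below are hypothetical, and a full QCA would also involve Boolean minimisation of the truth table; this only shows configuration grouping and a necessity check.

```python
# Minimal crisp-set QCA-style sketch: build a truth table of condition
# configurations and check the consistency of a candidate necessary condition.
# (Hypothetical data; condition names are invented for illustration.)
cases = [
    {"id": "A", "training": 1, "leadership": 1, "resources": 1, "outcome": 1},
    {"id": "B", "training": 1, "leadership": 0, "resources": 1, "outcome": 1},
    {"id": "C", "training": 0, "leadership": 1, "resources": 0, "outcome": 0},
    {"id": "D", "training": 1, "leadership": 1, "resources": 0, "outcome": 1},
    {"id": "E", "training": 0, "leadership": 0, "resources": 1, "outcome": 0},
]
CONDITIONS = ["training", "leadership", "resources"]

def truth_table(cases, conditions):
    """Group cases by their configuration of conditions; report, per
    configuration, the number of cases and the proportion exhibiting
    the outcome (consistency)."""
    rows = {}
    for case in cases:
        config = tuple(case[c] for c in conditions)
        rows.setdefault(config, []).append(case["outcome"])
    return {cfg: {"n": len(outs), "consistency": sum(outs) / len(outs)}
            for cfg, outs in rows.items()}

def necessity(cases, condition):
    """Consistency of necessity: the share of outcome-positive cases
    that also exhibit the condition (1.0 suggests X is necessary for Y)."""
    positives = [c for c in cases if c["outcome"] == 1]
    return sum(c[condition] for c in positives) / len(positives)

print(truth_table(cases, CONDITIONS))
print(necessity(cases, "training"))  # 1.0 in this toy data
```

In this toy data every outcome-positive case exhibits 'training', consistent with a necessary-condition claim, whereas 'leadership' does not reach full consistency; dedicated QCA software would take the truth table further, minimising configurations into causal pathways.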

Nonetheless, the single N case study, too, provides opportunities for theoretical development [ 31 ], and theoretical generalisation or analytical refinement [ 32 ]. How ‘the case’ and ‘context’ are conceptualised is crucial here. Findings from the single case may seem to be confined to its intrinsic particularities in a specific and distinct context [ 33 ]. However, if such context is viewed as exemplifying wider social and political forces, the single case can be ‘telling’, rather than ‘typical’, and offer insight into a wider issue [ 34 ]. Internal comparisons within the case can offer rich possibilities for logical inferences about causation [ 17 ]. Further, case studies of any size can be used for theory testing through refutation [ 22 ]. The potential lies, then, in utilising the strengths and plurality of case study to support theory-driven research within different methodological paradigms.

Evaluation research in health has much to learn from a range of social sciences where case study methodology has been used to develop various kinds of causal inference. For instance, Gerring [ 35 ] expands on the within-case variations utilised to make causal claims. For Gerring [ 35 ], case studies come into their own with regard to invariant or strong causal claims (such as X is a necessary and/or sufficient condition for Y) rather than for probabilistic causal claims. For the latter (where experimental methods might have an advantage in estimating effect sizes), case studies offer evidence on mechanisms: from observations of X affecting Y, from process tracing or from pattern matching. Case studies also support the study of emergent causation, that is, the multiple interacting properties that account for particular and unexpected outcomes in complex systems, such as in healthcare [ 8 ].

Finally, efficacy (or beliefs about efficacy) is not the only contributor to intervention uptake, with a range of organisational and policy contingencies affecting whether an intervention is likely to be rolled out in practice. Case study research is, therefore, invaluable for learning about contextual contingencies and identifying the conditions necessary for interventions to become normalised (i.e. implemented routinely) in practice [ 36 ].

The challenges in exploiting evidence from case study research

At present, there are significant challenges in exploiting the benefits of case study research in evaluative health research, which relate to status, definition and reporting. Case study research has been marginalised at the bottom of an evidence hierarchy, seen to offer little by way of explanatory power, if nonetheless useful for adding descriptive data on process or providing useful illustrations for policymakers [ 37 ]. This is an opportune moment to revisit this low status. As health researchers are increasingly charged with evaluating ‘natural experiments’—the use of face masks in the response to the COVID-19 pandemic being a recent example [ 38 ]—rather than interventions that take place in settings that can be controlled, research approaches using methods that strengthen causal inference without requiring randomisation become more relevant.

A second challenge for improving the use of case study evidence in evaluative health research is that, as we have seen, what is meant by ‘case study’ varies widely, not only across but also within disciplines. There is indeed little consensus amongst methodologists as to how to define ‘a case study’. Definitions focus, variously, on small sample size or lack of control over the intervention (e.g. [ 39 ] p194), on in-depth study and context [ 40 , 41 ], on the logic of inference used [ 35 ] or on distinct research strategies which incorporate a number of methods to address questions of ‘how’ and ‘why’ [ 42 ]. Moreover, definitions developed for specific disciplines do not capture the range of ways in which case study research is carried out across disciplines. Multiple definitions of case study reflect the richness and diversity of the approach. However, evidence suggests that a lack of consensus across methodologists results in some of the limitations of published reports of empirical case studies [ 43 , 44 ]. Hyett and colleagues [ 43 ], for instance, reviewing reports in qualitative journals, found little match between methodological definitions of case study research and how authors used the term.

This raises the third challenge we identify that case study reports are typically not written in ways that are accessible or useful for the evaluation research community and policymakers. Case studies may not appear in journals widely read by those in the health sciences, either because space constraints preclude the reporting of rich, thick descriptions, or because of the reported lack of willingness of some biomedical journals to publish research that uses qualitative methods [ 45 ], signalling the persistence of the aforementioned evidence hierarchy. Where they do, however, the term ‘case study’ is used to indicate, interchangeably, a qualitative study, an N of 1 sample, or a multi-method, in-depth analysis of one example from a population of phenomena. Definitions of what constitutes the ‘case’ are frequently lacking and appear to be used as a synonym for the settings in which the research is conducted. Despite offering insights for evaluation, the primary aims may not have been evaluative, so the implications may not be explicitly drawn out. Indeed, some case study reports might properly be aiming for thick description without necessarily seeking to inform about context or causality.

Acknowledging plurality and developing guidance

We recognise that definitional and methodological plurality is not only inevitable, but also a necessary and creative reflection of the very different epistemological and disciplinary origins of health researchers, and the aims they have in doing and reporting case study research. Indeed, to provide some clarity, Thomas [46] has suggested a typology of subject/purpose/approach/process for classifying aims (e.g. evaluative or exploratory), sample rationale and selection and methods for data generation of case studies. We also recognise that the diversity of methods used in case study research, and the necessary focus on narrative reporting, does not lend itself to straightforward development of formal quality or reporting criteria.

Existing checklists for reporting case study research from the social sciences—for example Lincoln and Guba’s [47] and Stake’s [33]—are primarily orientated to the quality of narrative produced, and the extent to which they encapsulate thick description, rather than the more pragmatic issues of implications for intervention effects. Those designed for clinical settings, such as the CARE (CAse REports) guidelines, provide specific reporting guidelines for medical case reports about single patients or small groups of patients [48], not for case study research.

The Design of Case Study Research in Health Care (DESCARTE) model [44] suggests a series of questions to be asked of a case study researcher (including clarity about the philosophy underpinning their research), study design (with a focus on case definition) and analysis (to improve process). The model resembles toolkits for enhancing the quality and robustness of qualitative and mixed-methods research reporting, and it is usefully open-ended and non-prescriptive. However, even if it does include some reflections on context, the model does not fully address the aspects of context, logic and causal inference that are perhaps most relevant for evaluative research in health.

Hence, for evaluative research where the aim is to report empirical findings in ways that are intended to be pragmatically useful for health policy and practice, this may be an opportune time to consider how best to navigate plurality around what is (minimally) important to report when publishing empirical case studies, especially with regard to the complex relationships between context and interventions, information that case study research is well placed to provide.

The conventional scientific quest for certainty, predictability and linear causality (maximised in RCT designs) has to be augmented by the study of uncertainty, unpredictability and emergent causality [8] in complex systems. This will require methodological pluralism, and openness to broadening the evidence base to better understand both causality in, and the transferability of, system change interventions [14, 20, 23, 25]. Case study research evidence is essential, yet is currently under-exploited in the health sciences. If evaluative health research is to move beyond the current impasse on methods for understanding interventions as interruptions in complex systems, we need to consider in more detail how researchers can conduct and report empirical case studies which do aim to elucidate the contextual factors which interact with interventions to produce particular effects. To this end, supported by the UK’s Medical Research Council, we are embracing the challenge to develop guidance for case study researchers studying complex interventions. Following a meta-narrative review of the literature, we are planning a Delphi study to inform guidance that will, at minimum, cover the value of case study research for evaluating the interrelationship between context and complex system-level interventions; for situating and defining ‘the case’, and generalising from case studies; as well as provide specific guidance on conducting, analysing and reporting case study research. Our hope is that such guidance can support researchers evaluating interventions in complex systems to better exploit the diversity and richness of case study research.

Availability of data and materials

Not applicable (article based on existing available academic publications)

Abbreviations

QCA: Qualitative comparative analysis

QED: Quasi-experimental design

RCT: Randomised controlled trial

Diez Roux AV. Complex systems thinking and current impasses in health disparities research. Am J Public Health. 2011;101(9):1627–34.


Ogilvie D, Mitchell R, Mutrie N, Petticrew M, Platt S. Evaluating health effects of transport interventions: methodologic case study. Am J Prev Med. 2006;31:118–26.

Walshe C. The evaluation of complex interventions in palliative care: an exploration of the potential of case study research strategies. Palliat Med. 2011;25(8):774–81.

Woolcock M. Using case studies to explore the external validity of ‘complex’ development interventions. Evaluation. 2013;19:229–48.

Cartwright N. Are RCTs the gold standard? BioSocieties. 2007;2(1):11–20.

Deaton A, Cartwright N. Understanding and misunderstanding randomized controlled trials. Soc Sci Med. 2018;210:2–21.

Salway S, Green J. Towards a critical complex systems approach to public health. Crit Public Health. 2017;27(5):523–4.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Bonell C, Warren E, Fletcher A. Realist trials and the testing of context-mechanism-outcome configurations: a response to Van Belle et al. Trials. 2016;17:478.

Pallmann P, Bedding AW, Choodari-Oskooei B. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:29.

Curran G, Bauer M, Mittman B, Pyne J, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26. https://doi.org/10.1097/MLR.0b013e3182408812 .

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258. Available from: https://www.bmj.com/content/350/bmj.h1258.

Evans RE, Craig P, Hoddinott P, Littlecott H, Moore L, Murphy S, et al. When and how do ‘effective’ interventions need to be adapted and/or re-evaluated in new contexts? The need for guidance. J Epidemiol Community Health. 2019;73(6):481–2.

Shoveller J. A critical examination of representations of context within research on population health interventions. Crit Public Health. 2016;26(5):487–500.

Treweek S, Zwarenstein M. Making trials matter: pragmatic and explanatory trials and the problem of applicability. Trials. 2009;10(1):37.

Rosengarten M, Savransky M. A careful biomedicine? Generalization and abstraction in RCTs. Crit Public Health. 2019;29(2):181–91.

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406.

Canguilhem G. The normal and the pathological. New York: Zone Books; 1991. (1949).


Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

King G, Keohane RO, Verba S. Designing social inquiry: scientific inference in qualitative research: Princeton University Press; 1994.

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

Yin R. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34(5 Pt 2):1209.


Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016 [cited 2020 Jun 30];4(16). Available from: https://www.journalslibrary.nihr.ac.uk/hsdr/hsdr04160#/abstract .

Craig P, Di Ruggiero E, Frohlich KL, Mykhalovskiy E, White M, on behalf of the CIHR–NIHR Context Guidance Authors Group. Taking account of context in population health intervention research: guidance for producers, users and funders of research. Southampton: NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32.

Mahoney J. Strategies of causal inference in small-N analysis. Sociol Methods Res. 2000;4:387–424.

Turner S. Major system change: a management and organisational research perspective. In: Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016;4(16). https://doi.org/10.3310/hsdr04160.

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225.

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis: Cambridge University Press; 2012. 369 p.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12:219–45.

Tsoukas H. Craving for generality and small-N studies: a Wittgensteinian approach towards the epistemology of the particular in organization and management studies. Sage Handb Organ Res Methods. 2009:285–301.

Stake RE. The art of case study research. London: Sage Publications Ltd; 1995.

Mitchell JC. Typicality and the case study. In: Ethnographic research: a guide to general conduct. 1984. p. 238–41.

Gerring J. What is a case study and what is it good for? Am Polit Sci Rev. 2004;98(2):341–54.

May C, Mort M, Williams T, Mair F, Gask L. Health technology assessment in its local contexts: studies of telehealthcare. Soc Sci Med. 2003;57:697–710.

McGill E. Trading quality for relevance: non-health decision-makers’ use of evidence on the social determinants of health. BMJ Open. 2015;5(4):e007053.

Greenhalgh T. We can’t be 100% sure face masks work – but that shouldn’t stop us wearing them | Trish Greenhalgh. The Guardian. 2020 [cited 2020 Jun 27]; Available from: https://www.theguardian.com/commentisfree/2020/jun/05/face-masks-coronavirus .

Hammersley M. So, what are case studies? In: What’s wrong with ethnography? New York: Routledge; 1992.

Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Med Res Methodol. 2011;11(1):100.

Luck L, Jackson D, Usher K. Case study: a bridge across the paradigms. Nurs Inq. 2006;13(2):103–9.

Yin RK. Case study research and applications: design and methods: Sage; 2017.

Hyett N, Kenny A, Dickson-Swift V. Methodology or method? A critical review of qualitative case study reports. Int J Qual Stud Health Well-Being. 2014;9:23606.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. BMJ. 2016;352:i563.

Thomas G. A typology for the case study in social science following a review of definition, discourse, and structure. Qual Inq. 2011;17(6):511–21.

Lincoln YS, Guba EG. Judging the quality of case study reports. Int J Qual Stud Educ. 1990;3(1):53–9.

Riley DS, Barber MS, Kienle GS, Aronson JK, Schoen-Angerer T, Tugwell P, et al. CARE guidelines for case reports: explanation and elaboration document. J Clin Epidemiol. 2017;89:218–35.


Acknowledgements

Not applicable

Funding

This work was funded by the Medical Research Council - MRC Award MR/S014632/1 HCS: Case study, Context and Complex interventions (TRIPLE C). SP was additionally funded by the University of Oxford's Higher Education Innovation Fund (HEIF).

Author information

Authors and affiliations.

Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

Sara Paparini, Chrysanthi Papoutsi, Trish Greenhalgh & Sara Shaw

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green

School of Health Sciences, University of East Anglia, Norwich, UK

Jamie Murdoch

Public Health, Environments and Society, London School of Hygiene & Tropical Medicin, London, UK

Mark Petticrew

Institute for Culture and Society, Western Sydney University, Penrith, Australia

Benjamin Hanckel


Contributions

JG, MP, SP, JM, TG, CP and SS drafted the initial paper; all authors contributed to the drafting of the final version, and read and approved the final manuscript.

Corresponding author

Correspondence to Sara Paparini .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Paparini, S., Green, J., Papoutsi, C. et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med 18 , 301 (2020). https://doi.org/10.1186/s12916-020-01777-6


Received : 03 July 2020

Accepted : 07 September 2020

Published : 10 November 2020

DOI : https://doi.org/10.1186/s12916-020-01777-6


Keywords

  • Qualitative
  • Case studies
  • Mixed-method
  • Public health
  • Health services research
  • Interventions


15.7 Evaluation: Presentation and Analysis of Case Study

Learning Outcomes

By the end of this section, you will be able to:

  • Revise writing to follow the genre conventions of case studies.
  • Evaluate the effectiveness and quality of a case study report.

Case studies follow a structure of background and context, methods, findings, and analysis. Body paragraphs should have main points and concrete details. In addition, case studies are written in formal language with precise wording and with a specific purpose and audience (generally other professionals in the field) in mind. Case studies also adhere to the conventions of the discipline’s formatting guide (APA Documentation and Format). Compare your case study with the following rubric as a final check.



Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Authors: Michelle Bachelor Robinson, Maria Jerskey, featuring Toby Fulwiler
  • Publisher/website: OpenStax
  • Book title: Writing Guide with Handbook
  • Publication date: Dec 21, 2021
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Section URL: https://openstax.org/books/writing-guide/pages/15-7-evaluation-presentation-and-analysis-of-case-study

© Dec 19, 2023 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Psychology: Research and Review

  • Open access
  • Published: 19 March 2021

Appraising psychotherapy case studies in practice-based evidence: introducing Case Study Evaluation-tool (CaSE)

  • Greta Kaluzeviciute   ORCID: orcid.org/0000-0003-1197-177X 1  

Psicologia: Reflexão e Crítica volume 34, Article number: 9 (2021)


Abstract

Systematic case studies are often placed at the low end of evidence-based practice (EBP) due to lack of critical appraisal. This paper seeks to attend to this research gap by introducing a novel Case Study Evaluation-tool (CaSE). First, issues around knowledge generation and validity are assessed in both EBP and practice-based evidence (PBE) paradigms. Although systematic case studies are more aligned with the PBE paradigm, the paper argues for a complementary, third way approach between the two paradigms and their ‘exemplary’ methodologies: case studies and randomised controlled trials (RCTs). Second, the paper argues that all forms of research can produce ‘valid evidence’ but the validity itself needs to be assessed against each specific research method and purpose. Existing appraisal tools for qualitative research (JBI, CASP, ETQS) are shown, through a comparative tool assessment, to have limited relevance for the appraisal of systematic case studies. Third, the paper develops purpose-oriented evaluation criteria for systematic case studies through the CaSE Checklist for Essential Components in Systematic Case Studies and the CaSE Purpose-based Evaluative Framework for Systematic Case Studies. The checklist approach aids reviewers in assessing the presence or absence of essential case study components (internal validity). The framework approach aims to assess the effectiveness of each case against its set out research objectives and aims (external validity), based on different systematic case study purposes in psychotherapy. Finally, the paper demonstrates the application of the tool with a case example and notes further research trajectories for the development of the CaSE tool.

Introduction

Due to growing demands of evidence-based practice, standardised research assessment and appraisal tools have become common in healthcare and clinical treatment (Hannes, Lockwood, & Pearson, 2010; Hartling, Chisholm, Thomson, & Dryden, 2012; Katrak, Bialocerkowski, Massy-Westropp, Kumar, & Grimmer, 2004). This allows researchers to critically appraise research findings on the basis of their validity, results, and usefulness (Hill & Spittlehouse, 2003). Despite the upsurge of critical appraisal in qualitative research (Williams, Boylan, & Nunan, 2019), there are no assessment or appraisal tools designed for psychotherapy case studies.

Although not without controversies (Michels, 2000), case studies remain central to the investigation of psychotherapy processes (Midgley, 2006; Willemsen, Della Rosa, & Kegerreis, 2017). This is particularly true of systematic case studies, the most common form of case study in contemporary psychotherapy research (Davison & Lazarus, 2007; McLeod & Elliott, 2011).

Unlike the classic clinical case study, systematic cases usually involve a team of researchers, who gather data from multiple different sources (e.g., questionnaires, observations by the therapist, interviews, statistical findings, clinical assessment, etc.), and involve a rigorous data triangulation process to assess whether the data from different sources converge (McLeod, 2010). Since systematic case studies are methodologically pluralistic, they have a greater interest in situating patients within the study of a broader population than clinical case studies (Iwakabe & Gazzola, 2009). Systematic case studies are considered to be an accessible method for developing the research evidence base in psychotherapy (Widdowson, 2011), especially since they correct some of the methodological limitations (e.g. lack of ‘third party’ perspectives and bias in data analysis) inherent to classic clinical case studies (Iwakabe & Gazzola, 2009). They have been used for the purposes of clinical training (Tuckett, 2008), outcome assessment (Hilliard, 1993), development of clinical techniques (Almond, 2004) and meta-analysis of qualitative findings (Timulak, 2009). All these developments signal a revived interest in the case study method, but also point to the obvious lack of a research assessment tool suitable for case studies in psychotherapy (Table 1).

To attend to this research gap, this paper first reviews issues around the conceptualisation of validity within the paradigms of evidence-based practice (EBP) and practice-based evidence (PBE). Although case studies are often positioned at the low end of EBP (Aveline, 2005), the paper suggests that systematic cases are a valuable form of evidence, capable of complementing large-scale studies such as randomised controlled trials (RCTs). However, there remains a difficulty in assessing the quality and relevance of case study findings to broader psychotherapy research.

As a way forward, the paper introduces a novel Case Study Evaluation-tool (CaSE) in the form of the CaSE Purpose-based Evaluative Framework for Systematic Case Studies and the CaSE Checklist for Essential Components in Systematic Case Studies. The long-term development of CaSE would contribute to psychotherapy research and practice in three ways.

Given the significance of methodological pluralism and diverse research aims in systematic case studies, CaSE will not seek to prescribe explicit case study writing guidelines, which has already been done by numerous authors (McLeod, 2010; Meganck, Inslegers, Krivzov, & Notaerts, 2017; Willemsen et al., 2017). Instead, CaSE will enable the retrospective assessment of systematic case study findings and their relevance (or lack thereof) to broader psychotherapy research and practice. However, there is no reason to assume that CaSE cannot be used prospectively (i.e. producing systematic case studies in accordance with the CaSE evaluative framework, as per point 3 in Table 2).

The development of a research assessment or appraisal tool is a lengthy, ongoing process (Long & Godfrey, 2004). It is particularly challenging to develop a comprehensive purpose-oriented evaluative framework, suitable for the assessment of diverse methodologies, aims and outcomes. As such, this paper should be treated as an introduction to the broader development of the CaSE tool. It will introduce the rationale behind CaSE and lay out its main approach to evidence and evaluation, with further development in mind. A case example from the Single Case Archive (SCA) ( https://singlecasearchive.com ) will be used to demonstrate the application of the tool ‘in action’. The paper notes further research trajectories and discusses some of the limitations around the use of the tool.

Separating the wheat from the chaff: what is and is not evidence in psychotherapy (and who gets to decide?)

The common approach: evidence-based practice.

In the last two decades, psychotherapy has become increasingly centred around the idea of an evidence-based practice (EBP). Initially introduced in medicine, EBP has been defined as ‘conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’ (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). EBP revolves around efficacy research: it seeks to examine whether a specific intervention has a causal (in this case, measurable) effect on clinical populations (Barkham & Mellor-Clark, 2003). From a conceptual standpoint, Sackett and colleagues defined EBP as a paradigm that is inclusive of many methodologies, so long as they contribute towards the clinical decision-making process and accumulation of best currently available evidence in any given set of circumstances (Gabbay & le May, 2011). Similarly, the American Psychological Association (APA, 2010) has recently issued calls for evidence-based systematic case studies in order to produce standardised measures for evaluating process and outcome data across different therapeutic modalities.

However, given EBP’s focus on establishing cause-and-effect relationships (Rosqvist, Thomas, & Truax, 2011), it is unsurprising that qualitative research is generally not considered to be ‘gold standard’ or ‘efficacious’ within this paradigm (Aveline, 2005; Cartwright & Hardie, 2012; Edwards, 2013; Edwards, Dattilio, & Bromley, 2004; Longhofer, Floersch, & Hartmann, 2017). Qualitative methods like systematic case studies maintain an appreciation for context, complexity and meaning making. Therefore, instead of measuring regularly occurring causal relations (as in quantitative studies), the focus is on studying complex social phenomena (e.g. relationships, events, experiences, feelings, etc.) (Erickson, 2012; Maxwell, 2004). Edwards (2013) points out that, although context-based research in systematic case studies is the bedrock of psychotherapy theory and practice, it has also become shrouded by an unfortunate ideological description: ‘anecdotal’ case studies (i.e. unscientific narratives lacking evidence, as opposed to ‘gold standard’ evidence, a term often used to describe the RCT method and the therapeutic modalities supported by it), leading to a further need for advocacy in and defence of the unique epistemic process involved in case study research (Fishman, Messer, Edwards, & Dattilio, 2017).

The EBP paradigm prioritises the quantitative approach to causality, most notably through its focus on high generalisability and the ability to deal with bias through the randomisation process. These conditions are associated with randomised controlled trials (RCTs) but are limited (or, as some argue, impossible) in qualitative research methods such as the case study (Margison et al., 2000) (Table 3).

‘Evidence’ from an EBP standpoint hovers over the epistemological assumption of procedural objectivity: knowledge can be generated in a standardised, non-erroneous way, thus producing objective (i.e. with minimised bias) data. This can be achieved by anyone, as long as they are able to perform the methodological procedure (e.g. RCT) appropriately, in a ‘clearly defined and accepted process that assists with knowledge production’ (Douglas, 2004, p. 131). If there is a well-outlined quantitative form for knowledge production, the same outcome should be achieved regardless of who processes or interprets the information. For example, researchers using Cochrane Review assess the strength of evidence using meticulously controlled and scrupulous techniques; in turn, this minimises individual judgment and creates unanimity of outcomes across different groups of people (Gabbay & le May, 2011). The typical process of knowledge generation (through employing RCTs and procedural objectivity) in EBP is demonstrated in Fig. 1.

Fig. 1 Typical knowledge generation process in evidence-based practice (EBP)

In EBP, the concept of validity remains somewhat controversial, with many critics stating that it limits rather than strengthens knowledge generation (Berg, 2019; Berg & Slaattelid, 2017; Lilienfeld, Ritschel, Lynn, Cautin, & Latzman, 2013). This is because efficacy research relies on internal validity. At a general level, this concept refers to the congruence between the research study and the research findings (i.e. the research findings were not influenced by anything external to the study, such as confounding variables, methodological errors and bias); at a more specific level, internal validity determines the extent to which a study establishes a reliable causal relationship between an independent variable (e.g. treatment) and a dependent variable (outcome or effect) (Margison et al., 2000). This approach to validity is demonstrated in Fig. 2.

Fig. 2 Internal validity

Social scientists have argued that there is a trade-off between research rigour and generalisability: the more specific the sample and the more rigorously defined the intervention, the less applicable the outcome is likely to be to everyday, routine practice. As such, there remains a tension between employing procedural objectivity, which increases the rigour of research outcomes, and applying such outcomes to routine psychotherapy practice where scientific standards of evidence are not uniform.

According to McLeod (2002), inability to address questions that are most relevant for practitioners has contributed to a deepening research–practice divide in psychotherapy. Studies investigating how practitioners make clinical decisions and the kinds of evidence they refer to show that there is a strong preference for knowledge that is not generated procedurally, i.e. knowledge that encompasses concrete clinical situations, experiences and techniques. A study by Stewart and Chambless (2007) sought to assess how a larger population of clinicians (under APA, from varying clinical schools of thought and independent practices; sample size 591) make treatment decisions in private practice. The study found that large-scale statistical data was not the primary source of information sought by clinicians. The most important influences were identified as past clinical experiences and clinical expertise (M = 5.62). Treatment materials based on clinical case observations and theory (M = 4.72) were used almost as frequently as psychotherapy outcome research findings (M = 4.80) (i.e. evidence-based research). These numbers are likely to fluctuate across different forms of psychotherapy; however, they are indicative of the need for research about routine clinical settings that does not isolate or generalise the effect of an intervention but examines the variations in psychotherapy processes.

The alternative approach: practice-based evidence

In an attempt to dissolve or lessen the research–practice divide, an alternative paradigm of practice-based evidence (PBE) has been suggested (Barkham & Mellor-Clark, 2003; Fox, 2003; Green & Latchford, 2012; Iwakabe & Gazzola, 2009; Laska, Motulsky, Wertz, Morrow, & Ponterotto, 2014; Margison et al., 2000). PBE represents a shift in how we think about evidence and knowledge generation in psychotherapy. PBE treats research as a local and contingent process (at least initially), which means it focuses on variations (e.g. in patient symptoms) and complexities (e.g. of clinical setting) in the studied phenomena (Fox, 2003). Moreover, research and theory-building are seen as activities complementary to, rather than detached from, clinical practice. That is to say, PBE seeks to examine how and which treatments can be improved in everyday clinical practice by flagging up clinically salient issues and developing clinical techniques (Barkham & Mellor-Clark, 2003). For this reason, PBE is concerned with the effectiveness of research findings: it evaluates how well interventions work in real-world settings (Rosqvist et al., 2011). Therefore, although it is not unlikely for RCTs to be used in order to generate practice-informed evidence (Horn & Gassaway, 2007), qualitative methods like the systematic case study are seen as ideal for demonstrating the effectiveness of therapeutic interventions with individual patients (van Hennik, 2020) (Table 4).

PBE’s epistemological approach to ‘evidence’ may be understood through the process of concordant objectivity (Douglas, 2004 ): ‘Instead of seeking to eliminate individual judgment, … [concordant objectivity] checks to see whether the individual judgments of people in fact do agree’ (p. 462). This does not mean that anyone can contribute to the evaluation process, as in procedural objectivity, where the main criterion is following a set quantitative protocol or knowing how to operate a specific research design. Concordant objectivity requires that there is a set of competent observers who are closely familiar with the studied phenomenon (e.g. researchers and practitioners who are familiar with depression from a variety of therapeutic approaches).

Systematic case studies are a good example of PBE ‘in action’: they allow for the examination of the detailed unfolding of events in psychotherapy practice, making them the most pragmatic and practice-oriented form of psychotherapy research (Fishman, 1999 , 2005 ). Furthermore, systematic case studies approach evidence and results through concordant objectivity (Douglas, 2004 ) by involving a team of researchers and rigorous data triangulation processes (McLeod, 2010 ). This means that, although systematic case studies remain focused on particular clinical situations and detailed subjective experiences (similar to classic clinical case studies; see Iwakabe & Gazzola, 2009 ), they still involve a series of validity checks and considerations of how findings from a single systematic case pertain to broader psychotherapy research (Fishman, 2005 ). The typical process of knowledge generation (through employing systematic case studies and concordant objectivity) in PBE is demonstrated in Fig. 3 . The figure exemplifies a bidirectional approach to research and practice, which includes the development of research-supported psychological treatments (through systematic reviews of existing evidence) as well as the perspectives of clinical practitioners in the research process (through the study of local and contingent patient and/or treatment processes) (Teachman et al., 2012 ; Westen, Novotny, & Thompson-Brenner, 2004 ).

figure 3

Typical knowledge generation process in practice-based evidence (PBE)

From a PBE standpoint, external validity is a desirable research condition: it measures the extent to which the impact of interventions applies to real patients and therapists in everyday clinical settings. As such, external validity is not based on the strength of causal relationships between treatment interventions and outcomes (as in internal validity); instead, the use of specific therapeutic techniques and problem-solving decisions are considered to be important for generalising findings onto routine clinical practice (even if the findings are explicated from a single case study; see Aveline, 2005 ). This approach to validity is demonstrated in Fig. 4 .

figure 4

External validity

Since effectiveness research is less focused on limiting the context of the studied phenomenon (indeed, explicating the context is often one of the research aims), there is more potential for confounding factors (e.g. bias and uncontrolled variables), which in turn can reduce the study’s internal validity (Barkham & Mellor-Clark, 2003 ). This is also an important challenge for research appraisal. Douglas ( 2004 ) argues that appraising research in terms of its effectiveness may produce significant disagreements or group illusions, since what might work for some practitioners may not work for others: ‘It cannot guarantee that values are not influencing or supplanting reasoning; the observers may have shared values that cause them to all disregard important aspects of an event’ (Douglas, 2004 , p. 462). Douglas further proposes that an interactive approach to objectivity may be employed as a more complex process in debating the evidential quality of a research study: it requires a discussion among observers and evaluators in the form of peer-review, scientific discourse, as well as research appraisal tools and instruments. While these processes of rigour are also applied in EBP, there appears to be much more space for debate, disagreement and interpretation in PBE’s approach to research evaluation, partly because the evaluation criteria themselves are the subject of methodological debate and are often employed in different ways by researchers (Williams et al., 2019 ). This issue will be addressed more explicitly again in relation to CaSE development (‘Developing purpose-oriented evaluation criteria for systematic case studies’ section).

A third way approach to validity and evidence

The research–practice divide shows us that there may be something significant in establishing complementarity between EBP and PBE rather than treating them as mutually exclusive forms of research (Fishman et al., 2017 ). For one, EBP is not a sufficient condition for delivering research relevant to practice settings (Bower, 2003 ). While RCTs can demonstrate that an intervention works on average in a group, clinicians who are facing individual patients need to answer a different question: how can I make therapy work with this particular case ? (Cartwright & Hardie, 2012 ). Systematic case studies are ideal for filling this gap: they contain descriptions of microprocesses (e.g. patient symptoms, therapeutic relationships, therapist attitudes) in psychotherapy practice that are often overlooked in large-scale RCTs (Iwakabe & Gazzola, 2009 ). In particular, systematic case studies describing the use of specific interventions with less researched psychological conditions (e.g. childhood depression or complex post-traumatic stress disorder) can deepen practitioners’ understanding of effective clinical techniques before the results of large-scale outcome studies are disseminated.

Secondly, establishing a working relationship between systematic case studies and RCTs will contribute towards a more pragmatic understanding of validity in psychotherapy research. Indeed, the very tension and so-called trade-off between internal and external validity is based on the assumption that research methods are designed on an either/or basis: either they provide a sufficiently rigorous study design or they produce findings that can be applied to real-life practice. Jimenez-Buedo and Miller ( 2010 ) call this assumption into question: in their view, if a study is not internally valid, then ‘little, or rather nothing, can be said of the outside world’ (p. 302). In this sense, internal validity may be seen as a pre-requisite for any form of applied research and its external validity, but it need not be constrained to the quantitative approach to causality. For example, Levitt, Motulsky, Wertz, Morrow, and Ponterotto ( 2017 ) argue that what is typically conceptualised as internal validity is, in fact, a much broader construct, involving the assessment of how the research method (whether qualitative or quantitative) is best suited for the research goal, and whether it obtains the relevant conclusions. Similarly, Truijens, Cornelis, Desmet, and De Smet ( 2019 ) suggest that we should think about validity in a broader epistemic sense—not just in terms of psychometric measures, but also in terms of the research design, procedure, goals (research questions), approaches to inquiry (paradigms, epistemological assumptions), etc.

The overarching argument from the research cited above is that all forms of research—qualitative and quantitative—can produce ‘valid evidence’, but the validity itself needs to be assessed against each specific research method and purpose. For example, RCTs are accompanied by a variety of clearly outlined appraisal tools and instruments such as CASP (Critical Appraisal Skills Programme) that are well suited for the assessment of RCT validity and their implications for EBP. Systematic case studies (or case studies more generally) currently have no appraisal tools in any discipline. The next section evaluates whether existing qualitative research appraisal tools are relevant for systematic case studies in psychotherapy and specifies the missing evaluative criteria.

The relevance of existing appraisal tools for qualitative research to systematic case studies in psychotherapy

What is a research appraisal tool?

Currently, there are several research appraisal tools, checklists and frameworks for qualitative studies. It is important to note that tools, checklists and frameworks are not equivalent to one another but actually refer to different approaches to appraising the validity of a research study. As such, it is erroneous to assume that all forms of qualitative appraisal feature the same aims and methods (Hannes et al., 2010 ; Williams et al., 2019 ).

Generally, research assessment falls into two categories: checklists and frameworks . Checklist approaches are often associated with quantitative research, since the focus is on assessing the internal validity of research (i.e. the researcher’s independence from the study). This involves the assessment of bias in sampling, participant recruitment, data collection and analysis. Framework approaches to research appraisal, on the other hand, revolve around traditional qualitative concepts such as transparency, reflexivity, dependability and transferability (Williams et al., 2019 ). Framework approaches to appraisal are often challenging to use because they depend on the reviewer’s familiarisation with and interpretation of the qualitative concepts.

Because of these different approaches, there is some ambiguity in terminology, particularly between research appraisal instruments and research appraisal tools . These terms are often used interchangeably in appraisal literature (Williams et al., 2019 ). In this paper, a research appraisal tool is defined as a method-specific (i.e. it identifies a specific research method or component) form of appraisal that draws from both checklist and framework approaches. Furthermore, a research appraisal tool seeks to inform decision making in EBP or PBE paradigms and provides explicit definitions of the tool’s evaluative framework (thus minimising—but by no means eliminating—the reviewers’ interpretation of the tool). This definition will be applied to CaSE (Table 5 ).

In contrast, research appraisal instruments are generally seen as a broader form of appraisal in the sense that they may evaluate a variety of methods (i.e. they are non-method specific or they do not target a particular research component), and are aimed at checking whether the research findings and/or the study design contain specific elements (e.g. the aims of research, the rationale behind design methodology, participant recruitment strategies, etc.).

There is often an implicit difference in audience between appraisal tools and instruments. Research appraisal instruments are often aimed at researchers who want to assess the strength of their study; however, the process of appraisal may not be made explicit in the study itself (besides mentioning that the tool was used to appraise the study). Research appraisal tools are aimed at researchers who wish to explicitly demonstrate the evidential quality of the study to the readers (which is particularly common in RCTs). All forms of appraisal used in the comparative exercise below are defined as ‘tools’, even though they have different appraisal approaches and aims.

Comparing different qualitative tools

Hannes et al. ( 2010 ) identified CASP (Critical Appraisal Skills Programme-tool), JBI (Joanna Briggs Institute-tool) and ETQS (Evaluation Tool for Qualitative Studies) as the critical appraisal tools most frequently used by qualitative researchers. All three instruments are available online and are free of charge, which means that any researcher or reviewer can readily apply the CASP, JBI or ETQS evaluative frameworks to their research. Furthermore, all three instruments were developed within the context of organisational, institutional or consortium support (Tables 6 , 7 and 8 ).

It is important to note that none of the three tools is specific to systematic case studies or psychotherapy case studies (which would include not only systematic but also experimental and clinical cases). This means that using CASP, JBI or ETQS for case study appraisal may come at the cost of overlooking elements and components specific to the systematic case study method.

Based on Hannes et al.’s ( 2010 ) comparative study of qualitative appraisal tools, as well as the different evaluation criteria explicated in the CASP, JBI and ETQS evaluative frameworks, I assessed how well each of the three tools is attuned to the methodological , clinical and theoretical aspects of systematic case studies in psychotherapy. The latter components were based on case study guidelines featured in the journal of Pragmatic Case Studies in Psychotherapy as well as components commonly used by published systematic case studies across a variety of other psychotherapy journals (e.g. Psychotherapy Research , Research In Psychotherapy : Psychopathology Process And Outcome , etc.) (see Table 9 for detailed descriptions of each component).

The evaluation criteria for each tool in Table 9 follow Joanna Briggs Institute (JBI) ( 2017a , 2017b ); Critical Appraisal Skills Programme (CASP) ( 2018 ); and the ETQS Questionnaire (first published in 2004 but revised continuously since). Table 10 demonstrates how each tool should be used (i.e. recommended reviewer responses to checklists and questionnaires).

Using CASP, JBI and ETQS for systematic case study appraisal

Although JBI, CASP and ETQS were all developed to appraise qualitative research, it is evident from the above comparison that there are significant differences between the three tools. For example, JBI and ETQS are well suited to assess researcher’s interpretations (Hannes et al. ( 2010 ) defined this as interpretive validity , a subcategory of internal validity ): the researcher’s ability to portray, understand and reflect on the research participants’ experiences, thoughts, viewpoints and intentions. JBI has an explicit requirement for participant voices to be clearly represented, whereas ETQS involves a set of questions about key characteristics of events, persons, times and settings that are relevant to the study. Furthermore, both JBI and ETQS seek to assess the researcher’s influence on the research, with ETQS particularly focusing on the evaluation of reflexivity (the researcher’s personal influence on the interpretation and collection of data). These elements are absent or addressed to a lesser extent in the CASP tool.

The appraisal of transferability of findings (what this paper previously referred to as external validity ) is addressed only by ETQS and CASP. Both tools have detailed questions about the value of research to practice and policy as well as its transferability to other populations and settings. Methodological research aspects are also extensively addressed by CASP and ETQS, but less so by JBI (which relies predominantly on congruity between research methodology and objectives without any particular assessment criteria for other data sources and/or data collection methods). Finally, the evaluation of theoretical aspects (referred to by Hannes et al. ( 2010 ) as theoretical validity ) is addressed only by JBI and ETQS; there are no assessment criteria for theoretical framework in CASP.

Given these differences, it is unsurprising that CASP, JBI and ETQS have limited relevance for systematic case studies in psychotherapy. First, it is evident that none of the three tools has specific evaluative criteria for the clinical component of systematic case studies. Although JBI and ETQS feature some relevant questions about participants and their context, the conceptualisation of patients (and/or clients) in psychotherapy involves other kinds of data elements (e.g. diagnostic tools and questionnaires as well as therapist observations) that go beyond the usual participant data. Furthermore, much of the clinical data is intertwined with the therapist’s clinical decision-making and thinking style (Kaluzeviciute & Willemsen, 2020 ). As such, there is a need to appraise patient data and therapist interpretations not only on a separate basis, but also as two forms of knowledge that are deeply intertwined in the case narrative.

Secondly, since systematic case studies involve various forms of data, there is a need to appraise how these data converge (or how different methods complement one another in the case context) and how they can be transferred or applied in broader psychotherapy research and practice. These systematic case study components are addressed to a degree by CASP (which is particularly attentive to methodological components) and ETQS (which has particularly specific criteria for research transferability onto policy and practice). These components are either not addressed or addressed less explicitly by JBI. Overall, none of the tools is attuned to all methodological, theoretical and clinical components of the systematic case study. Specifically, there are no clear evaluation criteria for the description of research teams (i.e. different data analysts and/or clinicians); the suitability of the systematic case study method; the description of the patient’s clinical assessment; the use of other methods or data sources; and the general data about therapeutic progress.

Finally, there is something to be said about the recommended reviewer responses (Table 10 ). Systematic case studies can vary significantly in their formulation and purpose. The methodological, theoretical and clinical components outlined in Table 9 follow guidelines made by case study journals; however, these are recommendations, not ‘set in stone’ case templates. For this reason, the straightforward checklist approaches adopted by JBI and CASP may be difficult to use for case study researchers and those reviewing case study research. The ETQS open-ended questionnaire approach suggested by Long and Godfrey ( 2004 ) enables a comprehensive, detailed and purpose-oriented assessment, suitable for the evaluation of systematic case studies. That said, there remains a challenge of ensuring that there is less space for the interpretation of evaluative criteria (Williams et al., 2019 ). The combination of checklist and framework approaches would, therefore, provide a more stable appraisal process across different reviewers.

Developing purpose-oriented evaluation criteria for systematic case studies

The starting point in developing evaluation criteria for Case Study Evaluation-tool (CaSE) is addressing the significance of pluralism in systematic case studies. Unlike RCTs, systematic case studies are pluralistic in the sense that they employ divergent practices in methodological procedures ( research process ), and they may include significantly different research aims and purpose ( the end - goal ) (Kaluzeviciute & Willemsen, 2020 ). While some systematic case studies will have an explicit intention to conceptualise and situate a single patient’s experiences and symptoms within a broader clinical population, others will focus on the exploration of phenomena as they emerge from the data. It is therefore important that CaSE is positioned within a purpose - oriented evaluative framework , suitable for the assessment of what each systematic case is good for (rather than determining an absolute measure of ‘good’ and ‘bad’ systematic case studies). This approach to evidence and appraisal is in line with the PBE paradigm. PBE emphasises the study of clinical complexities and variations through local and contingent settings (e.g. single case studies) and promotes methodological pluralism (Barkham & Mellor-Clark, 2003 ).

CaSE checklist for essential components in systematic case studies

In order to conceptualise purpose-oriented appraisal questions, we must first look at what unites and differentiates systematic case studies in psychotherapy. The commonly used theoretical, clinical and methodological systematic case study components were identified earlier in Table 9 . These components will be seen as essential and common to most systematic case studies in the CaSE evaluative criteria. If these essential components are missing in a systematic case study, then it may be implied that there is a lack of information, which in turn diminishes the evidential quality of the case. As such, the checklist serves as a tool for checking whether a case study is, indeed, systematic (as opposed to experimental or clinical; see Iwakabe & Gazzola, 2009 for further differentiation between methodologically distinct case study types) and should be used before the CaSE Purpose-based Evaluative Framework for Systematic Case Studies (which is designed for the appraisal of different purposes common to systematic case studies).

As noted earlier in the paper, checklist approaches to appraisal are useful when evaluating the presence or absence of specific information in a research study. This approach can be used to appraise essential components in systematic case studies, as shown below. From a pragmatic point of view (Levitt et al., 2017 ; Truijens et al., 2019 ), the CaSE Checklist for Essential Components in Systematic Case Studies can be seen as a way to ensure the internal validity of a systematic case study: the reviewer is assessing whether sufficient information is provided about the case design, procedure, approaches to inquiry, etc., and whether they are relevant to the researcher’s objectives and conclusions (Table 11 ).

CaSE purpose-based evaluative framework for systematic case studies

Identifying differences between systematic case studies means identifying the different purposes systematic case studies have in psychotherapy. Based on the earlier work by social scientist Yin ( 1984 , 1993 ), we can differentiate between exploratory (hypothesis generating, indicating a beginning phase of research), descriptive (particularising case data as it emerges) and representative (a case that is typical of a broader clinical population, referred to as the ‘explanatory case’ by Yin) cases.

Another increasingly significant strand of systematic case studies is transferable (aggregating and transferring case study findings) cases. These cases are based on the process of meta-synthesis (Iwakabe & Gazzola, 2009 ): by examining processes and outcomes in many different case studies dealing with similar clinical issues, researchers can identify common themes and inferences. In this way, single case studies that have relatively little impact on clinical practice, research or health care policy (in the sense that they capture psychotherapy processes rather than produce generalisable claims as in Yin’s representative case studies) can contribute to the generation of a wider knowledge base in psychotherapy (Iwakabe, 2003 , 2005 ). However, there is an ongoing issue of assessing the evidential quality of such transferable cases. According to Duncan and Sparks ( 2020 ), although meta-synthesis and meta-analysis are considered to be ‘gold standard’ for assessing interventions across disparate studies in psychotherapy, they often contain case studies with significant research limitations, inappropriate interpretations and insufficient information. It is therefore important to have a research appraisal process in place for selecting transferable case studies.

Two other types of systematic case study research include: critical (testing and/or confirming existing theories) cases, which are described as an excellent method for falsifying existing theoretical concepts and testing whether therapeutic interventions work in practice with concrete patients (Kaluzeviciute, 2021 ), and unique (going beyond the ‘typical’ cases and demonstrating deviations) cases (Merriam, 1998 ). These two systematic case study types are often seen as less valuable for psychotherapy research given that unique/falsificatory findings are difficult to generalise. But it is clear that practitioners and researchers in our field seek out context-specific data, as well as detailed information on the effectiveness of therapeutic techniques in single cases (Stiles, 2007 ) (Table 12 ).

Each purpose-based case study contributes to PBE in different ways. Representative cases provide qualitatively rich, in-depth data about a clinical phenomenon within its particular context. This offers other clinicians and researchers access to a ‘closed world’ (Mackrill & Iwakabe, 2013 ) containing a wide range of attributes about a conceptual type (e.g. clinical condition or therapeutic technique). Descriptive cases generally seek to demonstrate a realistic snapshot of therapeutic processes, including complex dynamics in therapeutic relationships, and instances of therapeutic failure (Maggio, Molgora, & Oasi, 2019 ). Data in descriptive cases should be presented in a transparent manner (e.g. if there are issues in standardising patient responses to a self-report questionnaire, this should be made explicit). Descriptive cases are commonly used in psychotherapy training and supervision. Unique cases are relevant for both clinicians and researchers: they often contain novel treatment approaches and/or introduce new diagnostic considerations about patients who deviate from the clinical population. Critical cases demonstrate the application of psychological theories ‘in action’ with particular patients; as such, they are relevant to clinicians, researchers and policymakers (Mackrill & Iwakabe, 2013 ). Exploratory cases bring new insight and observations into clinical practice and research. This is particularly useful when comparing (or introducing) different clinical approaches and techniques (Trad & Raine, 1994 ). Findings from exploratory cases often include future research suggestions. Finally, transferable cases provide one solution to the generalisation issue in psychotherapy research through the previously mentioned process of meta-synthesis. Grouped together, transferable cases can contribute to theory building and development, as well as higher levels of abstraction about a chosen area of psychotherapy research (Iwakabe & Gazzola, 2009 ).

With this plurality in mind, it is evident that CaSE has a challenging task of appraising research components that are distinct across six different types of purpose-based systematic case studies. The purpose-specific evaluative criteria in Table 13 were developed in close consultation with epistemological literature associated with each type of case study, including: Yin’s ( 1984 , 1993 ) work on establishing the typicality of representative cases; Duncan and Sparks’ ( 2020 ) and Iwakabe and Gazzola’s ( 2009 ) case selection criteria for meta-synthesis and meta-analysis; Stake’s ( 1995 , 2010 ) research on particularising case narratives; Merriam’s ( 1998 ) guidelines on distinctive attributes of unique case studies; Kennedy’s ( 1979 ) epistemological rules for generalising from case studies; Mahrer’s ( 1988 ) discovery oriented case study approach; and Edelson’s ( 1986 ) guidelines for rigorous hypothesis generation in case studies.

Research on epistemic issues in case writing (Kaluzeviciute, 2021 ) and different forms of scientific thinking in psychoanalytic case studies (Kaluzeviciute & Willemsen, 2020 ) was also utilised to identify case study components that would help improve therapist clinical decision-making and reflexivity.

For the analysis of more complex research components (e.g. the degree of therapist reflexivity), the purpose-based evaluation will utilise a framework approach, in line with comprehensive and open-ended reviewer responses in ETQS (Evaluation Tool for Qualitative Studies) (Long & Godfrey, 2004 ) (Table 13 ). That is to say, the evaluation here is not so much about the presence or absence of information (as in the checklist approach) but the degree to which the information helps the case with its unique purpose, whether it is generalisability or typicality. Therefore, although the purpose-oriented evaluation criteria below encompasses comprehensive questions at a considerable level of generality (in the sense that not all components may be required or relevant for each case study), it nevertheless seeks to engage with each type of purpose-based systematic case study on an individual basis (attending to research or clinical components that are unique to each of type of case study).

It is important to note that, as this is an introductory paper to CaSE, the evaluative framework is still preliminary: it involves some of the core questions that pertain to the nature of all six purpose-based systematic case studies. However, there is a need to develop a more comprehensive and detailed CaSE appraisal framework for each purpose-based systematic case study in the future.

Using CaSE on published systematic case studies in psychotherapy: an example

To illustrate the use of the CaSE Purpose-based Evaluative Framework for Systematic Case Studies , a case study by Lunn, Daniel, and Poulsen ( 2016 ) titled ‘Psychoanalytic Psychotherapy With a Client With Bulimia Nervosa’ was selected from the Single Case Archive (SCA) and analysed in Table 14 . Based on the core questions associated with the six purpose-based systematic case study types in Table 13 (1 to 6), the purpose of Lunn et al.’s ( 2016 ) case was identified as critical (testing an existing theoretical suggestion).

Sometimes, case study authors will explicitly define the purpose of their case in the form of research objectives (as was the case in Lunn et al.’s study); this helps to identify which purpose-based questions are most relevant for the evaluation of the case. However, some case studies will require comprehensive analysis in order to identify their purpose (or multiple purposes). As such, it is recommended that CaSE reviewers first assess the degree and manner in which information about the studied phenomenon, patient data, clinical discourse and research are presented before deciding on the case purpose.

Although each purpose-based systematic case study will contribute to different strands of psychotherapy (theory, practice, training, etc.) and focus on different forms of data (e.g. theory testing vs extensive clinical descriptions), the overarching aim across all systematic case studies in psychotherapy is to study local and contingent processes, such as variations in patient symptoms and complexities of the clinical setting. The comprehensive framework approach will therefore allow reviewers to assess the degree of external validity in systematic case studies (Barkham & Mellor-Clark, 2003 ). Furthermore, assessing the case against its purpose will let reviewers determine whether the case achieves its set goals (research objectives and aims). The example below shows that Lunn et al.’s ( 2016 ) case is successful in functioning as a critical case as the authors provide relevant, high-quality information about their tested therapeutic conditions.

Finally, it is also possible to use CaSE to gather specific type of systematic case studies for one’s research, practice, training, etc. For example, a CaSE reviewer might want to identify as many descriptive case studies focusing on negative therapeutic relationships as possible for their clinical supervision. The reviewer will therefore only need to refer to CaSE questions in Table 13 (2) on descriptive cases. If the reviewed cases do not align with the questions in Table 13 (2), then they are not suitable for the CaSE reviewer who is looking for “know-how” knowledge and detailed clinical narratives.

Concluding comments

This paper introduces a novel Case Study Evaluation-tool (CaSE) for systematic case studies in psychotherapy. Unlike most appraisal tools in EBP, CaSE is positioned within purpose-oriented evaluation criteria, in line with the PBE paradigm. CaSE enables reviewers to assess what each systematic case is good for (rather than determining an absolute measure of ‘good’ and ‘bad’ systematic case studies). In order to explicate a purpose-based evaluative framework, six different systematic case study purposes in psychotherapy have been identified: representative cases (purpose: typicality), descriptive cases (purpose: particularity), unique cases (purpose: deviation), critical cases (purpose: falsification/confirmation), exploratory cases (purpose: hypothesis generation) and transferable cases (purpose: generalisability). Each case type was linked with an existing epistemological network, such as Iwakabe and Gazzola’s ( 2009 ) work on case selection criteria for meta-synthesis. The framework approach includes core questions specific to each purpose-based case study (Table 13 (1–6)). The aim is to assess the external validity and effectiveness of each case study against its set out research objectives and aims. Reviewers are required to perform a comprehensive and open-ended data analysis, as shown in the example in Table 14 .

Along with CaSE Purpose - based Evaluative Framework (Table 13 ), the paper also developed CaSE Checklist for Essential Components in Systematic Case Studies (Table 12 ). The checklist approach is meant to aid reviewers in assessing the presence or absence of essential case study components, such as the rationale behind choosing the case study method and description of patient’s history. If essential components are missing in a systematic case study, then it may be implied that there is a lack of information, which in turn diminishes the evidential quality of the case. Following broader definitions of validity set out by Levitt et al. ( 2017 ) and Truijens et al. ( 2019 ), it could be argued that the checklist approach allows for the assessment of (non-quantitative) internal validity in systematic case studies: does the researcher provide sufficient information about the case study design, rationale, research objectives, epistemological/philosophical paradigms, assessment procedures, data analysis, etc., to account for their research conclusions?
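The checklist logic described above amounts to flagging essential components that a case report fails to mention. As a minimal sketch (the component names below are examples drawn from the text, not the full Table 12):

```python
# Illustrative sketch of the checklist approach: flag essential case study
# components that are absent from a reported case. A real review would use
# the full CaSE Checklist (Table 12); these names are examples only.
ESSENTIAL_COMPONENTS = [
    "rationale for case study method",
    "patient history",
    "research objectives",
    "epistemological/philosophical paradigm",
    "assessment procedures",
    "data analysis",
]

def missing_components(reported):
    """Return the essential components absent from a reported case study."""
    return [c for c in ESSENTIAL_COMPONENTS if c not in reported]

# Example: a hypothetical case report covering only three components.
case_report = {"patient history", "research objectives", "data analysis"}
print(missing_components(case_report))
```

Any non-empty result would signal a lack of information that, per the argument above, diminishes the evidential quality of the case.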

It is important to note that this paper is set as an introduction to CaSE; by extension, it is also set as an introduction to research evaluation and appraisal processes for case study researchers in psychotherapy. As such, it was important to provide a step-by-step epistemological rationale and process behind the development of CaSE evaluative framework and checklist. However, this also means that further research needs to be conducted in order to develop the tool. While CaSE Purpose - based Evaluative Framework involves some of the core questions that pertain to the nature of all six purpose-based systematic case studies, there is a need to develop individual and comprehensive CaSE evaluative frameworks for each of the purpose-based systematic case studies in the future. This line of research is likely to broaden CaSE’s target audience: clinicians interested in reviewing highly particular clinical narratives will attend to descriptive case study appraisal frameworks; researchers working with qualitative meta-synthesis will find transferable case study appraisal frameworks most relevant to their work; while teachers on psychotherapy and counselling modules may seek out unique case study appraisal frameworks.

Furthermore, although CaSE Checklist for Essential Components in Systematic Case Studies and CaSE Purpose - based Evaluative Framework for Systematic Case Studies are presented in a comprehensive, detailed manner, with definitions and examples that would enable reviewers to have a good grasp of the appraisal process, it is likely that different reviewers may have different interpretations or ideas of what might be ‘substantial’ case study data. This, in part, is due to the methodologically pluralistic nature of the case study genre itself; what is relevant for one case study may not be relevant for another, and vice-versa. To aid with the review process, future research on CaSE should include a comprehensive paper on using the tool. This paper should involve evaluation examples with all six purpose-based systematic case studies, as well as a ‘search’ exercise (using CaSE to assess the relevance of case studies for one’s research, practice, training, etc.).

Finally, further research needs to be developed on how (and, indeed, whether) systematic case studies should be reviewed with specific ‘grades’ or ‘assessments’ that go beyond the qualitative examination in Table 14 . This would be particularly significant for the processes of qualitative meta-synthesis and meta-analysis. These research developments will further enhance the CaSE tool and, in turn, enable psychotherapy researchers to appraise their findings within clear, purpose-based evaluative criteria appropriate for systematic case studies.

Availability of data and materials

Not applicable.

Almond, R. (2004). “I Can Do It (All) Myself”: Clinical technique with defensive narcissistic self–sufficiency. Psychoanalytic Psychology , 21 (3), 371–384. https://doi.org/10.1037/0736-9735.21.3.371 .


American Psychological Association (2010). Evidence-based case study. Retrieved from https://www.apa.org/pubs/journals/pst/evidence-based-case-study


Aveline, M. (2005). Clinical case studies: Their place in evidence–based practice. Psychodynamic Practice: Individuals, Groups and Organisations , 11 (2), 133–152. https://doi.org/10.1080/14753630500108174 .

Barkham, M., & Mellor-Clark, J. (2003). Bridging evidence-based practice and practice-based evidence: Developing a rigorous and relevant knowledge for the psychological therapies. Clinical Psychology & Psychotherapy , 10 (6), 319–327. https://doi.org/10.1002/cpp.379 .

Berg, H. (2019). How does evidence–based practice in psychology work? – As an ethical demarcation. Philosophical Psychology , 32 (6), 853–873. https://doi.org/10.1080/09515089.2019.1632424 .

Berg, H., & Slaattelid, R. (2017). Facts and values in psychotherapy—A critique of the empirical reduction of psychotherapy within evidence-based practice. Journal of Evaluation in Clinical Practice , 23 (5), 1075–1080. https://doi.org/10.1111/jep.12739 .


Bower, P. (2003). Efficacy in evidence-based practice. Clinical Psychology and Psychotherapy , 10 (6), 328–336. https://doi.org/10.1002/cpp.380 .

Cartwright, N., & Hardie, J. (2012). What are RCTs good for? In N. Cartwright, & J. Hardie (Eds.), Evidence–based policy: A practical guide to doing it better . Oxford University Press. https://doi.org/10.1093/acprof:osobl/9780199841608.003.0008 .

Critical Appraisal Skills Programme (CASP). (2018). Qualitative checklist. Retrieved from https://casp-uk.net/wp-content/uploads/2018/01/CASP-Qualitative-Checklist-2018.pdf

Davison, G. C., & Lazarus, A. A. (2007). Clinical case studies are important in the science and practice of psychotherapy. In S. O. Lilienfeld, & W. T. O’Donohue (Eds.), The great ideas of clinical science: 17 principles that every mental health professional should understand , (pp. 149–162). Routledge/Taylor & Francis Group.

Douglas, H. (2004). The irreducible complexity of objectivity. Synthese , 138 (3), 453–473. https://doi.org/10.1023/B:SYNT.0000016451.18182.91 .

Duncan, B. L., & Sparks, J. A. (2020). When meta–analysis misleads: A critical case study of a meta–analysis of client feedback. Psychological Services , 17 (4), 487–496. https://doi.org/10.1037/ser0000398 .

Edelson, M. (1986). Causal explanation in science and in psychoanalysis—Implications for writing a case study. Psychoanalytic Study of Child , 41 (1), 89–127. https://doi.org/10.1080/00797308.1986.11823452 .

Edwards, D. J. A. (2013). Collaborative versus adversarial stances in scientific discourse: Implications for the role of systematic case studies in the development of evidence–based practice in psychotherapy. Pragmatic Case Studies in Psychotherapy , 3 (1), 6–34.

Edwards, D. J. A., Dattilio, F. M., & Bromley, D. B. (2004). Developing evidence–based practice: The role of case–based research. Professional Psychology: Research and Practice , 35 (6), 589–597. https://doi.org/10.1037/0735-7028.35.6.589 .

Erickson, F. (2012). Comments on causality in qualitative inquiry. Qualitative Inquiry , 18 (8), 686–688. https://doi.org/10.1177/1077800412454834 .

Fishman, D. B. (1999). The case for pragmatic psychology . New York University Press.

Fishman, D. B. (2005). Editor’s introduction to PCSP––From single case to database: A new method for enhancing psychotherapy practice. Pragmatic Case Studies in Psychotherapy , 1 (1), 1–50.

Fishman, D. B., Messer, S. B., Edwards, D. J. A., & Dattilio, F. M. (Eds.) (2017). Case studies within psychotherapy trials: Integrating qualitative and quantitative methods . Oxford University Press.

Fox, N. J. (2003). Practice–based evidence: Towards collaborative and transgressive research. Sociology , 37 (1), 81–102. https://doi.org/10.1177/0038038503037001388 .

Gabbay, J., & le May, A. (2011). Practice–based evidence for healthcare: Clinical mindlines . Routledge.

Green, L. W., & Latchford, G. (2012). Maximising the benefits of psychotherapy: A practice–based evidence approach . Wiley–Blackwell. https://doi.org/10.1002/9781119967590 .

Hannes, K., Lockwood, C., & Pearson, A. (2010). A comparative analysis of three online appraisal instruments’ ability to assess validity in qualitative research. Qualitative Health Research , 20 (12), 1736–1743. https://doi.org/10.1177/1049732310378656 .

Hartling, L., Chisholm, A., Thomson, D., & Dryden, D. M. (2012). A descriptive analysis of overviews of reviews published between 2000 and 2011. PLoS One , 7 (11), e49667. https://doi.org/10.1371/journal.pone.0049667 .


Hill, A., & Spittlehouse, C. (2003). What is critical appraisal? Evidence–Based Medicine , 3 (2), 1–8.

Hilliard, R. B. (1993). Single–case methodology in psychotherapy process and outcome research. Journal of Consulting and Clinical Psychology , 61 (3), 373–380. https://doi.org/10.1037/0022-006X.61.3.373 .

Horn, S. D., & Gassaway, J. (2007). Practice–based evidence study design for comparative effectiveness research. Medical Care , 45 (10), S50–S57. https://doi.org/10.1097/MLR.0b013e318070c07b .

Iwakabe, S. (2003, May). Common change events in stages of psychotherapy: A qualitative analysis of case reports. In Paper presented at the 19th Annual Conference of the Society for Exploration of Psychotherapy Integration, New York .

Iwakabe, S. (2005). Pragmatic meta–analysis of case studies. Annual Progress of Family Psychology , 23 , 154–169.

Iwakabe, S., & Gazzola, N. (2009). From single–case studies to practice–based knowledge: Aggregating and synthesizing case studies. Psychotherapy Research , 19 (4-5), 601–611. https://doi.org/10.1080/10503300802688494 .

Jimenez-Buedo, M., & Miller, L. (2010). Why a Trade–Off? The relationship between the external and internal validity of experiments. THEORIA: An International Journal for Theory History and Foundations of Science , 25 (3), 301–321.

Joanna Briggs Institute (JBI). (2017a). Critical appraisal checklist for qualitative research. Retrieved from https://joannabriggs.org/sites/default/files/2019-05/JBI_Critical_Appraisal-Checklist_for_Qualitative_Research2017_0.pdf

Joanna Briggs Institute (JBI). (2017b). Checklist for case reports. Retrieved from https://joannabriggs.org/sites/default/files/2019-05/JBI_Critical_Appraisal-Checklist_for_Case_Reports2017_0.pdf

Kaluzeviciute, G. (2021). Validity, Evidence and Appraisal in Systematic Psychotherapy Case Studies . Paper presented at the Research Forum of Department of Psychosocial and Psychoanalytic Studies, University of Essex, Colchester, UK. https://doi.org/10.13140/RG.2.2.33502.15683  

Kaluzeviciute, G., & Willemsen, J. (2020). Scientific thinking styles: The different ways of thinking in psychoanalytic case studies. The International Journal of Psychoanalysis , 101 (5), 900–922. https://doi.org/10.1080/00207578.2020.1796491 .

Katrak, P., Bialocerkowski, A. E., Massy-Westropp, N., Kumar, S. V. S., & Grimmer, K. (2004). A systematic review of the content of critical appraisal tools. BMC Medical Research Methodology , 4 (1), 22. https://doi.org/10.1186/1471-2288-4-22 .

Kennedy, M. M. (1979). Generalising from single case studies. Evaluation Quarterly , 3 (4), 661–678. https://doi.org/10.1177/0193841X7900300409 .

Laska, K. M., Gurman, A. S., & Wampold, B. E. (2014). Expanding the lens of evidence–based practice in psychotherapy: A common factors perspective. Psychotherapy , 51 (4), 467–481. https://doi.org/10.1037/a0034332 .

Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., & Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: Promoting methodological integrity. Qualitative Psychology , 4 (1), 2–22. https://doi.org/10.1037/qup0000082 .

Lilienfeld, S. O., Ritschel, L. A., Lynn, S. J., Cautin, R. L., & Latzman, R. D. (2013). Why many clinical psychologists are resistant to evidence–based practice: root causes and constructive remedies. Clinical Psychology Review , 33 (7), 883–900. https://doi.org/10.1016/j.cpr.2012.09.008 .

Long, A. F., & Godfrey, M. (2004). An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology , 7 (2), 181–196. https://doi.org/10.1080/1364557032000045302 .

Longhofer, J., Floersch, J., & Hartmann, E. A. (2017). Case for the case study: How and why they matter. Clinical Social Work Journal , 45 (3), 189–200. https://doi.org/10.1007/s10615-017-0631-8 .

Lunn, S., Daniel, S. I. F., & Poulsen, S. (2016). Psychoanalytic psychotherapy with a client with bulimia nervosa. Psychotherapy , 53 (2), 206–215. https://doi.org/10.1037/pst0000052 .

Mackrill, T., & Iwakabe, S. (2013). Making a case for case studies in psychotherapy training: A small step towards establishing an empirical basis for psychotherapy training. Counselling Psychotherapy Quarterly , 26 (3–4), 250–266. https://doi.org/10.1080/09515070.2013.832148 .

Maggio, S., Molgora, S., & Oasi, O. (2019). Analyzing psychotherapeutic failures: A research on the variables involved in the treatment with an individual setting of 29 cases. Frontiers in Psychology , 10 , 1250. https://doi.org/10.3389/fpsyg.2019.01250 .

Mahrer, A. R. (1988). Discovery–oriented psychotherapy research: Rationale, aims, and methods. American Psychologist , 43 (9), 694–702. https://doi.org/10.1037/0003-066X.43.9.694 .

Margison, F. B., et al. (2000). Measurement and psychotherapy: Evidence–based practice and practice–based evidence. British Journal of Psychiatry , 177 (2), 123–130. https://doi.org/10.1192/bjp.177.2.123 .

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher , 33 (2), 3–11. https://doi.org/10.3102/0013189X033002003 .

McLeod, J. (2002). Case studies and practitioner research: Building knowledge through systematic inquiry into individual cases. Counselling and Psychotherapy Research: Linking research with practice , 2 (4), 264–268. https://doi.org/10.1080/14733140212331384755 .

McLeod, J. (2010). Case study research in counselling and psychotherapy . SAGE Publications. https://doi.org/10.4135/9781446287897 .

McLeod, J., & Elliott, R. (2011). Systematic case study research: A practice–oriented introduction to building an evidence base for counselling and psychotherapy. Counselling and Psychotherapy Research , 11 (1), 1–10. https://doi.org/10.1080/14733145.2011.548954 .

Meganck, R., Inslegers, R., Krivzov, J., & Notaerts, L. (2017). Beyond clinical case studies in psychoanalysis: A review of psychoanalytic empirical single case studies published in ISI–ranked journals. Frontiers in Psychology , 8 , 1749. https://doi.org/10.3389/fpsyg.2017.01749 .

Merriam, S. B. (1998). Qualitative research and case study applications in education . Jossey–Bass Publishers.

Michels, R. (2000). The case history. Journal of the American Psychoanalytic Association , 48 (2), 355–375. https://doi.org/10.1177/00030651000480021201 .

Midgley, N. (2006). Re–reading “Little Hans”: Freud’s case study and the question of competing paradigms in psychoanalysis. Journal of the American Psychoanalytic Association , 54 (2), 537–559. https://doi.org/10.1177/00030651060540021601 .

Rosqvist, J., Thomas, J. C., & Truax, P. (2011). Effectiveness versus efficacy studies. In J. C. Thomas, & M. Hersen (Eds.), Understanding research in clinical and counseling psychology , (pp. 319–354). Routledge/Taylor & Francis Group.

Sackett, D. L., Rosenberg, W. M., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: what it is and what it isn’t. BMJ , 312 (7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71 .

Stake, R. E. (1995). The art of case study research . SAGE Publications.

Stake, R. E. (2010). Qualitative research: Studying how things work . The Guilford Press.

Stewart, R. E., & Chambless, D. L. (2007). Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology , 63 (3), 267–281. https://doi.org/10.1002/jclp.20347 .

Stiles, W. B. (2007). Theory–building case studies of counselling and psychotherapy. Counselling and Psychotherapy Research , 7 (2), 122–127. https://doi.org/10.1080/14733140701356742 .

Teachman, B. A., Drabick, D. A., Hershenberg, R., Vivian, D., Wolfe, B. E., & Goldfried, M. R. (2012). Bridging the gap between clinical research and clinical practice: introduction to the special section. Psychotherapy , 49 (2), 97–100. https://doi.org/10.1037/a0027346 .

Thorne, S., Jensen, L., Kearney, M. H., Noblit, G., & Sandelowski, M. (2004). Qualitative metasynthesis: Reflections on methodological orientation and ideological agenda. Qualitative Health Research , 14 (10), 1342–1365. https://doi.org/10.1177/1049732304269888 .

Timulak, L. (2009). Meta–analysis of qualitative studies: A tool for reviewing qualitative research findings in psychotherapy. Psychotherapy Research , 19 (4–5), 591–600. https://doi.org/10.1080/10503300802477989 .

Trad, P. V., & Raine, M. J. (1994). A prospective interpretation of unconscious processes during psychoanalytic psychotherapy. Psychoanalytic Psychology , 11 (1), 77–100. https://doi.org/10.1037/h0079522 .

Truijens, F., Cornelis, S., Desmet, M., & De Smet, M. (2019). Validity beyond measurement: Why psychometric validity is insufficient for valid psychotherapy research. Frontiers in Psychology , 10 . https://doi.org/10.3389/fpsyg.2019.00532 .

Tuckett, D. (Ed.) (2008). The new library of psychoanalysis. Psychoanalysis comparable and incomparable: The evolution of a method to describe and compare psychoanalytic approaches . Routledge/Taylor & Francis Group. https://doi.org/10.4324/9780203932551 .

van Hennik, R. (2020). Practice based evidence based practice, part II: Navigating complexity and validity from within. Journal of Family Therapy , 43 (1), 27–45. https://doi.org/10.1111/1467-6427.12291 .

Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin , 130 (4), 631–663. https://doi.org/10.1037/0033-2909.130.4.631 .

Widdowson, M. (2011). Case study research methodology. International Journal of Transactional Analysis Research & Practice , 2 (1). https://doi.org/10.29044/v2i1p25 .

Willemsen, J., Della Rosa, E., & Kegerreis, S. (2017). Clinical case studies in psychoanalytic and psychodynamic treatment. Frontiers in Psychology , 8 (108). https://doi.org/10.3389/fpsyg.2017.00108 .

Williams, V., Boylan, A., & Nunan, D. (2019). Critical appraisal of qualitative research: Necessity, partialities and the issue of bias. BMJ Evidence–Based Medicine . https://doi.org/10.1136/bmjebm-2018-111132 .

Yin, R. K. (1984). Case study research: Design and methods . SAGE Publications.

Yin, R. K. (1993). Applications of case study research . SAGE Publications.


Acknowledgments

I would like to thank Prof Jochem Willemsen (Faculty of Psychology and Educational Sciences, Université catholique de Louvain-la-Neuve), Prof Wayne Martin (School of Philosophy and Art History, University of Essex), Dr Femke Truijens (Institute of Psychology, Erasmus University Rotterdam) and the reviewers of Psicologia: Reflexão e Crítica / Psychology: Research and Review for their feedback, insight and contributions to the manuscript.

Funding

Arts and Humanities Research Council (AHRC) and Consortium for Humanities and the Arts South-East England (CHASE) Doctoral Training Partnership, Award Number [AH/L50 3861/1].

Author information

Authors and affiliations

Department of Psychosocial and Psychoanalytic Studies, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK

Greta Kaluzeviciute


Contributions

GK is the sole author of the manuscript. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Greta Kaluzeviciute .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Kaluzeviciute, G. Appraising psychotherapy case studies in practice-based evidence: introducing Case Study Evaluation-tool (CaSE). Psicol. Refl. Crít. 34 , 9 (2021). https://doi.org/10.1186/s41155-021-00175-y


Received : 12 January 2021

Accepted : 09 March 2021

Published : 19 March 2021

DOI : https://doi.org/10.1186/s41155-021-00175-y


Keywords

  • Systematic case studies
  • Psychotherapy research
  • Research appraisal tool
  • Evidence-based practice
  • Practice-based evidence
  • Research validity


Guide to case studies

What is a case study?

A case study is an in-depth, focused study of a person, group, or situation, examined over time within its real-life context.

There are different types of case study:

  • Illustrative case studies describe an unfamiliar situation in order to help people understand it.
  • Critical instance case studies focus on a unique case, without a generalised purpose.
  • Exploratory case studies are preliminary projects to help guide a future, larger-scale project. They aim to identify research questions and possible research approaches.

We are often looking to develop patient stories as case studies and these will use qualitative methods such as interviews to find specific details and descriptions of how your subject is affected.

Patient stories are illustrative or critical instance case studies. For example, an illustrative case study might focus on a patient with an eating disorder to provide a subjective view to better help trainee nutritionists understand the illness.  A critical instance case study might focus on a patient with a very rare or uniquely complex condition or how a single patient is affected by an injury.

How do you do a case study?

1. Get prepared!

  • Be very clear about the purpose of the case study: why you are doing it and what it will be used for.
  • Think about the questions you want to answer. What are your research or evaluation questions?
  • Determine what kind of case study will best suit your needs: illustrative, critical instance or exploratory.
  • Define the subject of study – is it an individual, a small group of people, or a specific situation?
  • Determine if you need ethical approval to conduct the case study – you may be asked to prove that it will do no harm to its participant(s).

2. Get designing!

  • Finalise your research or evaluation questions – i.e. what you want to know at the end of the study. Limit these to a manageable number – no more than 4 or 5.
  • Think about where you will find the information you need to answer your questions. Interviewing research subjects and/or observing them will likely be the central methods of your case study, but do you need additional data sources as well? For example, desk research or evidence/literature reviewing, interviewing experts, other fieldwork and so on.
  • Create a plan outlining how you will gather the information you need to answer your research or evaluation questions. Include a timeframe and be clear that you have the resources and equipment to carry out the work. Depending on the nature of the case study or the topic being studied a case study may require several meetings/interviews over a period of many months, or it might need just a one off interview. What does yours need?
  • Decide on the exact subject of the study. Is this a specific person or a small group of people? If yes, plan how you will get in touch with them and invite them to take part in the case study. How flexible can you be in terms of time and travel? Does this limit your access to potential participants?
  • Design interview questions that are open and will enable the participant to provide in-depth answers. Avoid questions that can be answered with a single yes or no and make sure the questions are flexible and allow the participant to talk openly and freely.

3. Get recruiting!

  • You may have a specific individual in mind, or specific criteria. You will need to invite people to participate and make very clear that they are able to withdraw at any point.
  • You will need consent from the participants. Make sure the purpose of the case study, why you are doing it, what it will be used for, and the methods and time frames are extremely clear to the potential participants. You will need written consent that demonstrates that the participant understands this. Additionally, if you intend to digitally record an interview or take notes, make sure you have the participants’ permission first.
  • If your central method is observation, this will be open observation – the participant must be aware of your presence and have agreed to it. You are not allowed to observe without the participant’s permission!

4. Get conducting!

  • Interviewing – Agree a mutually suitable time and venue for the case study interview. This may be a one-off or the first of many over several months. Make sure the participant is in an environment they are comfortable and able to talk in. Equally important, however, is that the environment is safe for you and conducive to conducting a case study interview – if it is a private space, are you safe? If it is a public space, make sure it is not too noisy or likely to be affected by interruptions.
  • Decide on the best method of recording the interview information – digital recording is less intrusive and lets you engage in the conversation better than attempting to take notes alone. Taking notes can mean that your concentration is focused on the writing rather than the listening, so you can miss vital points. It can also be off-putting for the participant if there is no eye contact because you are scribing throughout the conversation. However, some participants will not like being digitally recorded, so it is best to discuss this with them first. If you are digitally recording, always test the equipment first. Even then, you will still need to take notes on key points: things you would like to investigate further, questions that arise, points at which you don’t want to interrupt the conversation, or anything that will not be captured by the recording, such as body language or other observations.
  • Depending on the total length of your case study, you might hold a one-off interview, interview weekly, once every month or two, or just once or twice a year. Begin with the interview questions you prepared in the preparation and design phases, then iterate to dig deeper into the topics. Ask about experience and meaning – ask the participant what it is like to go through the experience you are studying and what the experience means to them. Later interviews are an opportunity to ask questions that fill gaps in your knowledge, or that are particularly relevant to the development of the case study or to answering your questions.
  • Observing – Recording observations can be done manually (i.e. taking notes) or digitally via a camcorder or similar. It is important to capture detail about the subject/participant and their interactions with others and the environment, their behaviour, and any other context and detail that is relevant to your questions.

5. Get analysing!

  • Write up your notes, transcribe your interviews, or make notes from your video recordings. Remember that if you are transcribing it is important to include pauses, laughter and other descriptive sounds, as well as commentary on tone and intonation, to better convey the story. Include the contextual information and other observations that are important, such as when and where the interview took place (you will not necessarily make this public) and any issues that arose, such as interruptions that affected the interview or, if there were multiple interviews, anything of significance that happened in the periods between them.
  • Thematically code (look for themes) and look for key parts of the interviews that will answer your original questions. Also be aware that new or unexpected information may have come through the process that is very important or interesting.
  • Arrange the notes or transcriptions from the interviews and/or observations into a case study. It is not likely that you will be able to use the transcriptions without reorganising them, but if you are rewriting the story in your own words, be careful not to lose the meaning and language that reflect the participant.

6. Get sign off!

  • Once you have drafted your case study make sure the participant(s) have sight of it and an opportunity to say whether you have captured their story and are representing it/them as they would like.

7. Get disseminating!

  • More information about disseminating evaluations and case studies can be found on the  Evaluation Toolkit site .
  • Remember case studies are not designed for large group studies or statistical analysis and do not aim to answer a research question definitively.
  • Do background/context research where possible.
  • Establishing trust with participants is crucial and can result in less inhibited behaviour. Observing people in their home, workplaces, or other “natural” environments may be more effective than bringing them to a laboratory or office.
  • Be aware that if you are observing it is likely that because subjects know they are being studied, their behaviour will change.
  • Take notes – extensive notes during observation will be vital.
  • Take notes even if you are digitally recording an interview, to capture your own thinking, points to follow up on, or observations.
  • In some case studies, it may be appropriate to ask the participant to record experiences in a diary – especially if there are periods between your interviews or observations that you wish to capture data on.
  • Stay rigorous. A case study may feel less data-driven than a medical trial or a scientific experiment, but attention to rigor and valid methodology remains vital.
  • When reviewing your notes, discard possible conclusions that do not have detailed observation or evidence backing them up.
  • A case study might reveal new and unexpected results, and lead to research taking new directions.
  • A case study cannot be generalised to fit a whole population.
  • Since you aren’t conducting a statistical analysis, you do not need to recruit a diverse cross-section of society. You should be aware of any biases in your small sample, and make them clear in your report, but they do not invalidate your research.
  • Useful resource: ‘Case Study Research: Design and Methods’, Robert K Yin, SAGE publications 2013.


The Evaluation and Evidence toolkits go hand in hand. Using and generating evidence to inform decision making is vital to improving services and people’s lives.

The toolkits have been developed by the NHS Bristol, North Somerset and South Gloucestershire Integrated Care Board (BNSSG ICB), the National Institute for Health and Care Research Applied Research Collaboration West (NIHR ARC West) and Health Innovation West of England .

Case Study Evaluation Approach

A case study evaluation approach can be an incredibly powerful tool for monitoring and evaluating complex programs and policies. By identifying common themes and patterns, this approach allows us to better understand the successes and challenges faced by the program. In this article, we’ll explore the benefits of using a case study evaluation approach in the monitoring and evaluation of projects, programs, and public policies.

Introduction to Case Study Evaluation Approach
A case study evaluation approach is a great way to gain an in-depth understanding of a particular issue or situation. This type of approach allows the researcher to observe, analyze, and assess the effects of a particular situation on individuals or groups.

An individual, a location, or a project may serve as the focal point of a case study’s attention. Quantitative and qualitative data are frequently used in conjunction with one another.

It also allows the researcher to gain insights into how people react to external influences. By using a case study evaluation approach, researchers can learn how factors such as a policy change or a new technology have affected individuals and communities. The data gathered through this approach can be used to formulate effective strategies for responding to changes and challenges. Ultimately, this monitoring and evaluation approach helps organizations make better decisions about the implementation of their plans.

This approach can be used to assess the effectiveness of a policy, program, or initiative by considering specific elements such as implementation processes, outcomes, and impact. A case study evaluation approach can provide an in-depth understanding of the effectiveness of a program by closely examining the processes involved in its implementation. This includes understanding the context, stakeholders, and resources to gain insight into how well a program is functioning or has been executed. By evaluating these elements, it can help to identify areas for improvement and suggest potential solutions. The findings from this approach can then be used to inform decisions about policies, programs, and initiatives for improved outcomes.

It is also useful for determining whether other policies, programs, or initiatives could be applied to comparable situations. In short, the case study evaluation approach is an effective method for judging the effectiveness of specific policies, programs, or initiatives, and for identifying, from the successes of previous cases, approaches that could be transferred to similar settings to achieve similar or improved outcomes.

The Advantages of a Case Study Evaluation Approach

A case study evaluation approach offers the advantage of providing in-depth insight into a particular program or policy. This can be accomplished by analyzing data and observations collected from a range of stakeholders such as program participants, service providers, and community members. The monitoring and evaluation approach is used to assess the impact of programs and inform the decision-making process to ensure successful implementation. It can help identify underlying issues that need to be addressed to improve program effectiveness, and it provides a reality check on how well programs are actually working, allowing organizations to make adjustments as needed. Overall, a case study monitoring and evaluation approach helps to ensure that policies and programs are achieving their objectives while providing valuable insight into how they are performing.

By taking a qualitative approach to data collection and analysis, case study evaluations are able to capture nuances in the context of a particular program or policy that can be overlooked when relying solely on quantitative methods. Using this approach, insights can be gleaned from looking at the individual experiences and perspectives of actors involved, providing a more detailed understanding of the impact of the program or policy than is possible with other evaluation methodologies. As such, case study monitoring evaluation is an invaluable tool in assessing the effectiveness of a particular initiative, enabling more informed decision-making as well as more effective implementation of programs and policies.

Furthermore, this approach is an effective way to uncover experiential information that can inform the ongoing improvement of policy and programming over time. By analyzing the data gathered through this systematic approach, stakeholders can gain deeper insight into how best to make meaningful, long-term changes in their organizations.

Types of Case Studies

Case studies come in a variety of forms, each suited to a different set of evaluation tasks. Evaluators commonly describe six distinct types of case study: illustrative, exploratory, critical instance, program implementation, program effects, and cumulative.

Illustrative Case Study

An illustrative case study is a type of case study that is used to provide a detailed and descriptive account of a particular event, situation, or phenomenon. It is often used in research to provide a clear understanding of a complex issue, and to illustrate the practical application of theories or concepts.

An illustrative case study typically uses qualitative data, such as interviews, surveys, or observations, to provide a detailed account of the unit being studied. The case study may also include quantitative data, such as statistics or numerical measurements, to provide additional context or to support the qualitative data.

The goal of an illustrative case study is to provide a rich and detailed description of the unit being studied, and to use this information to illustrate broader themes or concepts. For example, an illustrative case study of a successful community development project may be used to illustrate the importance of community engagement and collaboration in achieving development goals.

One of the strengths of an illustrative case study is its ability to provide a detailed and nuanced understanding of a particular issue or phenomenon. By focusing on a single case, the researcher is able to provide a detailed and in-depth analysis that may not be possible through other research methods.

However, one limitation of an illustrative case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a single unit, it may not be representative of other similar units or situations.

A well-executed case study can shed light on wider research topics or concepts through its thorough and descriptive analysis of a specific event or phenomenon.

Exploratory Case Study

An exploratory case study is a type of case study that is used to investigate a new or previously unexplored phenomenon or issue. It is often used in research when the topic is relatively unknown or when there is little existing literature on the topic.

Exploratory case studies are typically qualitative in nature and use a variety of methods to collect data, such as interviews, observations, and document analysis. The focus of the study is to gather as much information as possible about the phenomenon being studied and to identify new and emerging themes or patterns.

The goal of an exploratory case study is to provide a foundation for further research and to generate hypotheses about the phenomenon being studied. By exploring the topic in-depth, the researcher can identify new areas of research and generate new questions to guide future research.

One of the strengths of an exploratory case study is its ability to provide a rich and detailed understanding of a new or emerging phenomenon. By using a variety of data collection methods, the researcher can gather a broad range of data and perspectives to gain a more comprehensive understanding of the phenomenon being studied.

However, one limitation of an exploratory case study is that the findings may not be generalizable to other contexts or populations. Because the study is focused on a new or previously unexplored phenomenon, the findings may not be applicable to other situations or populations.

Exploratory case studies are an effective research strategy for learning about novel occurrences, developing research hypotheses, and gaining a deep familiarity with a topic of study.

Critical Instance Case Study

A critical instance case study is a type of case study that focuses on a specific event or situation that is critical to understanding a broader issue or phenomenon. The goal of a critical instance case study is to analyze the event in depth and to draw conclusions about the broader issue or phenomenon based on the analysis.

A critical instance case study typically uses qualitative data, such as interviews, observations, or document analysis, to provide a detailed and nuanced understanding of the event being studied. The data are analyzed using various methods, such as content analysis or thematic analysis, to identify patterns and themes that emerge from the data.

The critical instance case study is often used in research when a particular event or situation is critical to understanding a broader issue or phenomenon. For example, a critical instance case study of a successful disaster response effort may be used to identify key factors that contributed to the success of the response, and to draw conclusions about effective disaster response strategies more broadly.

One of the strengths of a critical instance case study is its ability to provide a detailed and in-depth analysis of a particular event or situation. By focusing on a critical instance, the researcher is able to provide a rich and nuanced understanding of the event, and to draw conclusions about broader issues or phenomena based on the analysis.

However, one limitation of a critical instance case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a specific event or situation, the findings may not be applicable to other similar events or situations.

A critical instance case study is a valuable research method that can provide a detailed and nuanced understanding of a particular event or situation and can be used to draw conclusions about broader issues or phenomena based on the analysis.

Program Implementation Case Study

A program implementation case study is a type of case study that focuses on the implementation of a particular program or intervention. The goal of the case study is to provide a detailed and comprehensive account of the program implementation process, and to identify factors that contributed to the success or failure of the program.

Program implementation case studies typically use qualitative data, such as interviews, observations, and document analysis, to provide a detailed and nuanced understanding of the program implementation process. The data are analyzed using various methods, such as content analysis or thematic analysis, to identify patterns and themes that emerge from the data.

The program implementation case study is often used in research to evaluate the effectiveness of a particular program or intervention, and to identify strategies for improving program implementation in the future. For example, a program implementation case study of a school-based health program may be used to identify key factors that contributed to the success or failure of the program, and to make recommendations for improving program implementation in similar settings.

One of the strengths of a program implementation case study is its ability to provide a detailed and comprehensive account of the program implementation process. By using qualitative data, the researcher is able to capture the complexity and nuance of the implementation process, and to identify factors that may not be captured by quantitative data alone.

However, one limitation of a program implementation case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a specific program or intervention, the findings may not be applicable to other programs or interventions in different settings.

An effective research tool, a case study of program implementation may illuminate the intricacies of the implementation process and point the way towards future enhancements.

Program Effects Case Study

A program effects case study is a research method that evaluates the effectiveness of a particular program or intervention by examining its outcomes or effects. The purpose of this type of case study is to provide a detailed and comprehensive account of the program’s impact on its intended participants or target population.

A program effects case study typically employs both quantitative and qualitative data collection methods, such as surveys, interviews, and observations, to evaluate the program’s impact on the target population. The data is then analyzed using statistical and thematic analysis to identify patterns and themes that emerge from the data.

The program effects case study is often used to evaluate the success of a program and identify areas for improvement. For example, a program effects case study of a community-based HIV prevention program may evaluate the program’s effectiveness in reducing HIV transmission rates among high-risk populations and identify factors that contributed to the program’s success.

One of the strengths of a program effects case study is its ability to provide a detailed and nuanced understanding of a program’s impact on its intended participants or target population. By using both quantitative and qualitative data, the researcher can capture both the objective and subjective outcomes of the program and identify factors that may have contributed to the outcomes.

However, a limitation of the program effects case study is that it may not be generalizable to other populations or contexts. Since the case study focuses on a particular program and population, the findings may not be applicable to other programs or populations in different settings.

A program effects case study is a good way to do research because it can give a detailed look at how a program affects the people it is meant for. This kind of case study can be used to figure out what needs to be changed and how to make programs that work better.

Cumulative Case Study

A cumulative case study is a type of case study that involves the collection and analysis of multiple cases to draw broader conclusions. Unlike a single-case study, which focuses on one specific case, a cumulative case study combines multiple cases to provide a more comprehensive understanding of a phenomenon.

The purpose of a cumulative case study is to build up a body of evidence through the examination of multiple cases. The cases are typically selected to represent a range of variations or perspectives on the phenomenon of interest. Data is collected from each case using a range of methods, such as interviews, surveys, and observations.

The data is then analyzed across cases to identify common themes, patterns, and trends. The analysis may involve both qualitative and quantitative methods, such as thematic analysis and statistical analysis.
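As a rough illustration (not part of the approach described above), the cross-case step of tallying how widely each theme recurs might be sketched as follows; the cases and themes here are invented for the example:

```python
from collections import Counter

# Hypothetical coded data: themes identified in each case (illustrative only)
cases = {
    "case_a": ["community engagement", "funding gaps", "staff turnover"],
    "case_b": ["community engagement", "leadership support"],
    "case_c": ["funding gaps", "community engagement", "leadership support"],
}

# Count the number of cases in which each theme appears (set() avoids
# double-counting a theme coded more than once within the same case)
theme_counts = Counter(
    theme for themes in cases.values() for theme in set(themes)
)

for theme, n in theme_counts.most_common():
    print(f"{theme}: appears in {n} of {len(cases)} cases")
```

A tally like this only surfaces candidate patterns; interpreting why a theme recurs across cases remains a qualitative judgement.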

The cumulative case study is often used in research to develop and test theories about a phenomenon. For example, a cumulative case study of successful community-based health programs may be used to identify common factors that contribute to program success, and to develop a theory about effective community-based health program design.

One of the strengths of the cumulative case study is its ability to draw on a range of cases to build a more comprehensive understanding of a phenomenon. By examining multiple cases, the researcher can identify patterns and trends that may not be evident in a single case study. This allows for a more nuanced understanding of the phenomenon and helps to develop more robust theories.

However, one limitation of the cumulative case study is that it can be time-consuming and resource-intensive to collect and analyze data from multiple cases. Additionally, the selection of cases may introduce bias if the cases are not representative of the population of interest.

In summary, a cumulative case study is a valuable research method that can provide a more comprehensive understanding of a phenomenon by examining multiple cases. This type of case study is particularly useful for developing and testing theories and identifying common themes and patterns across cases.

Potential Challenges with a Case Study Evaluation Approach

When conducting a case study evaluation approach, one of the main challenges is the need to establish a contextually relevant research design that accounts for the unique factors of the case being studied. This requires close monitoring of the case, its environment, and relevant stakeholders. In addition, the researcher must build a framework for the collection and analysis of data that is able to draw meaningful conclusions and provide valid insights into the dynamics of the case. Ultimately, an effective case study monitoring evaluation approach will allow researchers to form an accurate understanding of their research subject.

Additionally, depending on the size and scope of the case, there may be concerns about the availability of resources and personnel for data collection and analysis. To address these issues, a mix of methods can be adopted, such as interviews, surveys, focus groups, and document reviews, which together can provide valuable insights into the effectiveness and implementation of the case in question. This type of evaluation can also be tailored to the specific needs of the case study to ensure that all relevant data is collected and handled appropriately.

When dealing with highly sensitive or confidential subject matter, researchers must take extra measures to prevent bias during data collection and to protect participant anonymity, while still collecting valid data that yields reliable results.

Moreover, when conducting a case study evaluation it is important to consider the potential implications of the data gathered. Maintaining confidentiality and deploying ethical research practices are essential to an unbiased and accurate evaluation.

Guiding Principles for Successful Implementation of a Case Study Evaluation Approach

When planning and implementing a case study evaluation approach, it is important to ensure the guiding principles of research quality, data collection, and analysis are met. To uphold these principles, it is essential to develop a comprehensive monitoring and evaluation plan. This plan should clearly outline the steps to be taken during the data collection and analysis process, and provide detailed descriptions of the project objectives, target population, key indicators, and timeline. It is also important to include metrics or benchmarks to monitor progress and identify any potential areas for improvement. By implementing such an approach, it will be possible to ensure that the case study evaluation yields valid and reliable results.

To ensure successful implementation, it is essential to establish a reliable data collection process that documents the scope of the study, the participants involved, and the methods used to collect data. It is equally important to be clear about what the evaluation will examine and how the results will be used. Ultimately, effective planning is key to ensuring that the evaluation process yields meaningful insights.
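To make the idea of indicators and benchmarks concrete, a monitoring and evaluation plan could be represented as a simple data structure and checked programmatically. This is only a sketch; the field names and values are assumptions, not part of any standard:

```python
# Illustrative M&E plan: each indicator pairs an observed value with its benchmark
plan = {
    "objective": "Improve programme reach",
    "indicators": [
        {"name": "participants enrolled", "benchmark": 100, "observed": 120},
        {"name": "sessions delivered", "benchmark": 24, "observed": 18},
    ],
}

# Flag any indicator falling short of its benchmark for follow-up
shortfalls = [
    ind["name"]
    for ind in plan["indicators"]
    if ind["observed"] < ind["benchmark"]
]

print(shortfalls)
```

Even a minimal check like this makes the plan's benchmarks actionable rather than aspirational, since progress reviews can be run the same way each time.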

Benefits of Incorporating the Case Study Evaluation Approach in the Monitoring and Evaluation of Projects and Programmes

Using a case study approach in monitoring and evaluation allows for a more detailed and in-depth exploration of the project’s success, helping to identify key areas of improvement and successes that may have been overlooked through traditional evaluation. Through this case study method, specific data can be collected and analyzed to identify trends and different perspectives that can support the evaluation process. This data can allow stakeholders to gain a better understanding of the project’s successes and failures, helping them make informed decisions on how to strengthen current activities or shape future initiatives. From a monitoring and evaluation standpoint, this approach can provide an increased level of accuracy in terms of accurately assessing the effectiveness of the project.

This can provide valuable insights into what works, and what doesn't, when implementing projects and programs, aiding decision-makers in making future plans that better meet their objectives. Monitoring and evaluation is, however, just one way of assessing the success of a case study; other research methods, such as surveys and interviews, can also help to evaluate the success of a project or program.

In conclusion, a case study evaluation approach can be incredibly useful in monitoring and evaluating complex programs and policies. By exploring key themes, patterns and relationships, organizations can gain a detailed understanding of the successes, challenges and limitations of their program or policy. This understanding can then be used to inform decision-making and improve outcomes for those involved. With its ability to provide an in-depth understanding of a program or policy, the case study evaluation approach has become an invaluable tool for monitoring and evaluation professionals.


  • Open access
  • Published: 14 May 2024

Developing a survey to measure nursing students’ knowledge, attitudes and beliefs, influences, and willingness to be involved in Medical Assistance in Dying (MAiD): a mixed method modified e-Delphi study

Jocelyn Schroeder, Barbara Pesut, Lise Olsen, Nelly D. Oelke & Helen Sharp

BMC Nursing volume 23, Article number: 326 (2024)


Medical Assistance in Dying (MAiD) was legalized in Canada in 2016. Canada’s legislation is the first to permit Nurse Practitioners (NPs) to serve as independent MAiD assessors and providers. Registered Nurses (RNs) also have important roles in MAiD that include MAiD care coordination; client and family teaching and support; MAiD procedural quality; healthcare provider and public education; and bereavement care for family. Nurses have a right under the law to conscientious objection to participating in MAiD. Therefore, it is essential to prepare nurses in their entry-level education for the practice implications and moral complexities inherent in this practice. Knowing what nursing students think about MAiD is a critical first step. The purpose of this study, therefore, was to develop a survey to measure nursing students’ knowledge, attitudes and beliefs, influences, and willingness to be involved in MAiD in the Canadian context.

The design was a mixed-method, modified e-Delphi method that entailed item generation from the literature, item refinement through a 2 round survey of an expert faculty panel, and item validation through a cognitive focus group interview with nursing students. The settings were a University located in an urban area and a College located in a rural area in Western Canada.

During phase 1, a 56-item survey was developed from existing literature that included demographic items and items designed to measure experience with death and dying (including MAiD), education and preparation, attitudes and beliefs, influences on those beliefs, and anticipated future involvement. During phase 2, an expert faculty panel reviewed, modified, and prioritized the items yielding 51 items. During phase 3, a sample of nursing students further evaluated and modified the language in the survey to aid readability and comprehension. The final survey consists of 45 items including 4 case studies.

Systematic evaluation of knowledge-to-date coupled with stakeholder perspectives supports robust survey design. This study yielded a survey to assess nursing students’ attitudes toward MAiD in a Canadian context.

The survey is appropriate for use in education and research to measure knowledge and attitudes about MAiD among nurse trainees and can be a helpful step in preparing nursing students for entry-level practice.

Peer Review reports

Medical Assistance in Dying (MAiD) is permitted under an amendment to Canada’s Criminal Code which was passed in 2016 [ 1 ]. MAiD is defined in the legislation as both self-administered and clinician-administered medication for the purpose of causing death. In the 2016 Bill C-14 legislation one of the eligibility criteria was that an applicant for MAiD must have a reasonably foreseeable natural death although this term was not defined. It was left to the clinical judgement of MAiD assessors and providers to determine the time frame that constitutes reasonably foreseeable [ 2 ]. However, in 2021 under Bill C-7, the eligibility criteria for MAiD were changed to allow individuals with irreversible medical conditions, declining health, and suffering, but whose natural death was not reasonably foreseeable, to receive MAiD [ 3 ]. This population of MAiD applicants are referred to as Track 2 MAiD (those whose natural death is foreseeable are referred to as Track 1). Track 2 applicants are subject to additional safeguards under the 2021 C-7 legislation.

Three additional proposed changes to the legislation have been extensively studied by Canadian Expert Panels (Council of Canadian Academies [CCA]) [ 4 , 5 , 6 ]. First, under the legislation that defines Track 2, individuals with mental disease as their sole underlying medical condition may apply for MAiD, but implementation of this practice is embargoed until March 2027 [ 4 ]. Second, there is consideration of allowing MAiD to be implemented through advanced consent. This would make it possible for persons living with dementia to receive MAiD after they have lost the capacity to consent to the procedure [ 5 ]. Third, there is consideration of extending MAiD to mature minors. A mature minor is defined as “a person under the age of majority…and who has the capacity to understand and appreciate the nature and consequences of a decision” ([ 6 ] p. 5). In summary, since the legalization of MAiD in 2016 the eligibility criteria and safeguards have evolved significantly with consequent implications for nurses and nursing care. Further, the number of Canadians who access MAiD shows steady increases since 2016 [ 7 ] and it is expected that these increases will continue in the foreseeable future.

Nurses have been integral to MAiD care in the Canadian context. While other countries such as Belgium and the Netherlands also permit euthanasia, Canada is the first country to allow Nurse Practitioners (Registered Nurses with additional preparation typically achieved at the graduate level) to act independently as assessors and providers of MAiD [ 1 ]. Although the role of Registered Nurses (RNs) in MAiD is not defined in federal legislation, it has been addressed at the provincial/territorial-level with variability in scope of practice by region [ 8 , 9 ]. For example, there are differences with respect to the obligation of the nurse to provide information to patients about MAiD, and to the degree that nurses are expected to ensure that patient eligibility criteria and safeguards are met prior to their participation [ 10 ]. Studies conducted in the Canadian context indicate that RNs perform essential roles in MAiD care coordination; client and family teaching and support; MAiD procedural quality; healthcare provider and public education; and bereavement care for family [ 9 , 11 ]. Nurse practitioners and RNs are integral to a robust MAiD care system in Canada and hence need to be well-prepared for their role [ 12 ].

Previous studies have found that end of life care, and MAiD specifically, raise complex moral and ethical issues for nurses [ 13 , 14 , 15 , 16 ]. The knowledge, attitudes, and beliefs of nurses are important across practice settings because nurses have consistent, ongoing, and direct contact with patients who experience chronic or life-limiting health conditions. Canadian studies exploring nurses’ moral and ethical decision-making in relation to MAiD reveal that although some nurses are clear in their support for, or opposition to, MAiD, others are unclear on what they believe to be good and right [ 14 ]. Empirical findings suggest that nurses go through a period of moral sense-making that is often informed by their family, peers, and initial experiences with MAiD [ 17 , 18 ]. Canadian legislation and policy specifies that nurses are not required to participate in MAiD and may recuse themselves as conscientious objectors with appropriate steps to ensure ongoing and safe care of patients [ 1 , 19 ]. However, with so many nurses having to reflect on and make sense of their moral position, it is essential that they are given adequate time and preparation to make an informed and thoughtful decision before they participate in a MAiD death [ 20 , 21 ].

It is well established that nursing students receive inconsistent exposure to end of life care issues [ 22 ] and little or no training related to MAiD [ 23 ]. Without such education and reflection time in pre-entry nursing preparation, nurses are at significant risk of moral harm. An important first step in providing this preparation is to be able to assess the knowledge, values, and beliefs of nursing students regarding MAiD and end of life care. As demand for MAiD increases and its complexities grow, it is critical to understand the knowledge, attitudes, and likelihood of engagement with MAiD among nursing students, both as a baseline upon which to build curriculum and as a means to track these variables over time.

Aim, design, and setting

The aim of this study was to develop a survey to measure nursing students’ knowledge, attitudes and beliefs, influences, and willingness to be involved in MAiD in the Canadian context. We sought to explore their willingness to be involved both in the registered nursing role and in the nurse practitioner role, should they choose to prepare themselves to that level of education. The design was a mixed-methods, modified e-Delphi that entailed item generation, item refinement through an expert faculty panel [ 24 , 25 , 26 ], and initial item validation through a cognitive focus group interview with nursing students [ 27 ]. The settings were a university located in an urban area and a college located in a rural area in Western Canada.

Participants

A panel of 10 faculty from the two nursing education programs was recruited for Phase 2 of the e-Delphi. To be included, faculty were required to have a minimum of three years of experience in nurse education, be employed as nursing faculty, and self-identify as having experience with MAiD. A convenience sample of 5 fourth-year nursing students was recruited to participate in Phase 3. Students had to be in good standing in the nursing program and willing to share their experiences of the survey in an online group interview format.

The modified e-Delphi was conducted in 3 phases: Phase 1 entailed item generation through literature and existing survey review. Phase 2 entailed item refinement through a faculty expert panel review with focus on content validity, prioritization, and revision of item wording [ 25 ]. Phase 3 entailed an assessment of face validity through focus group-based cognitive interview with nursing students.

Phase I. Item generation through literature review

The goal of phase 1 was to develop a bank of survey items that would represent the variables of interest and which could be provided to expert faculty in Phase 2. Initial survey items were generated through a literature review of similar surveys designed to assess knowledge and attitudes toward MAiD/euthanasia in healthcare providers; Canadian empirical studies on nurses’ roles and/or experiences with MAiD; and legislative and expert panel documents that outlined proposed changes to the legislative eligibility criteria and safeguards. The literature review was conducted in three online databases: CINAHL, PsycINFO, and Medline. Key words for the search included nurses , nursing students , medical students , NPs, MAiD , euthanasia , assisted death , and end-of-life care . Only articles written in English were reviewed. The legalization and legislation of MAiD is new in many countries; therefore, studies more than twenty years old were excluded, and no exclusion criteria were set for country of origin.

Items from surveys designed to measure similar variables in other health care providers and geographic contexts were placed in a table, and similar items were collated and revised into a single item. Key variables were then identified from the empirical literature on nurses and MAiD in Canada and checked against the items derived from the surveys to ensure that each key variable was represented. For example, conscientious objection has figured prominently in the Canadian literature, but there were few items that assessed knowledge of conscientious objection in other surveys, so items were added [ 15 , 21 , 28 , 29 ]. Finally, four case studies were added to the survey to address the anticipated changes to the Canadian legislation. The case studies were based upon the inclusion of mature minors, advanced consent, and mental disorder as the sole underlying medical condition. The intention was to assess nurses’ beliefs and comfort with these potential legislative changes.

Phase 2. Item refinement through expert panel review

The goal of phase 2 was to refine and prioritize the proposed survey items identified in phase 1 using a modified e-Delphi approach to achieve consensus among an expert panel [ 26 ]. Items from phase 1 were presented to the expert faculty panel using a Qualtrics (Provo, UT) online survey. Panel members were asked to review each item to determine whether it should be included, excluded, or adapted for the survey. When adapted was selected, faculty experts were asked to provide rationale and suggestions for adaptation in an open text box. Items that reached 75% consensus for either inclusion or adaptation were retained [ 25 , 26 ]. New items were categorized and added, and a revised survey was presented to the panel of experts in round 2. Panel members were again asked to review each item, including new items, to determine whether it should be included, excluded, or adapted. Round 2 of the modified e-Delphi approach also included an item prioritization activity, in which participants rated the importance of each item on a 5-point Likert scale (low to high importance), which De Vaus [ 30 ] states is helpful for increasing the reliability of responses. Items that reached 75% consensus on inclusion were then considered in relation to the importance assigned by the expert panel. Quantitative data were managed using SPSS (IBM Corp).
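The tally behind the include/exclude/adapt decision can be sketched in a few lines. This is an illustrative sketch only: the vote data and function name are hypothetical, the reading that the 75% threshold applies to inclusion and adaptation separately is an assumption, and only the threshold itself is taken from the study.

```python
from collections import Counter

CONSENSUS = 0.75  # retention threshold reported in the study


def consensus_decision(votes):
    """Return the panel decision for one item.

    votes: list of 'include' / 'exclude' / 'adapt' strings, one per panelist.
    Assumption: an item is retained when 'include' or 'adapt' alone
    reaches the 75% threshold; otherwise it is flagged for team review.
    """
    counts = Counter(votes)
    n = len(votes)
    for choice in ("include", "adapt"):
        if counts[choice] / n >= CONSENSUS:
            return choice
    return "review"  # no consensus; deliberated by the research team


# Hypothetical votes from a 10-member panel for one item
votes = ["include"] * 8 + ["adapt", "exclude"]
print(consensus_decision(votes))  # 8/10 = 80% >= 75%, so 'include'
```

Under this rule, a 5–5 split between include and exclude would fall below the threshold and be routed to discussion, which mirrors how the two non-consensus questions in this study were handled.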

Phase 3. Face validity through cognitive interviews with nursing students

The goal of phase 3 was to obtain initial face validity of the proposed survey using a sample of nursing student informants. More specifically, student participants were asked to discuss how items were interpreted, to identify confusing wording or other problematic construction of items, and to provide feedback about the survey as a whole, including readability and organization [ 31 , 32 , 33 ]. The focus group was held online and audio recorded. A semi-structured interview guide developed for this study, focusing on clarity, meaning, order, and wording of questions; emotions evoked by the questions; and overall survey cohesion and length, was used to obtain data (see Supplementary Material 2  for the interview guide). A prompt to “think aloud” was used to limit interviewer-imposed bias and encourage participants to describe their thoughts and responses to a given item as they reviewed survey items [ 27 ]. Where needed, verbal probes such as “could you expand on that” were used to encourage participants to elaborate on their responses [ 27 ]. Student participants’ feedback was collated verbatim and presented to the research team, where potential survey modifications were negotiated and finalized among team members. Conventional content analysis [ 34 ] of focus group data was conducted to identify key themes that emerged through discussion with students. Themes were derived from the data by grouping common responses and then using those common responses to modify survey items.

Ten nursing faculty participated in the expert panel. Eight of the 10 faculty self-identified as female. No faculty panel members reported conscientious objector status; 90% reported general agreement with MAiD, with one respondent indicating their view as “unsure.” Six of the 10 faculty experts had 16 years of experience or more working as nurse educators.

Five nursing students participated in the cognitive interview focus group. The duration of the focus group was 2.5 h. All participants identified that they were born in Canada, self-identified as female (one preferred not to say) and reported having received some instruction about MAiD as part of their nursing curriculum. See Tables  1 and 2 for the demographic descriptors of the study sample. Study results will be reported in accordance with the study phases. See Fig.  1 for an overview of the results from each phase.

figure 1

Fig. 1  Overview of survey development findings

Phase 1: survey item generation

Review of the literature identified that no existing survey was available for use with nursing students in the Canadian context. However, an analysis of themes across qualitative and quantitative studies of physicians, medical students, nurses, and nursing students provided sufficient data to develop a preliminary set of items suitable for adaptation to a population of nursing students.

Four major themes and factors that influence knowledge, attitudes, and beliefs about MAiD were evident from the literature: (i) endogenous or individual factors such as age, gender, personally held values, religion, religiosity, and/or spirituality [ 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 ], (ii) experience with death and dying in personal and/or professional life [ 35 , 40 , 41 , 43 , 44 , 45 ], (iii) training including curricular instruction about clinical role, scope of practice, or the law [ 23 , 36 , 39 ], and (iv) exogenous or social factors such as the influence of key leaders, colleagues, friends and/or family, professional and licensure organizations, support within professional settings, and/or engagement in MAiD in an interdisciplinary team context [ 9 , 35 , 46 ].

Studies of nursing students also suggest overlap across these categories. For example, value for patient autonomy [ 23 ] and the moral complexity of decision-making [ 37 ] are important factors that contribute to attitudes about MAiD and may stem from a blend of personally held values coupled with curricular content, professional training and norms, and clinical exposure. For example, students report that participation in end of life care allows for personal growth, shifts in perception, and opportunities to build therapeutic relationships with their clients [ 44 , 47 , 48 ].

Preliminary items generated from the literature resulted in 56 questions from 11 published sources (see Table  3 ). These items were constructed across four main categories: (i) socio-demographic questions; (ii) end of life care questions; (iii) knowledge about MAiD; and (iv) comfort and willingness to participate in MAiD. Knowledge questions were refined to reflect current MAiD legislation, policies, and regulatory frameworks. The Falconer [ 39 ] and Freeman [ 45 ] studies were foundational sources for item selection. Additionally, four case studies were written to reflect the most recent anticipated changes to MAiD legislation; all used the same open-ended core questions to address respondents’ perspectives about the patient’s right to make the decision, comfort in assisting a physician or NP to administer MAiD in that scenario, and hypothesized comfort serving as a primary provider if qualified as an NP in future. Response options for the survey were also constructed during this stage and included open text, categorical, yes/no , and Likert scales.

Phase 2: faculty expert panel review

Of the 56 items presented to the faculty panel, 54 questions reached 75% consensus. However, based upon the qualitative responses, 9 items were removed, largely because they were felt to be repetitive. Items that generated the most controversy related to measuring religion and spirituality in the Canadian context, defining end of life care when there are no agreed upon time frames (e.g., last days, months, or years), and predicting willingness to be involved in future events – thus predicting their future selves. Phase 2, round 1 resulted in an initial set of 47 items, which were then presented back to the faculty panel in round 2.

Of the 47 initial questions presented to the panel in round 2, 45 reached a level of consensus of 75% or greater, and 34 of these reached 100% consensus, with all participants choosing to include them without adaptation. For each question, level of importance was determined on a 5-point Likert scale (1 = very unimportant, 2 = somewhat unimportant, 3 = neutral, 4 = somewhat important, and 5 = very important). Figure  2 provides an overview of the level of importance assigned to each item.

figure 2

Fig. 2  Ranking level of importance for survey items

After round 2, a careful analysis of participant comments and level of importance was completed by the research team. While the main method of survey item development came from participants’ responses to the first round of Delphi consensus ratings, level of importance was used to assist in deciding whether to keep or modify questions that created controversy, or that rated lower in the include/exclude/adapt portion of the Delphi. Survey items that rated low in level of importance included questions about future roles, sex and gender, and religion/spirituality. After deliberation, the research team retained these questions based upon the importance of these variables in the scientific literature.
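The role of the importance ratings can be illustrated with a small sketch. Everything here is hypothetical — the item names, the ratings, and the 3.5 review cut-off; the study reports no numeric cut-off, only that low-rated items were deliberated by the research team.

```python
from statistics import mean

# Hypothetical round-2 importance ratings from a 10-member panel,
# on the study's 5-point scale (1 = very unimportant ... 5 = very important)
ratings = {
    "future_np_role": [2, 3, 2, 3, 3, 2, 4, 3, 2, 3],
    "knowledge_safeguards": [5, 5, 4, 5, 4, 5, 5, 4, 5, 5],
}

REVIEW_CUTOFF = 3.5  # illustrative only; not specified in the study

for item, scores in ratings.items():
    avg = mean(scores)
    # Low-importance items are flagged for team deliberation rather
    # than automatically dropped, mirroring the process described above
    flag = "review" if avg < REVIEW_CUTOFF else "keep"
    print(f"{item}: mean importance {avg:.1f} -> {flag}")
```

A flagged item like the hypothetical `future_np_role` would then be weighed against the literature, which is how the retained future-role, gender, and religion/spirituality questions were handled in this study.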

Of the 47 questions remaining from Phase 2, round 2, four were revised. In addition, the two questions that did not meet the 75% cut-off for consensus were reviewed by the research team. The first question reviewed was What is your comfort level with providing a MAiD death in the future if you were a qualified NP ? Based on a review of participant comments, it was decided to retain this question for the cognitive interviews with students in the final phase of testing. The second question asked about impacts on respondents’ views of MAiD and was changed from one item with 4 subcategories into 4 separate items, resulting in a final total of 51 items for phase 3. The revised survey was then brought forward to the cognitive interviews with student participants in Phase 3 (see Supplementary Material 1 for a complete description of item modification during round 2).

Phase 3. Outcomes of cognitive interview focus group

Of the 51 items reviewed by student participants, 29 were identified as clear with little or no discussion. Participant comments for the remaining 22 questions were noted and verified against the audio recording. Following content analysis of the comments, four key themes emerged through the student discussion: unclear or ambiguous wording; difficult to answer questions; need for additional response options; and emotional response evoked by questions. An example of unclear or ambiguous wording was a request for clarity in the use of the word “sufficient” in the context of assessing an item that read “My nursing education has provided sufficient content about the nursing role in MAiD.” “Sufficient” was viewed as subjective and “laden with…complexity that distracted me from the question.” The group recommended rewording the item to read “My nursing education has provided enough content for me to care for a patient considering or requesting MAiD.”

An example of difficulty answering questions related to limited knowledge of terms used in the legislation, such as safeguards , mature minor , eligibility criteria , and conscientious objection . Students were unclear about what these words meant relative to the legislation and indicated that this lack of clarity would hamper appropriate responses to the survey. To ensure that respondents are able to answer relevant questions, student participants recommended that the final survey include an explanation of key terms such as mature minor and conscientious objection and an overview of current legislation.

Response options were also a point of discussion. Participants noted a lack of distinction between response options of unsure and unable to say . Additionally, scaling of attitudes was noted as important since perspectives about MAiD are dynamic and not dichotomous “agree or disagree” responses. Although the faculty expert panel recommended the integration of the demographic variables of religious and/or spiritual remain as a single item, the student group stated a preference to have religion and spirituality appear as separate items. The student focus group also took issue with separate items for the variables of sex and gender, specifically that non-binary respondents might feel othered or “outed” particularly when asked to identify their sex. These variables had been created based upon best practices in health research but students did not feel they were appropriate in this context [ 49 ]. Finally, students agreed with the faculty expert panel in terms of the complexity of projecting their future involvement as a Nurse Practitioner. One participant stated: “I certainly had to like, whoa, whoa, whoa. Now let me finish this degree first, please.” Another stated, “I'm still imagining myself, my future career as an RN.”

Finally, student participants acknowledged the array of emotions that some of the items produced for them. For example, one student described positive feelings when interacting with the survey. “Brought me a little bit of feeling of joy. Like it reminded me that this is the last piece of independence that people grab on to.” Another participant, described the freedom that the idea of an advance request gave her. “The advance request gives the most comfort for me, just with early onset Alzheimer’s and knowing what it can do.” But other participants described less positive feelings. For example, the mature minor case study yielded a comment: “This whole scenario just made my heart hurt with the idea of a child requesting that.”

Based on the data gathered from the cognitive interview focus group of nursing students, revisions were made to 11 closed-ended questions (see Table  4 ) and 3 items were excluded. In the four case studies, the open-ended question related to a respondents’ hypothesized actions in a future role as NP were removed. The final survey consists of 45 items including 4 case studies (see Supplementary Material 3 ).

The aim of this study was to develop and validate a survey that can be used to track the growth of knowledge about MAiD among nursing students over time, inform training programs about curricular needs, and evaluate attitudes and willingness to participate in MAiD at time-points during training or across nursing programs over time.

The faculty expert panel and student participants in the cognitive interview focus group identified a need to establish core knowledge of the terminology and legislative rules related to MAiD. For example, within the cognitive interview group of student participants, several acknowledged lack of clear understanding of specific terms such as “conscientious objector” and “safeguards.” Participants acknowledged discomfort with the uncertainty of not knowing and their inclination to look up these terms to assist with answering the questions. This survey can be administered to nursing or pre-nursing students at any phase of their training within a program or across training programs. However, in doing so it is important to acknowledge that their baseline knowledge of MAiD will vary. A response option of “not sure” is important and provides a means for respondents to convey uncertainty. If this survey is used to inform curricular needs, respondents should be given explicit instructions not to conduct online searches to inform their responses, but rather to provide an honest appraisal of their current knowledge and these instructions are included in the survey (see Supplementary Material 3 ).

Some provincial regulatory bodies have established core competencies for entry-level nurses that include MAiD. For example, the BC College of Nurses and Midwives (BCCNM) requires “knowledge about ethical, legal, and regulatory implications of medical assistance in dying (MAiD) when providing nursing care” [ 10 , p. 6]. However, across Canada curricular content and coverage related to end of life care and MAiD is variable [ 23 ]. Given the dynamic nature of the legislation, which includes portions of the law that are embargoed until 2024, it is important to ensure that respondents are guided by current and accurate information. As the law, nursing curricula, and public attitudes continue to evolve, inclusion of core knowledge and content is essential and relevant for investigators to be able to interpret the portions of the survey focused on attitudes and beliefs about MAiD. Content knowledge portions of the survey may need to be modified over time as legislation and training change and to meet the specific purposes of the investigator.

Given the sensitive nature of the topic, it is strongly recommended that surveys be conducted anonymously and that students be provided with an opportunity to discuss their responses to the survey. A majority of feedback from both the expert panel of faculty and from student participants related to the wording and inclusion of demographic variables, in particular religion, religiosity, gender identity, and sex assigned at birth. These and other demographic variables have the potential to be highly identifying in small samples. In any instance in which the survey could be expected to yield demographic group sizes of less than 5, users should eliminate the demographic variables from the survey. For example, the profession of nursing is highly dominated by females, with over 90% of nurses identifying as female [ 50 ]. Thus, a survey within a single class of students, or even across classes in a single institution, is likely to yield a small number of male respondents and/or respondents who report a difference between sex assigned at birth and gender identity. When variables that serve to identify respondents are included, respondents are less likely to complete or submit the survey, and more likely to obscure their responses so as not to be identifiable, or to be influenced by social desirability bias rather than conveying their attitudes accurately [ 51 ]. Further, small samples do not allow for conclusive analyses or interpretation of apparent group differences. Although these variables are often included in surveys, such demographics should be included only when anonymity can be sustained. In small and/or known samples, highly identifying variables should be omitted.

There are several limitations associated with the development of this survey. The expert panel was comprised of faculty who teach nursing students and are knowledgeable about MAiD and curricular content, however none identified as a conscientious objector to MAiD. Ideally, our expert panel would have included one or more conscientious objectors to MAiD to provide a broader perspective. Review by practitioners who participate in MAiD, those who are neutral or undecided, and practitioners who are conscientious objectors would ensure broad applicability of the survey. This study included one student cognitive interview focus group with 5 self-selected participants. All student participants had held discussions about end of life care with at least one patient, 4 of 5 participants had worked with a patient who requested MAiD, and one had been present for a MAiD death. It is not clear that these participants are representative of nursing students demographically or by experience with end of life care. It is possible that the students who elected to participate hold perspectives and reflections on patient care and MAiD that differ from students with little or no exposure to end of life care and/or MAiD. However, previous studies find that most nursing students have been involved with end of life care including meaningful discussions about patients’ preferences and care needs during their education [ 40 , 44 , 47 , 48 , 52 ]. Data collection with additional student focus groups with students early in their training and drawn from other training contexts would contribute to further validation of survey items.

Future studies should incorporate pilot testing with a small sample of nursing students, followed by a larger cross-program sample, to allow evaluation of the psychometric properties of specific items and further refinement of the survey tool. Consistent with literature about the importance of leadership in the context of MAiD [ 12 , 53 , 54 ], a study of faculty knowledge, beliefs, and attitudes toward MAiD would provide context for understanding student perspectives within and across programs. Additional research is also needed to understand the timing and content coverage of MAiD across Canadian nurse training programs’ curricula.

The implementation of MAiD is complex and requires understanding of the perspectives of multiple stakeholders. Within the field of nursing this includes clinical providers, educators, and students who will deliver clinical care. A survey to assess nursing students’ attitudes toward and willingness to participate in MAiD in the Canadian context is timely, due to the legislation enacted in 2016 and subsequent modifications to the law in 2021 with portions of the law to be enacted in 2027. Further development of this survey could be undertaken to allow for use in settings with practicing nurses or to allow longitudinal follow up with students as they enter practice. As the Canadian landscape changes, ongoing assessment of the perspectives and needs of health professionals and students in the health professions is needed to inform policy makers, leaders in practice, curricular needs, and to monitor changes in attitudes and practice patterns over time.

Availability of data and materials

The datasets used and/or analysed during the current study are not publicly available due to small sample sizes, but are available from the corresponding author on reasonable request.

Abbreviations

BCCNM: British Columbia College of Nurses and Midwives

MAiD: Medical assistance in dying

NP: Nurse practitioner

RN: Registered nurse

UBCO: University of British Columbia Okanagan

Nicol J, Tiedemann M. Legislative Summary: Bill C-14: An Act to amend the Criminal Code and to make related amendments to other Acts (medical assistance in dying). Available from: https://lop.parl.ca/staticfiles/PublicWebsite/Home/ResearchPublications/LegislativeSummaries/PDF/42-1/c14-e.pdf .

Downie J, Scallion K. Foreseeably unclear. The meaning of the “reasonably foreseeable” criterion for access to medical assistance in dying in Canada. Dalhousie Law J. 2018;41(1):23–57.

Nicol J, Tiedeman M. Legislative summary of Bill C-7: an act to amend the criminal code (medical assistance in dying). Ottawa: Government of Canada; 2021.

Council of Canadian Academies. The state of knowledge on medical assistance in dying where a mental disorder is the sole underlying medical condition. Ottawa; 2018. Available from: https://cca-reports.ca/wp-content/uploads/2018/12/The-State-of-Knowledge-on-Medical-Assistance-in-Dying-Where-a-Mental-Disorder-is-the-Sole-Underlying-Medical-Condition.pdf .

Council of Canadian Academies. The state of knowledge on advance requests for medical assistance in dying. Ottawa; 2018. Available from: https://cca-reports.ca/wp-content/uploads/2019/02/The-State-of-Knowledge-on-Advance-Requests-for-Medical-Assistance-in-Dying.pdf .

Council of Canadian Academies. The state of knowledge on medical assistance in dying for mature minors. Ottawa; 2018. Available from: https://cca-reports.ca/wp-content/uploads/2018/12/The-State-of-Knowledge-on-Medical-Assistance-in-Dying-for-Mature-Minors.pdf .

Health Canada. Third annual report on medical assistance in dying in Canada 2021. Ottawa; 2022. [cited 2023 Oct 23]. Available from: https://www.canada.ca/en/health-canada/services/medical-assistance-dying/annual-report-2021.html .

Banner D, Schiller CJ, Freeman S. Medical assistance in dying: a political issue for nurses and nursing in Canada. Nurs Philos. 2019;20(4): e12281.

Pesut B, Thorne S, Stager ML, Schiller CJ, Penney C, Hoffman C, et al. Medical assistance in dying: a review of Canadian nursing regulatory documents. Policy Polit Nurs Pract. 2019;20(3):113–30.

College of Registered Nurses of British Columbia. Scope of practice for registered nurses [Internet]. Vancouver; 2018. Available from: https://www.bccnm.ca/Documents/standards_practice/rn/RN_ScopeofPractice.pdf .

Pesut B, Thorne S, Schiller C, Greig M, Roussel J, Tishelman C. Constructing good nursing practice for medical assistance in dying in Canada: an interpretive descriptive study. Global Qual Nurs Res. 2020;7:2333393620938686. https://doi.org/10.1177/2333393620938686 .

Pesut B, Thorne S, Schiller CJ, Greig M, Roussel J. The rocks and hard places of MAiD: a qualitative study of nursing practice in the context of legislated assisted death. BMC Nurs. 2020;19:12. https://doi.org/10.1186/s12912-020-0404-5 .

Pesut B, Greig M, Thorne S, Burgess M, Storch JL, Tishelman C, et al. Nursing and euthanasia: a narrative review of the nursing ethics literature. Nurs Ethics. 2020;27(1):152–67.

Pesut B, Thorne S, Storch J, Chambaere K, Greig M, Burgess M. Riding an elephant: a qualitative study of nurses’ moral journeys in the context of Medical Assistance in Dying (MAiD). J Clin Nurs. 2020;29(19–20):3870–81.

Lamb C, Babenko-Mould Y, Evans M, Wong CA, Kirkwood KW. Conscientious objection and nurses: results of an interpretive phenomenological study. Nurs Ethics. 2018;26(5):1337–49.

Wright DK, Chan LS, Fishman JR, Macdonald ME. “Reflection and soul searching:” Negotiating nursing identity at the fault lines of palliative care and medical assistance in dying. Soc Sci Med. 2021;289:114366.

Beuthin R, Bruce A, Scaia M. Medical assistance in dying (MAiD): Canadian nurses’ experiences. Nurs Forum. 2018;54(4):511–20.

Bruce A, Beuthin R. Medically assisted dying in Canada: “Beautiful Death” is transforming nurses’ experiences of suffering. Can J Nurs Res. 2020;52(4):268–77. https://doi.org/10.1177/0844562119856234 .

Canadian Nurses Association. Code of ethics for registered nurses. Ottawa; 2017. Available from: https://www.cna-aiic.ca/en/nursing/regulated-nursing-in-canada/nursing-ethics .

Canadian Nurses Association. National nursing framework on Medical Assistance in Dying in Canada. Ottawa: 2017. Available from: https://www.virtualhospice.ca/Assets/cna-national-nursing-framework-on-maidEng_20170216155827.pdf .

Pesut B, Thorne S, Greig M. Shades of gray: conscientious objection in medical assistance in dying. Nursing Inq. 2020;27(1): e12308.

Durojaiye A, Ryan R, Doody O. Student nurse education and preparation for palliative care: a scoping review. PLoS ONE. 2023. https://doi.org/10.1371/journal.pone.0286678 .

McMechan C, Bruce A, Beuthin R. Canadian nursing students’ experiences with medical assistance in dying | Les expériences d’étudiantes en sciences infirmières au regard de l’aide médicale à mourir. Qual Adv Nurs Educ - Avancées en Formation Infirmière. 2019;5(1). https://doi.org/10.17483/2368-6669.1179 .

Adler M, Ziglio E. Gazing into the oracle. The Delphi method and its application to social policy and public health. London: Jessica Kingsley Publishers; 1996.

Keeney S, Hasson F, McKenna H. Consulting the oracle: ten lessons from using the Delphi technique in nursing research. J Adv Nurs. 2006;53(2):205–12.

Keeney S, Hasson F, McKenna H. The Delphi technique in nursing and health research. 1st ed. Wiley; 2011.

Willis GB. Cognitive interviewing: a tool for improving questionnaire design. 1st ed. Thousand Oaks: Sage; 2005.

Lamb C, Evans M, Babenko-Mould Y, Wong CA, Kirkwood EW. Conscience, conscientious objection, and nursing: a concept analysis. Nurs Ethics. 2017;26(1):37–49.

Lamb C, Evans M, Babenko-Mould Y, Wong CA, Kirkwood K. Nurses’ use of conscientious objection and the implications of conscience. J Adv Nurs. 2018;75(3):594–602.

de Vaus D. Surveys in social research. 6th ed. Abingdon, Oxon: Routledge; 2014.

Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioral research: A primer. Front Public Health. 2018;6:149. https://doi.org/10.3389/fpubh.2018.00149 .

Puchta C, Potter J. Focus group practice. 1st ed. London: Sage; 2004.

Book   Google Scholar  

Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. 5th ed. Oxford: Oxford University Press; 2015.

Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Adesina O, DeBellis A, Zannettino L. Third-year Australian nursing students’ attitudes, experiences, knowledge, and education concerning end-of-life care. Int J of Palliative Nurs. 2014;20(8):395–401.

Bator EX, Philpott B, Costa AP. This moral coil: a cross-sectional survey of Canadian medical student attitudes toward medical assistance in dying. BMC Med Ethics. 2017;18(1):58.

Beuthin R, Bruce A, Scaia M. Medical assistance in dying (MAiD): Canadian nurses’ experiences. Nurs Forum. 2018;53(4):511–20.

Brown J, Goodridge D, Thorpe L, Crizzle A. What is right for me, is not necessarily right for you: the endogenous factors influencing nonparticipation in medical assistance in dying. Qual Health Res. 2021;31(10):1786–1800.

Falconer J, Couture F, Demir KK, Lang M, Shefman Z, Woo M. Perceptions and intentions toward medical assistance in dying among Canadian medical students. BMC Med Ethics. 2019;20(1):22.

Green G, Reicher S, Herman M, Raspaolo A, Spero T, Blau A. Attitudes toward euthanasia—dual view: Nursing students and nurses. Death Stud. 2022;46(1):124–31.

Hosseinzadeh K, Rafiei H. Nursing student attitudes toward euthanasia: a cross-sectional study. Nurs Ethics. 2019;26(2):496–503.

Ozcelik H, Tekir O, Samancioglu S, Fadiloglu C, Ozkara E. Nursing students’ approaches toward euthanasia. Omega (Westport). 2014;69(1):93–103.

Canning SE, Drew C. Canadian nursing students’ understanding, and comfort levels related to medical assistance in dying. Qual Adv Nurs Educ - Avancées en Formation Infirmière. 2022;8(2). https://doi.org/10.17483/2368-6669.1326 .

Edo-Gual M, Tomás-Sábado J, Bardallo-Porras D, Monforte-Royo C. The impact of death and dying on nursing students: an explanatory model. J Clin Nurs. 2014;23(23–24):3501–12.

Freeman LA, Pfaff KA, Kopchek L, Liebman J. Investigating palliative care nurse attitudes towards medical assistance in dying: an exploratory cross-sectional study. J Adv Nurs. 2020;76(2):535–45.

Brown J, Goodridge D, Thorpe L, Crizzle A. “I am okay with it, but I am not going to do it:” the exogenous factors influencing non-participation in medical assistance in dying. Qual Health Res. 2021;31(12):2274–89.

Dimoula M, Kotronoulas G, Katsaragakis S, Christou M, Sgourou S, Patiraki E. Undergraduate nursing students’ knowledge about palliative care and attitudes towards end-of-life care: A three-cohort, cross-sectional survey. Nurs Educ Today. 2019;74:7–14.

Matchim Y, Raetong P. Thai nursing students’ experiences of caring for patients at the end of life: a phenomenological study. Int J Palliative Nurs. 2018;24(5):220–9.

Canadian Institute for Health Research. Sex and gender in health research [Internet]. Ottawa: CIHR; 2021 [cited 2023 Oct 23]. Available from: https://cihr-irsc.gc.ca/e/50833.html .

Canadian Nurses’ Association. Nursing statistics. Ottawa: CNA; 2023 [cited 2023 Oct 23]. Available from: https://www.cna-aiic.ca/en/nursing/regulated-nursing-in-canada/nursing-statistics .

Krumpal I. Determinants of social desirability bias in sensitive surveys: a literature review. Qual Quant. 2013;47(4):2025–47. https://doi.org/10.1007/s11135-011-9640-9 .

Ferri P, Di Lorenzo R, Stifani S, Morotti E, Vagnini M, Jiménez Herrera MF, et al. Nursing student attitudes toward dying patient care: a European multicenter cross-sectional study. Acta Bio Medica Atenei Parmensis. 2021;92(S2): e2021018.

PubMed   PubMed Central   Google Scholar  

Beuthin R, Bruce A. Medical assistance in dying (MAiD): Ten things leaders need to know. Nurs Leadership. 2018;31(4):74–81.

Thiele T, Dunsford J. Nurse leaders’ role in medical assistance in dying: a relational ethics approach. Nurs Ethics. 2019;26(4):993–9.


Acknowledgements

We would like to acknowledge the faculty and students who generously contributed their time to this work.

JS received a student traineeship through the Principal Research Chairs program at the University of British Columbia Okanagan.

Author information

Authors and affiliations

School of Health and Human Services, Selkirk College, Castlegar, BC, Canada

Jocelyn Schroeder & Barbara Pesut

School of Nursing, University of British Columbia Okanagan, Kelowna, BC, Canada

Barbara Pesut, Lise Olsen, Nelly D. Oelke & Helen Sharp


Contributions

JS and BP made substantial contributions to the conception of the work; data acquisition, analysis, and interpretation; and drafting and substantively revising the work. LO and NDO made substantial contributions to the conception of the work; data acquisition, analysis, and interpretation; and substantively revising the work. HS made substantial contributions to drafting and substantively revising the work. All authors have approved the submitted version and agreed to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.

Authors’ information

JS conducted this study as part of their graduate requirements in the School of Nursing, University of British Columbia Okanagan.

Corresponding author

Correspondence to Barbara Pesut .

Ethics declarations

Ethics approval and consent to participate

The research was approved by the Selkirk College Research Ethics Board (REB) ID # 2021–011 and the University of British Columbia Behavioral Research Ethics Board ID # H21-01181.

All participants provided written and informed consent through approved consent processes. Research was conducted in accordance with the Declaration of Helsinki.

Consent for publication

Not applicable.

Competing interests

The authors declare they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Supplementary Material 3.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Schroeder, J., Pesut, B., Olsen, L. et al. Developing a survey to measure nursing students’ knowledge, attitudes and beliefs, influences, and willingness to be involved in Medical Assistance in Dying (MAiD): a mixed method modified e-Delphi study. BMC Nurs 23 , 326 (2024). https://doi.org/10.1186/s12912-024-01984-z

Download citation

Received : 24 October 2023

Accepted : 28 April 2024

Published : 14 May 2024

DOI : https://doi.org/10.1186/s12912-024-01984-z


  • Medical assistance in dying (MAiD)
  • End of life care
  • Student nurses
  • Nursing education

BMC Nursing

ISSN: 1472-6955


Evaluation of online job portals for HR recruitment selection using AHP in two wheeler automotive industry: a case study

  • ORIGINAL ARTICLE
  • Published: 12 May 2024


  • S. M. Vadivel   ORCID: orcid.org/0000-0002-5287-3693 1 &
  • Rohan Sunny   ORCID: orcid.org/0009-0002-2347-3081 2  


Automotive companies are booming worldwide. To survive in a highly competitive market, every organization tries to establish a trademark for itself. In this research, we examined how recruitment selection in a two-wheeler automotive company enhances organizational performance and thereby secures the company's future growth. In today's fast-paced, globally integrated world, human resources are among the most important factors of production, and economic competitiveness is preserved and improved by properly selecting and developing these resources. The main aim of this study is to identify the best online job portal for recruitment at a two-wheeler company and to suggest an HR strategy that resonates with the company's values and culture. We selected six criteria and six popular online job portals, with a sample of 15 candidates. The findings reveal that the AHP method yields significant results in selecting the best portal, helping the HR manager to finalize decision-making strategies. In the managerial implications section, we outline a functional and effective HR strategy that can attract, engage, and retain top talent in the organization.
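The core AHP computation referred to in the abstract can be sketched briefly. The pairwise-comparison matrix and weights below are purely illustrative (the study's actual criteria scores are not reproduced here); the sketch follows Saaty's standard procedure: derive a priority vector from the pairwise matrix, then check judgment consistency via the consistency index (CI) and consistency ratio (CR).

```python
# Illustrative AHP sketch (not the study's data): derive priority weights
# from a pairwise-comparison matrix and check consistency (Saaty's method).

def ahp_weights(matrix, iters=200):
    """Approximate the principal eigenvector by power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(aw)
        w = [x / s for x in aw]
    return w

def consistency_ratio(matrix, weights):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(matrix)
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam_max = sum(aw[i] / weights[i] for i in range(n)) / n  # estimate of lambda_max
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's random index
    return ci / ri

# A perfectly consistent 4x4 example: A[i][j] = w[i] / w[j]
true_w = [0.4, 0.3, 0.2, 0.1]
A = [[wi / wj for wj in true_w] for wi in true_w]
w = ahp_weights(A)
print([round(x, 3) for x in w])            # → [0.4, 0.3, 0.2, 0.1]
print(consistency_ratio(A, w))             # ≈ 0 (CR < 0.1 is conventionally acceptable)
```

In practice each candidate portal would be scored this way on every criterion, and the criterion weights used to combine the per-criterion priorities into a final ranking.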


Availability of data and material

Not applicable.

Abbreviations

AHP: Analytic hierarchy process

AI: Artificial intelligence

ANOVA: Analysis of variance

CHRO: Chief Human Resources Officer

CI: Consistency index

CV: Curriculum vitae

CR: Consistency ratio

DM: Decision making

FDP: Faculty Development Programme

HLM: Hierarchical linear modelling

HR: Human resources

R&D: Research and Development

RI: Randomized index

SEM: Structural equation modelling

SEO: Search engine optimization

TBL: Triple bottom line

TOPSIS: Technique for order preference by similarity

λmax: Maximum eigenvalue

The normalized value of the ith criterion for the jth alternative

The normalized value of the jth criterion for the ith alternative

The number of alternatives for a certain MCDM problem

The number of criteria for a certain MCDM problem



Acknowledgements

The authors would like to express their gratitude to two wheeler Automotive Industries in Chennai, Tamil Nadu, India, for their invaluable assistance and cooperation. We greatly acknowledge Ms. Ruchi Mishra, Research scholar from NIT Karnataka, for editing this manuscript in better form.

No funding was received for this research.

Author information

Authors and affiliations

Operations Management Division, Vellore Institute of Technology Chennai, Vandalur-Kelambakkam Road, Chennai, 600127, India

S. M. Vadivel

Vellore Institute of Technology Chennai, Vandalur-Kelambakkam Road, Chennai, 600127, India

Rohan Sunny


Contributions

S M Vadivel: Methodology, Writing—review & editing, Supervision. Rohan Sunny: Data Curation, Writing—original draft preparation.

Corresponding author

Correspondence to S. M. Vadivel .

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Ethics approval and consent to participate

This study involved human participants (interview candidates) in the evaluation of online job portals at a two-wheeler automotive company operating in Chennai, Tamil Nadu, India.

Consent for publication

Not applicable.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Vadivel, S.M., Sunny, R. Evaluation of online job portals for HR recruitment selection using AHP in two wheeler automotive industry: a case study. Int J Syst Assur Eng Manag (2024). https://doi.org/10.1007/s13198-024-02358-z

Download citation

Received : 04 December 2023

Revised : 07 April 2024

Accepted : 25 April 2024

Published : 12 May 2024

DOI : https://doi.org/10.1007/s13198-024-02358-z


  • Analytical hierarchy process (AHP)
  • Online job portals
  • Automobile two wheeler industries
  • Artificial intelligence (AI)
  • HR recruitment

The University of Melbourne

Soanes, K., Rytwinski, T., Fahrig, L., Huijser, M.P., Jaeger, J.A.G., Teixeira, F.T., van der Ree, R., & van der Grift, E.A. (2024) Data from: Do wildlife crossing structures mitigate the barrier effect of roads on animal movement? A global assessment .

Dataset supporting the paper "Do wildlife crossing structures mitigate the barrier effect of roads on animal movement? A global assessment" accepted for publication in Journal of Applied Ecology, 2024 (Soanes et al). [paper to be linked following publication]

Includes the 313 studies investigating the effect of wildlife crossing structures on animal movement across the road, as well as data supporting the creation of Figure 5 (state of knowledge table) and Figure 6 (quantitative assessment of change in movement). Each study was categorised into one of four evaluation types: prevent movement decline, restore movement, improve movement, and allow movement.

Study abstract:

1. The widespread impacts of roads on animal movement have led to the search for innovative mitigation tools. Wildlife crossing structures (tunnels or bridges) are a common approach, however their effectiveness remains unclear beyond isolated case studies.

2. We conduct an extensive literature review and synthesis to address the question: What is the evidence that wildlife crossing structures mitigate the barrier effect of roads on wildlife movement? In particular we investigated whether wildlife crossing structures prevented an expected decline in cross-road movement, restored movement to pre-construction conditions, or improved movement relative to taking no action.

3. In an analysis of 313 studies, only 14% evaluated whether wildlife crossing structures resulted in a change in animal movement across roads. We identified critical problems in existing studies, especially the lack of benchmarks (e.g., pre-road, pre-mitigation or control data) and the use of biased comparisons.

4. Wildlife crossing structures allowed cross-road movement in 98% of datasets and improved movement in ~60%. In contrast, the decline of wildlife movement was prevented in fewer than 40% of cases. For most structure types and species groups there was insufficient evidence to draw generalisable conclusions.

5. Synthesis and Applications: The evidence to date suggests that wildlife crossing structures can mitigate the barrier effect of roads on wildlife movement, but in many cases have been poorly implemented or evaluated. The most supported measures were: the addition of ledges and vegetation cover to increase movement for small mammals; underpasses to prevent the decline in movement of ungulates following road construction; and improving road-crossing for arboreal mammals using canopy bridges and vegetated medians. We strongly recommend that future use of crossing structures closely adhere to species-specific, best-practice guidelines to improve implementation, and be paired with a thorough evaluation that includes benchmark comparisons, particularly for measures and species that lack sufficient evidence (e.g. invertebrates, amphibians, reptiles, birds, and overpasses).


  • Conservation and biodiversity
  • Environmental management
  • Wildlife and habitat management
  • Environmental rehabilitation and restoration

CC BY 4.0

  • Open access
  • Published: 14 May 2024

Quantitative evaluation of vertical control in orthodontic camouflage treatment for skeletal class II with hyperdivergent facial type

  • Yan-Ning Guo 1 , 2   na1 ,
  • Sheng-Jie Cui 1   na1 ,
  • Jie-Ni Zhang 1 ,
  • Yan-Heng Zhou 1 &
  • Xue-Dong Wang 1  

Head & Face Medicine volume  20 , Article number:  31 ( 2024 ) Cite this article

Metrics details

In this study, we sought to quantify the influence of vertical control assisted by a temporary anchorage device (TAD) on orthodontic treatment efficacy for skeletal class II patients with a hyperdivergent facial type and probe into the critical factors of profile improvement.

A total of 36 adult patients with skeletal class II and a hyperdivergent facial type were included in this retrospective case–control study. To exclude the effect of sagittal anchorage reinforcement, the patients were divided into two groups: a maxillary maximum anchorage (MMA) group ( N  = 17), in which TADs were only used to help with anterior tooth retraction, and the MMA with vertical control (MMA + VC) group ( N  = 19), for which TADs were also used to intrude the maxillary molars and incisors. The treatment outcome was evaluated using dental, skeletal, and soft-tissue-related parameters via a cephalometric analysis and cast superimposition.

A significant decrease in ANB ( P  < 0.05 for both groups), the retraction and uprighting of the maxillary and mandibular incisors, and the retraction of protruded upper and lower lips were observed in both groups. Moreover, a significant intrusion of the maxillary molars was observed via the cephalometric analysis (− 1.56 ± 1.52 mm, P  < 0.05) and cast superimposition (− 2.25 ± 1.03 mm, P  < 0.05) of the MMA + VC group but not the MMA group, which resulted in a remarkable decrease in the mandibular plane angle (− 1.82 ± 1.38°, P  < 0.05). The Z angle (15.25 ± 5.30°, P  < 0.05) and Chin thickness (− 0.97 ± 0.45°, P  < 0.05) also improved dramatically in the MMA + VC group, indicating a better profile and a relaxed mentalis. Multivariate regression showed that the improvement in the soft tissue was closely related to the counterclockwise rotation of the mandible plane ( P  < 0.05).

Conclusions

TAD-assisted vertical control can achieve intrusion of approximately 2 mm for the upper first molars and induce mandibular counterclockwise rotation of approximately 1.8°. Moreover, it is especially important for patients without sufficient retraction of the upper incisors or a satisfactory chin shape.

Peer Review reports

For adult patients with severe class II malocclusion accompanied by a hyperdivergent growth pattern, orthognathic surgery is usually the optimal therapy to improve facial aesthetics and masticatory function [ 1 , 2 ]. Nevertheless, some patients refuse surgery due to its possible risks and high cost. Orthodontic camouflage treatment provides an alternative for such patients [ 3 , 4 ].

To improve the profile of this kind of patient, both sagittal retraction and vertical control are important. Several studies have found and confirmed the importance of vertical control in orthodontic treatment for skeletal class II malocclusion [ 5 , 6 , 7 ]. However, varying treatment methods are used. For adolescent patients, the most effective approach is often to utilize their vertical growth potential to guide their facial development in the desired direction. Jamilian et al. applied a modified functional orthodontic appliance to induce sagittal and vertical changes in the mandible, achieving significant facial improvement for a patient with severe skeletal class II [ 8 ].

On the other hand, for adult patients lacking growth potential, active intrusion of posterior teeth is required to intervene vertically. Early on, high-pull headgear was the most common vertical control method, but this approach relied heavily on patient compliance, and it involved the application of intermittent force, making it relatively unreliable [ 9 , 10 , 11 ].

The emergence of TADs has greatly improved the convenience and efficiency of treatment [ 12 , 13 ]. Compared to headgear, TAD-assisted vertical control can provide more dental intrusion and counterclockwise rotation of the mandibular plane, which contributes to further improvement of the profile [ 14 , 15 ]. Additionally, when active intrusion is applied, we typically utilize a sustained light force (approximately 50 g), which is more favorable for the remodeling of periodontal tissues than the intermittent heavy force exerted by headgear.

However, the mini-implants placed in the maxilla’s posterior region can also provide strong sagittal anchorage. Several studies have shown that maximum anchorage itself can achieve a better treatment outcome and improve the profile [ 16 , 17 , 18 ]. These findings have prompted the following questions: If sagittal retraction can already lead to sufficient facial aesthetics, is vertical tooth movement still necessary? To what extent can vertical movements benefit the facial profile?

Our research group has paid close attention to the efficacy of TAD-assisted vertical control in orthodontic camouflage treatments for patients with skeletal class II malocclusion. We have published several case reports and long-term follow-up studies showing that vertical control significantly improved the profiles of patients with skeletal class II malocclusion and a hyperdivergent facial type, achieving good long-term stability [ 19 , 20 , 21 , 22 , 23 ]. We believe that specifying how the active intrusion of upper dentition contributes to these craniofacial improvements will provide more information about the ability and limits of TAD-assisted vertical control and broaden the understanding of orthodontic camouflage treatment. Therefore, we included a control group whose TADs were used only to reinforce maxillary sagittal anchorage in order to exclude the influence of sagittal retraction.

With this retrospective case–control study, we aimed to quantify the effectiveness of TAD-assisted vertical control in improving dentoalveolar malformation and the soft tissue profile in adult patients with a severe skeletal class II hyperdivergent pattern, and to justify the necessity of active intrusion. We believe this article provides specific references for orthodontists and general dentists concerning the camouflage treatment of patients with skeletal class II malocclusion.

This study was based on retrospective data obtained from orthodontic records at the Peking University School and Hospital of Stomatology, and it was approved by the institution’s biomedical ethics committee (approval number: PKUSSIRB-201630096, retrospectively registered). The patients included in this study received orthodontic treatment between 2006 and 2018.

The study’s sample selection was based on the following inclusion criteria: good-quality orthodontic records, the presence of permanent dentition, age > 18 years, a convex profile, skeletal class II (ANB > 5°), and a hyperdivergent skeletal pattern (FMA > 28°) [ 24 ]. The exclusion criteria included the following: dental anomalies in size, number, shape, or structure; permanent tooth loss; orthodontic–orthognathic combined surgery treatment; and Botox injection or prosthesis implantation before or during orthodontic treatment.

Treatment protocols

All the participants underwent systematic periodontal and endodontic assessments and therapies before orthodontic interventions. A straight-wire MBT technique was utilized after the extraction of four premolars from all patients. Braces and archwires were obtained from TP Orthodontics (La Porte, IN, USA). The alignment and leveling phases involved initial bracket bonding followed by sequential archwire progression through 0.014 in. NiTi, 0.016 in. NiTi, 0.016 in. × 0.022 in. NiTi, and 0.019 in. × 0.025 in. NiTi archwires. During the space-closing phase, a 0.019 in. × 0.025 in. stainless steel archwire was applied using a conventional sliding mechanism. This phase was terminated upon the complete closure of the premolar spaces. The patients’ dentition was finely adjusted before debonding. Miniscrews (diameter: 1.5 mm; length: 7 mm; Zhongbang Medical Treatment Appliance, Xi’an, China) were surgically inserted into the alveolar ridge.

The patients were divided into two groups: (1) the maxillary maximum anchorage (MMA) group, in which TADs were implanted only at the bilateral buccal side of the alveolar bone, between the roots of the upper premolar and the upper first molar or between the upper first molar and the upper second molar; and (2) the maxillary maximum anchorage with vertical control (MMA + VC) group, in which TADs were implanted into the bilateral buccal and lingual sides of the alveolar bone, between the roots of the upper first molar and the upper second molar, to intrude the upper molars with or without the TADs implanted in the anterior segment for incisor intrusion (Fig.  1 ).

Figure 1. Representative image of intraoral devices. A: TAD-assisted intrusion of the upper anterior teeth. B: Buccal view of the posterior intrusion devices. C: Palatal view of the posterior intrusion devices.

Sample size calculation

In this study, the effect size of the primary outcome was expected to be 2.32, the between-group difference in mandibular counterclockwise rotation (the decrease in the FMA value) observed in our preliminary study. The sample size was calculated using online software ( http://hedwig.mgh.harvard.edu/sample_size/ ), assuming a 5% type I error rate and a 20% type II error rate (80% power). The calculation indicated that at least 10 patients were needed in each group.
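For reference, the generic normal-approximation formula for a two-group comparison, n = 2((z₁₋α/₂ + z₁₋β)/d)² per group with d the standardized effect size, can be sketched as follows (our own illustration, not the cited online tool; the study's n ≥ 10 was derived from its preliminary data):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size_d) ** 2)
```

For a standardized effect size of 1.0 this yields the familiar 16 patients per group.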

In total, 36 patients were selected for the current study. The MMA group comprised 17 patients (14 females, 3 males) with a mean age of 24.18 ± 3.83 years and a mean treatment duration of 34.4 ± 12.8 months. The MMA + VC group consisted of 19 patients (16 females, 3 males) aged 25.00 ± 4.99 years, whose mean treatment duration was 34.7 ± 6.8 months. No significant difference in the patients’ gender, age, or treatment duration was observed between the groups (Additional Table  1 ).

Cephalometric analysis

Pre-treatment and post-treatment lateral cephalograms were collected, digitized, and superimposed using Dolphin 11.0 software (Dolphin Imaging, Chatsworth, CA, USA). An investigator who was blinded to group allocation obtained the measurements, which a second blinded investigator checked for accuracy. Any disagreements between the investigators were resolved through reevaluation until both were satisfied. The cephalometric analysis used 29 variables: 8 skeletal, 12 dental, and 9 soft-tissue-related measurements. Figure 2 depicts the landmarks and key variables used in this study, and Additional Table 2 provides their definitions.

Figure 2. Tracing of a pre-treatment cephalometric radiograph.

Dental cast analysis

Pre-treatment and post-treatment dental casts were scanned using a 3Shape scanner (3Shape D, Copenhagen, Denmark) and measured in a double-blinded manner by a trained orthodontist using Geomagic 13.0 software (Geomagic Qualify, Durham, NC, USA). As Fig. 3 shows, the superimposition of the dental casts was based on the stable structure of the palate. A coordinate system was built from the anatomical occlusal plane and the midline of the palate, and tooth movements were analyzed in two dimensions: anterior–posterior (X) and intrusion–extrusion (Z), with posterior and extrusive movement defined as positive.
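The two-dimensional decomposition described above amounts to projecting each landmark's displacement onto the coordinate axes; a minimal numpy sketch (hypothetical code, assuming the casts are already superimposed and the axes already built):

```python
import numpy as np

# Assumed unit axes of the cast coordinate system: X points posteriorly and
# Z points occlusally, so posterior and extrusive movements come out positive,
# matching the sign convention in the text.
X_AXIS = np.array([1.0, 0.0, 0.0])
Z_AXIS = np.array([0.0, 0.0, 1.0])

def tooth_movement(pre_landmark, post_landmark):
    """Return the (sagittal, vertical) components in mm of a landmark's
    displacement between superimposed pre- and post-treatment casts."""
    d = np.asarray(post_landmark, float) - np.asarray(pre_landmark, float)
    return float(d @ X_AXIS), float(d @ Z_AXIS)
```

A molar cusp that moves 1.5 mm posteriorly and 2.0 mm apically would report (1.5, -2.0), i.e., retraction with intrusion.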

Figure 3. Superimposition of the dental casts. A: The pre-treatment maxillary model. B: The post-treatment maxillary model. C: Superimposition based on the stable structure of the palate. D: Transfer of corresponding landmarks.

To evaluate the method’s error, 10 post-treatment lateral cephalograms and digital casts were randomly selected and remeasured by the same examiners two weeks after the first measurement was obtained. The intraclass correlation coefficient (ICC) was used to assess intra-examiner reliability and the reproducibility of all linear and angular measurements.
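The two-way random, single-measures, absolute-agreement ICC (ICC(2,1)) used for this check can be computed from the two-way ANOVA mean squares; the following numpy sketch (our own implementation, not the authors' software) takes a subjects-by-measurements matrix:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    `data` is an (n_subjects, k_raters_or_sessions) array."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

Identical repeated measurements give 1.0, while a constant offset between sessions lowers the absolute-agreement value.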

Statistical analysis

The intraclass correlation coefficient (ICC) was evaluated using a two-way random model. Descriptive statistics for the dental cast and radiographic measurements were calculated for both the first and second measurements. After Shapiro–Wilk tests confirmed normality, comparisons were performed using Student’s t tests: pre-treatment skeletal, dental, and soft-tissue-related variables were compared between the groups using independent-sample t tests; the same variables were compared from pre-treatment to post-treatment using paired t tests; and the differences in treatment changes (for both the lateral cephalograms and the dental casts) between the MMA and MMA + VC groups were evaluated using independent-sample t tests. Multiple linear regression analysis was used to test the correlation between the independent variables of craniofacial structure and the dependent variable, the Z angle. Both groups’ treatment changes were normalized to the mean variance, and a backward method was then used to screen the independent variables (F-entry probability 0.05, removal criterion 0.1). The statistical tests were performed with SPSS 18.0 software (IBM Corp., Armonk, NY, USA). Results were considered statistically significant at P < 0.05.
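The comparison pipeline for a single variable can be mirrored with scipy (a sketch under our own naming; the study itself used SPSS):

```python
from scipy import stats

def compare_groups(pre_a, post_a, pre_b, post_b):
    """For one cephalometric variable: paired t tests within each group
    (pre vs. post) and an independent-sample t test on the treatment
    changes between groups. Normality would first be checked with
    stats.shapiro on each sample, as in the study."""
    change_a = [post - pre for pre, post in zip(pre_a, post_a)]
    change_b = [post - pre for pre, post in zip(pre_b, post_b)]
    return {
        "within_A_p": stats.ttest_rel(pre_a, post_a).pvalue,
        "within_B_p": stats.ttest_rel(pre_b, post_b).pvalue,
        "between_p": stats.ttest_ind(change_a, change_b).pvalue,
    }
```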

The groups were similar in age at the beginning of the orthodontic treatment (Additional Table 1). The ICC values (0.810–0.997) indicated good reproducibility of the measurements, as Additional Table 3 shows.

The two groups showed similar mandibular retrognathia and hyperdivergent skeletal patterns. However, differences were observed in several variables, such as the Z angle, ANB, and L1-NB (mm). These differences indicated that the patients in the MMA + VC group had a more convex profile and more severe malocclusion (Table  1 ).

TAD-assisted vertical control better improved patients’ profiles

TADs’ efficacy in improving therapeutic outcomes is well established. However, whether, and to what extent, TAD-assisted vertical control can help patients with skeletal class II achieve better results from camouflage orthodontic treatment compared to simple reinforcement of the maxillary anchorage is unclear.

For most of the patients included, a convex profile was the main complaint. Therefore, we first analyzed the improvements in soft-tissue-related variables for both groups (Tables 2 and 3). We observed a similar trend of lip retraction (the UL-SnV angle and distance and the LL-SnV distance) and soft tissue relaxation (UL thickness and LL thickness) in both groups. However, the changes in the Z angle and chin thickness showed that patients in the MMA + VC group experienced greater improvement in their profiles and mentalis relaxation. (Figures 4 and 5 show representative cases from the two groups.) These results show that TAD-assisted vertical control further improved the patients’ profiles, but how this advantage was achieved remained unclear.

Figure 4. A representative case from the MMA group. The upper anterior incisors were restored using a ceramic veneer.

Figure 5. A representative case from the MMA + VC group.

TAD-assisted vertical control contributed to maxillary retraction and mandibular counterclockwise rotation

Remarkable decreases in SNA and ANB were observed in both groups. Furthermore, the decrease in ANB in the MMA + VC group was significantly greater than in the MMA group, showing effective maxillary retraction, which partly explains the marked change in the soft tissue (Tables 4 and 5).

Additionally, no significant pre- to post-treatment difference in the mandibular plane angle was observed in the MMA group; indeed, the lower facial height (ANS-Me) even increased slightly. Meanwhile, the MP-SN and FMA values decreased significantly in the MMA + VC group, suggesting that TAD-assisted vertical control effectively achieved mandibular counterclockwise rotation; the decreases in MP-SN and FMA differed significantly between the MMA and MMA + VC groups (Table 5). A marked improvement in PFH/AFH was also observed, indicating an improvement in the hyperdivergent facial type. Thus, the application of TAD-assisted vertical control achieved a certain extent of mandibular counterclockwise rotation, which also helped improve patients’ profiles (Fig. 6).

Figure 6. Schematic graph of TAD-assisted vertical control during orthodontic camouflage treatment for patients with skeletal class II and a hyperdivergent facial type.

TADs achieved substantial vertical control via the intrusion of maxillary dentition

Despite the satisfactory sagittal retraction of the incisors in both groups (Table 6), the cephalometric analysis showed significant intrusion of the upper molar relative to the palatal plane (U6-PP) in the MMA + VC group but not in the MMA group. Similarly, the upper incisor showed greater intrusion (U1-PP) in the MMA + VC group, although the difference did not reach significance (Table 7). These results were confirmed by dental cast superimposition (Table 8): compared to the MMA group, the MMA + VC group experienced significant intrusion of the upper dentition. However, the cephalometric analysis also revealed significant lower molar extrusion (L6-MP) relative to the mandibular plane in both groups during orthodontic treatment. Thus, tooth movement in the vertical dimension manifested as intrusion of the upper dentition in the MMA + VC group and extrusion of the lower molars in both groups.

Multivariate regression analysis revealed the key factors of profile improvement

Since the changes occurred at the same time, assessing which factors played the most important role in altering the patients’ soft tissue profile was difficult. Therefore, we selected the Z angle—one of the most representative and remarkably changed profile indicators—as the dependent variable for our analysis, and we conducted multiple linear regression of the standardized bone, tooth, and soft tissue measurements.

Considering the interference of collinearity, we selected the following representative indicators: ANB, MP-SN, PFH/AFH, U1/SN, IMPA, U1-PP, U6-PP, L1-MP, L6-MP, Pog-NB, UL thickness, LL thickness, and Chin thickness.

The results showed that Y = 0.000576 − 0.416a − 0.340b + 0.403c, where Y denotes the Z angle and a, b, and c represent MP-SN, U1-SN, and Pog-NB, respectively (Table 9). This finding indicates that the change in the Z angle was negatively correlated with MP-SN and U1-SN and positively correlated with Pog-NB.
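Read as code, the fitted model is a one-line prediction of the standardized Z-angle change (the function name is ours, for illustration):

```python
def predicted_z_angle_change(mp_sn, u1_sn, pog_nb):
    """Evaluate the reported standardized regression
    Y = 0.000576 - 0.416*MP-SN - 0.340*U1-SN + 0.403*Pog-NB,
    where the inputs are standardized treatment changes."""
    return 0.000576 - 0.416 * mp_sn - 0.340 * u1_sn + 0.403 * pog_nb
```

A decrease in MP-SN or U1-SN (a negative standardized change) or an increase in Pog-NB raises the predicted Z-angle improvement.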

Thus, the favorable profile improvement of patients with skeletal class II and a hyperdivergent facial type relied on substantial retraction of the upper incisors, the shape of the chin, and counterclockwise rotation of the mandibular plane.

The efficacy of TAD-assisted vertical control

In this retrospective study, we endeavored to quantify the efficacy of TAD-assisted vertical control in achieving maxillary dental intrusion and consequent mandibular counterclockwise rotation, and we elucidated their pivotal roles in enhancing soft tissue profiles, using the MMA group as the baseline.

Evaluation of the hard tissue showed that, following en-masse retraction with mini-implant anchorage, the MMA group exhibited slight upper molar intrusion (U6: −0.86 ± 0.89 mm) and mandibular counterclockwise rotation (MP-SN: −0.16 ± 1.05°). This result is consistent with the randomized controlled trial conducted by Al-Sibaie et al. [ 25 ] and the controlled clinical trial conducted by Chen et al. [ 14 ], suggesting that TADs in the maxillary alveolar bone can provide some vertical force even during sagittal retraction, necessitating attention to the direction of traction and the vertical position of the anterior teeth to avoid deepening of the overbite. Following active maxillary dental intrusion, the MMA + VC group exhibited greater upper molar intrusion (U6: −2.25 ± 1.03 mm) and mandibular counterclockwise rotation (MP-SN: −1.82 ± 1.38°), slightly lower than the values reported by Ding et al. [ 15 ] and Deguchi et al. [ 26 ]. This difference may be attributable to differences in inclusion criteria: Ding’s study included patients with shallow overbite, and Deguchi’s study included patients with open bite, whereas our study included many patients with normal or even deep overbite. To achieve a favorable overbite after treatment, we intruded not only the molars but also the anterior teeth (U1: −1.30 ± 1.61 mm; U3: −1.81 ± 1.28 mm) with the help of TADs in the anterior segment, a more challenging improvement compared to the aforementioned studies.

In terms of soft tissue evaluation, many previous studies have discussed the main factors contributing to changes in various soft tissue landmarks. For instance, Maetevorakul et al. found that the improvement in incisor angle was most crucial for enhancing lower lip prominence, and that the mandibular plane angle as well as different treatment modalities had significant effects on changes in soft tissue chin prominence [ 27 ]. Regarding the overall assessment of soft tissue profiles, Zhao et al. demonstrated that the Z angle had the best discriminative ability for female adults with Angle class II division 1 malocclusion [ 28 ]. Therefore, in this study, we focused on the Z angle and found that the MMA + VC group showed a significantly greater improvement than the MMA group (15.25 ± 5.30° vs. 10.54 ± 5.11°, P = 0.011), which correlates with the poorer pre-treatment profiles in the MMA + VC group. To better identify which patients require active dental intrusion, we conducted a multiple linear regression analysis and found that this improvement was most closely associated with retraction of the upper anterior teeth, prominence of the pogonion, and counterclockwise rotation of the mandibular plane. We can therefore conclude that vertical control is more necessary for patients with limited space for retraction or poor chin morphology.

Limitations and prospects of TAD-assisted vertical control

Although the occlusal plane’s counterclockwise rotation is considered an effective method to reduce the angle of the mandibular plane [ 29 ], in the current study, we observed a trend of clockwise rotation. However, this unexpected result is consistent with the findings of many similar studies in this field [ 12 , 13 ]. We speculate that this rotation results from the pendulum effect of the upper anterior teeth. Compared with the molars, the upper incisors have less intrusion, suggesting that we must pay particular attention to controlling the occlusal plane.

Additionally, despite TADs’ advantages of simplicity, flexibility, and independence from patient cooperation, they remain an invasive treatment [ 30 , 31 ]. In the current study, six miniscrews were needed to achieve effective vertical control. This approach is not suitable for patients with inadequate bone conditions, and it increases the difficulty of the operation. Therefore, we hope to develop methods that are more convenient and minimally invasive. The use of midpalatal miniscrews and personalized palatal bars may be an alternative option [ 12 ]; however, such an approach would still pose challenges in terms of operation and hygiene maintenance. Accordingly, we hope to further reduce orthodontic devices’ complexity in order to meet the requirements of comfortable treatment.

Methodologically, the current study’s evaluation of muscle response and profile changes was limited to cephalometric analysis. Because soft tissue is measured imprecisely on lateral cephalograms, 3D facial scanning and electromyography could allow a more precise examination of patients’ aesthetic and functional changes. We plan to refine the assessment modalities for both soft and hard tissues, endeavoring to substantiate vertical control’s efficacy and constraints through various methodologies, including randomized controlled trials.

The conclusions of this retrospective study are as follows.

TAD-reinforced maxillary anchorage with vertical control achieves intrusion of approximately 2 mm for the upper first molars.

TAD-reinforced maxillary anchorage with vertical control induces mandibular counterclockwise rotation of approximately 1.8° and improves patients’ hyperdivergent skeletal pattern.

When the upper incisors are not sufficiently retracted or the chin shape is not satisfying, active vertical control should be applied to help patients achieve better profiles.

Taken together, these conclusions demonstrate that TAD-assisted vertical control is essential for patients with skeletal class II malocclusion and a hyperdivergent facial type, and that this approach is a good option for improving occlusion and profiles via orthodontic camouflage treatment.

Data availability

The data sets used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

MMA: Maxillary maximum anchorage

TAD: Temporary anchorage device

VC: Vertical control

Tucker MR. Orthognathic surgery versus orthodontic camouflage in the treatment of mandibular deficiency. J Oral Maxillofac Surg. 1995;53(5):572–8.


Baherimoghaddam T, Oshagh M, Naseri N, Nasrbadi NI, Torkan S. Changes in cephalometric variables after orthognathic surgery and their relationship to patients’ quality of life and satisfaction. J Oral Maxillofac Res. 2014;5(4):e6.


Thomas PM. Orthodontic camouflage versus orthognathic surgery in the treatment of mandibular deficiency. J Oral Maxillofac Surg. 1995;53(5):579–87.

Raposo R, Peleteiro B, Paco M, Pinho T. Orthodontic camouflage versus orthodontic-orthognathic surgical treatment in class II malocclusion: a systematic review and meta-analysis. Int J Oral Maxillofac Surg. 2018;47(4):445–55.

Freitas MR, Lima DV, Freitas KM, Janson G, Henriques JF. Cephalometric evaluation of class II malocclusion treatment with cervical headgear and mandibular fixed appliances. Eur J Orthod. 2008;30(5):477–82.

Jung MH. Vertical control of a class II deep bite malocclusion with the use of orthodontic mini-implants. Am J Orthod Dentofac Orthop. 2019;155(2):264–75.


Peng J, Lei Y, Liu Y, Zhang B, Chen J. Effectiveness of micro-implant in vertical control during orthodontic extraction treatment in class II adults and adolescents after pubertal growth peak: a systematic review and meta-analysis. Clin Oral Investig. 2023;27(5):2149–62.


Jamilian A, Showkatbakhsh R, Rad AT. A novel approach for treatment of mandibular deficiency with vertical growth pattern. Int J Orthod Milwaukee. 2012;23(2):23–7.


Lione R, Franchi L, Laganà G, Cozza P. Effects of cervical headgear and pendulum appliance on vertical dimension in growing subjects: a retrospective controlled clinical trial. Eur J Orthod. 2015;37(3):338–44.

Sambataro S, Rossi O, Bocchieri S, Fastuca R, Oppermann N, Levrini L, et al. Comparison of cephalometric changes in Class II growing patients with increased vertical dimension after high-pull and cervical headgear treatment. Eur J Paediatr Dent. 2023;24(1):36–41.


Ulger G, Arun T, Sayinsu K, Isik F. The role of cervical headgear and lower utility arch in the control of the vertical dimension. Am J Orthod Dentofac Orthop. 2006;130(4):492–501.

Lee J, Miyazawa K, Tabuchi M, Kawaguchi M, Shibata M, Goto S. Midpalatal miniscrews and high-pull headgear for anteroposterior and vertical anchorage control: cephalometric comparisons of treatment changes. Am J Orthod Dentofac Orthop. 2013;144(2):238–50.

Yao CC, Lai EH, Chang JZ, Chen I, Chen YJ. Comparison of treatment outcomes between skeletal anchorage and extraoral anchorage in adults with maxillary dentoalveolar protrusion. Am J Orthod Dentofac Orthop. 2008;134(5):615–24.

Chen M, Li ZM, Liu X, Cai B, Wang DW, Feng ZC. Differences of treatment outcomes between self-ligating brackets with microimplant and headgear anchorages in adults with bimaxillary protrusion. Am J Orthod Dentofac Orthop. 2015;147(4):465–71.

Ding S, Liu M, Zou T. Comparative study on vertical effect between miniscrew and face-bow in orthodontic treatment of hyperdivergent class II protrusion. J Oral Sci Res. 2019;35(4):351–4.


Khlef HN, Hajeer MY, Ajaj MA, Heshmeh O. Evaluation of treatment outcomes of en-masse retraction with temporary skeletal anchorage devices in comparison with two-step retraction with conventional anchorage in patients with dentoalveolar protrusion: a systematic review and meta-analysis. Contemp Clin Dent. 2018;9(4):513–23.

Kokodynski RA, Marshall SD, Ayer W, Weintraub NH, Hoffman DL. Profile changes associated with maxillary incisor retraction in the postadolescent orthodontic patient. Int J Adult Orthodon Orthognath Surg. 1997;12(2):129–34.

Kuroda S, Yamada K, Deguchi T, Kyung HM, Takano-Yamamoto T. Class II malocclusion treated with miniscrew anchorage: comparison with traditional orthodontic mechanics outcomes. Am J Orthod Dentofac Orthop. 2009;135(3):302–9.

Wang XD, Zhang JN, Liu DW, Lei FF, Liu WT, Song Y, et al. Nonsurgical correction using miniscrew-assisted vertical control of a severe high angle with mandibular retrusion and gummy smile in an adult. Am J Orthod Dentofac Orthop. 2017;151(5):978–88.

Deng JR, Li YA, Wang XD, Li J, Ding Y, Zhou YH. Evaluation of long-term stability of vertical control in hyperdivergent patients treated with temporary anchorage devices. Curr Med Sci. 2018;38(5):914–9.

Wang XD, Zhang JN, Liu DW, Lei FF, Zhou YH. Nonsurgical correction of a severe anterior deep overbite accompanied by a gummy smile and posterior scissor bite using a miniscrew-assisted straight-wire technique in an adult high-angle case. Korean J Orthod. 2016;46(4):253–65.

Wang XD, Lei FF, Liu DW, Zhang JN, Liu WT, Song Y, et al. Miniscrew-assisted customized lingual appliances for predictable treatment of skeletal class II malocclusion with severe deep overbite and overjet. Am J Orthod Dentofac Orthop. 2017;152(1):104–15.

Wang Y, Zhou Y, Zhang J, Wang X. Long-term stability of counterclockwise mandibular rotation by miniscrew-assisted maxillary intrusion in adult patients with skeletal class II high-angle malocclusion: a 10-year follow-up of 2 patients. AJO-DO Clin Companion. 2022;2(6):601–17.

Haralabakis NB, Sifakakis IB. The effect of cervical headgear on patients with high or low mandibular plane angles and the myth of posterior mandibular rotation. Am J Orthod Dentofac Orthop. 2004;126(3):310–7.

Al-Sibaie S, Hajeer MY. Assessment of changes following en-masse retraction with mini-implants anchorage compared to two-step retraction with conventional anchorage in patients with class II division 1 malocclusion: a randomized controlled trial. Eur J Orthod. 2014;36(3):275–83.

Deguchi T, Kurosaka H, Oikawa H, Kuroda S, Takahashi I, Yamashiro T, et al. Comparison of orthodontic treatment outcomes in adults with skeletal open bite between conventional edgewise treatment and implant-anchored orthodontics. Am J Orthod Dentofac Orthop. 2011;139(4 Suppl):S60–8.

Maetevorakul S, Viteporn S. Factors influencing soft tissue profile changes following orthodontic treatment in patients with Class II Division 1 malocclusion. Prog Orthod. 2016;17:13.

Zhao Z. The study on sensitivity of aesthetic index from six kinds of profile soft tissue analysis methods on female adults in Liaoning Province. Dalian, China: Dalian Medical University; 2016.

Li X, Zhao Q, Zhao R, Gao M, Gao X, Lai W. Effect of occlusal plane control procedure on hyoid bone position and pharyngeal airway of hyperdivergent skeletal class II patients. Angle Orthod. 2017;87(2):293–9.

Papadopoulos MA, Papageorgiou SN, Zogakis IP. Clinical effectiveness of orthodontic miniscrew implants: a meta-analysis. J Dent Res. 2011;90(8):969–76.

Yamaguchi M, Inami T, Ito K, Kasai K, Tanimoto Y. Mini-implants in the anchorage armamentarium: new paradigms in the orthodontics. Int J Biomater. 2012;2012:394121.


Acknowledgements

We thank LetPub ( www.letpub.com ) and Scribendi ( www.scribendi.com ) for their linguistic assistance during the preparation of this manuscript.

This project is supported by the National Program for Multidisciplinary Cooperative Treatment on Major Diseases (No. PKUSSNMP-202013); the China Oral Health Foundation (No. A2021-021); the Beijing Municipal Science & Technology Commission (No. Z171100001017128); the National Natural Science Foundation of China (Nos. 81901053, 81900984, 82101043, and 8237030822); National High Level Hospital Clinical Research Funding (No. 2023-NHLHCRF-YXHZ-TJMS-05); the Elite Medical Professionals Project of China-Japan Friendship Hospital (No. ZRJY2023-QM05); and the Beijing Municipal Natural Science Foundation (No. 7242282).

Author information

Yan-Ning Guo and Sheng-Jie Cui contributed equally to this work.

Authors and Affiliations

Department of Orthodontics, National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, Peking University School and Hospital of Stomatology, No.22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, China

Yan-Ning Guo, Sheng-Jie Cui, Jie-Ni Zhang, Yan-Heng Zhou & Xue-Dong Wang

Dental Medical Center, China-Japan Friendship Hospital, Beijing, 100029, China

Yan-Ning Guo

Department of Orthodontics, the School of Stomatology, The Key Laboratory of Stomatology, Hebei Medical University, Shijiazhuang, 050017, China

Fourth Division Department, National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, Peking University School and Hospital of Stomatology, Beijing, 100081, China


Contributions

Y-N G and S-J C: Investigation, Visualization, Writing - Original draft preparation, contributed equally to this work and joint first authors. Y L: Methodology, Investigation, Data Curation, and critically revised the manuscript. Y F: Investigation, Data Curation, and critically revised the manuscript. J-N Z, Y-H Z: Supervision and critically revised the manuscript. X-D W: Conceptualization, Validation, Writing- Reviewing and Editing, Corresponding author. All authors reviewed the manuscript.

Corresponding author

Correspondence to Xue-Dong Wang .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Biomedical Ethics Committee of Peking University School and Hospital of Stomatology (PKUSSIRB-201630096; retrospectively registered). All methods were carried out in accordance with relevant guidelines and regulations. Informed consent was obtained from all subjects and/or their legal guardian(s).

Consent for publication

Not Applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yan-Ning Guo and Sheng-Jie Cui are joint first authors.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Guo, YN., Cui, SJ., Liu, Y. et al. Quantitative evaluation of vertical control in orthodontic camouflage treatment for skeletal class II with hyperdivergent facial type. Head Face Med 20 , 31 (2024). https://doi.org/10.1186/s13005-024-00432-2


Received : 07 November 2023

Accepted : 29 April 2024

Published : 14 May 2024

DOI : https://doi.org/10.1186/s13005-024-00432-2


Keywords: Skeletal class II · Profile improvement · Vertical control

Head & Face Medicine

ISSN: 1746-160X


    Background The need for better methods for evaluation in health research has been widely recognised. The 'complexity turn' has drawn attention to the limitations of relying on causal inference from randomised controlled trials alone for understanding whether, and under which conditions, interventions in complex systems improve health services or the public health, and what mechanisms might ...

  11. Case Study Methodology of Qualitative Research: Key Attributes and

    A case study is one of the most commonly used methodologies of social research. This article attempts to look into the various dimensions of a case study research strategy, the different epistemological strands which determine the particular case study type and approach adopted in the field, discusses the factors which can enhance the effectiveness of a case study research, and the debate ...

  12. 15.7 Evaluation: Presentation and Analysis of Case Study

    Evaluate the effectiveness and quality of a case study report. Case studies follow a structure of background and context, methods, findings, and analysis. Body paragraphs should have main points and concrete details. In addition, case studies are written in formal language with precise wording and with a specific purpose and audience (generally ...

  13. Appraising psychotherapy case studies in practice-based evidence

    Systematic case studies are often placed at the low end of evidence-based practice (EBP) due to lack of critical appraisal. This paper seeks to attend to this research gap by introducing a novel Case Study Evaluation-tool (CaSE). First, issues around knowledge generation and validity are assessed in both EBP and practice-based evidence (PBE) paradigms. Although systematic case studies are more ...

  14. Guide to case studies

    A case study is an in depth focussed study of a person, group, or situation that has been studied over time within its real-life context. There are different types of case study: Illustrative case studies describe an unfamiliar situation in order to help people understand it. Critical instance case studies focus on a unique case, without a ...

  15. Case Study Evaluation Approach

    A case study evaluation approach is a great way to gain an in-depth understanding of a particular issue or situation. This type of approach allows the researcher to observe, analyze, and assess the effects of a particular situation on individuals or groups. An individual, a location, or a project may serve as the focal point of a case study's ...

  16. Case study evaluations

    Resource link. Case study evaluations - World Bank. This guide, written by Linda G. Morra and Amy C. Friedlander for the World Bank, provides guidance and advice on the use of case studies. The paper attempts to clarify what is and is not a case study, what is case study methodology, how they can be used, and how they should be written up for ...

  17. Is it a case study?—A critical analysis and guidance☆

    The term "case study" is not used consistently when describing studies and, most importantly, is not used according to the established definitions. Given the misuse of the term "case study", we critically analyse articles that cite case study guidelines and report case studies. We find that only about 50% of the studies labelled "case ...

  18. PDF Case Study Evaluations

    Case studies are appropriate for determining the effects of programs or projects and reasons for success or failure. OED does most impact evaluation case studies for this purpose. The method is often used in combination with others, such as sample surveys, and there is a mix of qualitative and quantitative data.

  19. Developing a survey to measure nursing students' knowledge, attitudes

    The final survey consists of 45 items including 4 case studies. Systematic evaluation of knowledge-to-date coupled with stakeholder perspectives supports robust survey design. ... Knowing what nursing students think about MAiD is a critical first step. Therefore, the purpose of this study was to develop a survey to measure nursing students ...

  20. Securing Critical Infrastructure with Blockchain Technology: An ...

    Currently, in the digital era, critical infrastructure is increasingly exposed to cyber threats to their operation and security. This study explores the use of blockchain technology to address these challenges, highlighting its immutability, decentralization, and transparency as keys to strengthening the resilience of these vital structures. Through a methodology encompassing literature review ...

  21. Guidance for the design of qualitative case study evaluation

    This guide, written by Professor Frank Vanclay of the Department of Cultural Geography, University of Groningen, provides notes on planning and implementing qualitative case study research.It outlines the use of a variety of different evaluation options that can be used in outcomes assessment and provides examples of the use of story based approaches with a discussion focused on their ...

  22. Realistic Evaluation and Case Studies: Stretching the Potential

    Embedded in the critical realist tradition, this article aims to explore the potentialities of the case study for evaluation purposes when complexity and specificity are moderate.Three objectives are pursued. First, it is stated that the focus of the evaluation effort is not necessarily the programme itself, but can be one premise upon which ...

  23. Evaluation of online job portals for HR recruitment ...

    It is critical to preserve and improve economic competitiveness by properly selecting and developing these resources. ... To study the role of performance evaluation of the best online job portal for recruitment at two wheeler company for the improvement of HR recruitment process; ... The case study of Two Wheeler Automobile Company, Chennai ...

  24. Soanes, K., Rytwinski, T., Fahrig, L., Huijser, M.P., Jaeger, J.A.G

    In an analysis of 313 studies, only 14% evaluated whether wildlife crossing structures resulted in a change in animal movement across roads. We identified critical problems in existing studies, especially the lack of benchmarks (e.g., pre-road, pre-mitigation or control data) and the use of biased comparisons.4.

  25. Quantitative evaluation of vertical control in orthodontic camouflage

    Background In this study, we sought to quantify the influence of vertical control assisted by a temporary anchorage device (TAD) on orthodontic treatment efficacy for skeletal class II patients with a hyperdivergent facial type and probe into the critical factors of profile improvement. Methods A total of 36 adult patients with skeletal class II and a hyperdivergent facial type were included ...

  26. PDF Case Study Evaluations GAO/PEMD-91-10.1

    A good case study identifies the Page 116 GAO/PEMD-91-10.1.9 Case Study Evaluations. Appendix III Guidelines for Reviewing Case Study Reports. elements of the issue that was examined and presents the initial arguments in favor of the various resolutions and the findings of the study that support these resolutions. 3.

  27. Performance-Based Wind Design of Tall Buildings Considering Corner

    In this paper, inelastic wind design and aerodynamic modification, in the forms of corner chamfering and recession, were applied to three case study buildings as two practical remedies for reduction of wind loads. Case 1 has sharp corners, Case 2 has 10% chamfered corners, and Case 3 has 10% recessed corners.