
Comparative case study research

  • Lesley Bartlett, University of Wisconsin–Madison
  • Frances Vavrus, University of Minnesota
  • https://doi.org/10.1093/acrefore/9780190264093.013.343
  • Published online: 26 March 2019

Case studies in the field of education often eschew comparison. However, when scholars forego comparison, they are missing an important opportunity to bolster case studies’ theoretical generalizability. Scholars must examine how disparate epistemologies lead to distinct kinds of qualitative research and different notions of comparison. Expanded notions of comparison include not only the usual logic of contrast or juxtaposition but also a logic of tracing, in order to embrace approaches to comparison that are coherent with critical, constructivist, and interpretive qualitative traditions. Finally, comparative case study researchers consider three axes of comparison: the vertical, which pays attention across levels or scales, from the local through the regional, state, federal, and global; the horizontal, which examines how similar phenomena or policies unfold in distinct locations that are socially produced; and the transversal, which compares over time.

  • comparative case studies
  • case study research
  • comparative case study approach
  • epistemology


Rethinking case study research: A comparative approach

  • Organizational Leadership, Policy and Development

Research output: Book/Report › Book

Comparative case studies are an effective qualitative tool for researching the impact of policy and practice in various fields of social research, including education. Developed in response to the inadequacy of traditional case study approaches, comparative case studies are highly effective because of their ability to synthesize information across time and space. In Rethinking Case Study Research: A Comparative Approach, the authors describe, explain, and illustrate the horizontal, vertical, and transversal axes of comparative case studies in order to help readers develop their own comparative case study research designs. In six concise chapters, two experts employ geographically distinct case studies, from Tanzania to Guatemala to the U.S., to show how this innovative approach applies to the operation of policy and practice across multiple social fields. With examples and activities from anthropology, development studies, and policy studies, this volume is written for researchers, especially graduate students, in the fields of education and the interpretive social sciences.

Bartlett, L., & Vavrus, F. (2016). Rethinking case study research: A comparative approach. Taylor & Francis. ISBN 9781138939516. https://doi.org/10.4324/9781315674889


Qualitative comparative analysis

Qualitative Comparative Analysis (QCA) is a means of analysing the causal contribution of different conditions (e.g. aspects of an intervention and the wider context) to an outcome of interest.

QCA starts with the documentation of the different configurations of conditions associated with each case of an observed outcome. These are then subject to a minimisation procedure that identifies the simplest set of conditions that can account for all the observed outcomes, as well as their absence.

The results are typically expressed as statements in ordinary language or in Boolean algebra. For example:

  • A combination of condition A and condition B, or a combination of condition C and condition D, will lead to outcome E.
  • In Boolean notation this is expressed more succinctly as A*B + C*D → E (illustrated in the sketch after this list).
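
As a minimal illustration of this notation, the sketch below evaluates the hypothetical solution A*B + C*D → E for a single crisp-coded case; the condition names and case values are invented for the example.

    # Minimal sketch: evaluating the hypothetical Boolean solution A*B + C*D -> E
    # for one case, with each condition coded 1 (present) or 0 (absent).

    def satisfies_solution(case):
        """Return True if the case meets A*B + C*D."""
        return bool((case["A"] and case["B"]) or (case["C"] and case["D"]))

    case = {"A": 1, "B": 1, "C": 0, "D": 0}   # hypothetical case
    print(satisfies_solution(case))           # True -> outcome E is expected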

QCA results are able to distinguish various complex forms of causation, including:

  • Configurations of causal conditions, not just single causes. In the example above, there are two different causal configurations, each made up of two conditions.
  • Equifinality, where there is more than one way in which an outcome can happen. In the above example, each configuration represents a different causal pathway.
  • Causal conditions which are necessary, sufficient, both or neither, plus more complex combinations (known as INUS causes – insufficient but necessary parts of a configuration that is unnecessary but sufficient), which tend to be more common in everyday life. In the example above, no single condition is sufficient or necessary, but each condition is an INUS-type cause.
  • Asymmetric causes – where the causes of failure may not simply be the absence of the cause of success. In the example above, the configuration associated with the absence of E might have been A*B*X + C*D*X → e. Here, condition X was a sufficient and necessary blocking condition.
  • The relative influence of different individual conditions and causal configurations in a set of cases being examined. In the example above, the first configuration may have been associated with 10 cases where the outcome was E, whereas the second might have been associated with only 5 cases. Configurations can be evaluated in terms of coverage (the percentage of cases they explain) and consistency (the extent to which a configuration is always associated with a given outcome).

QCA is able to use relatively small and simple data sets. There is no requirement to have enough cases to achieve statistical significance, although ideally there should be enough cases to potentially exhibit all the possible configurations. The latter depends on the number of conditions present. In a 2012 survey of QCA applications, the median number of cases was 22 and the median number of conditions was 6. For each case, the presence or absence of a condition is recorded using nominal data, i.e. a 1 or 0. More sophisticated forms of QCA allow the use of “fuzzy sets”, i.e. where a condition may be partly present or partly absent, represented by a value of 0.8 or 0.2, for example. Or there may be more than one kind of presence, represented by values of 0, 1, 2 or more, for example. Data for a QCA analysis are collated in a simple matrix form, where rows = cases and columns = conditions, with the rightmost column listing the associated outcome for each case, also described in binary form.
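
As an illustration of this layout, the sketch below builds a small hypothetical case-by-condition matrix with a binary outcome column; the condition names, case values, and the use of pandas are assumptions made for the example.

    import pandas as pd

    # Hypothetical QCA data matrix: rows = cases, columns = conditions,
    # rightmost column = observed outcome, all coded 1 (present) / 0 (absent).
    data = pd.DataFrame(
        [
            # A  B  C  D  outcome
            [1, 1, 0, 0, 1],
            [1, 1, 1, 0, 1],
            [0, 0, 1, 1, 1],
            [0, 1, 0, 1, 0],
            [0, 0, 0, 0, 0],
        ],
        columns=["A", "B", "C", "D", "outcome"],
    )
    print(data)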

QCA is a theory-driven approach, in that the choice of conditions being examined needs to be driven by a prior theory about what matters. The list of conditions may also be revised in the light of the results of the QCA analysis if some configurations are still shown as being associated with a mixture of outcomes. The coding of the presence/absence of a condition also requires an explicit view of that condition and when and where it can be considered present. Dichotomisation of quantitative measures about the incidence of a condition also needs to be carried out with an explicit rationale, and not on an arbitrary basis.

Although QCA was originally developed by Charles Ragin some decades ago, it is only in the last decade that its use has become more common amongst evaluators. Articles on its use have appeared in Evaluation and the American Journal of Evaluation.

For a worked example, see Charles Ragin’s “What is Qualitative Comparative Analysis (QCA)?”, slides 6 to 15, on the bare-bones basics of crisp-set QCA.

[A crude summary of the example is presented here]

In his presentation Ragin provides data on 65 countries and their reactions to austerity measures imposed by the IMF. This has been condensed into a Truth Table (shown below), which shows all possible configurations of four different conditions that were thought to affect countries’ responses: the presence or absence of severe austerity, prior mobilisation, corrupt government, rapid price rises. Next to each configuration is data on the outcome associated with that configuration – the numbers of countries experiencing mass protest or not. There are 16 configurations in all, one per row. The rightmost column describes the consistency of each configuration: whether all cases with that configuration have one type of outcome, or a mixed outcome (i.e. some protests and some no protests). Notice that there are also some configurations with no known cases.

[Truth table figure not reproduced here.]

Ragin’s next step is to improve the consistency of the configurations with mixed consistency. This is done either by rejecting cases within an inconsistent configuration because they are outliers (with exceptional circumstances unlikely to be repeated elsewhere) or by introducing an additional condition (column) that distinguishes between those configurations which did lead to protest and those which did not. In this example, a new condition was introduced that removed the inconsistency, which was described as  “not having a repressive regime”.

The next step involves reducing the number of configurations needed to explain all the outcomes, known as minimisation. Because this is a time-consuming process, it is done by an automated algorithm (i.e. a computer program). The algorithm takes two configurations at a time and checks whether they have the same outcome. If so, and if the configurations differ in respect to only one condition, that condition is deemed not to be an important causal factor and the two configurations are collapsed into one. This process of comparisons continues, looking at all configurations, including newly collapsed ones, until no further reductions are possible.
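
A rough sketch of this pairwise reduction step is given below, in the spirit of Quine-McCluskey minimisation; the configurations are hypothetical and only a single pass is shown (a full minimisation repeats the pass, including newly collapsed configurations, until nothing more can be merged).

    # Sketch of the pairwise minimisation step described above (hypothetical data).
    # Configurations are tuples of 1 / 0 / "-", where "-" marks a condition that
    # has been eliminated because it made no difference to the outcome.

    def differs_in_one(c1, c2):
        """Return the position where c1 and c2 differ, if they differ in exactly one place."""
        diffs = [i for i, (a, b) in enumerate(zip(c1, c2)) if a != b]
        return diffs[0] if len(diffs) == 1 else None

    def merge_once(configs):
        """One pass: collapse every pair of configurations that differ in a single condition."""
        merged, used = set(), set()
        for i, c1 in enumerate(configs):
            for j in range(i + 1, len(configs)):
                pos = differs_in_one(c1, configs[j])
                if pos is not None:
                    collapsed = list(c1)
                    collapsed[pos] = "-"          # drop the differing condition
                    merged.add(tuple(collapsed))
                    used.update({i, j})
        # keep configurations that could not be merged with anything
        merged.update(c for k, c in enumerate(configs) if k not in used)
        return sorted(merged, key=str)

    # Two hypothetical configurations that both lead to protest:
    print(merge_once([(1, 1, 0, 0), (1, 1, 1, 0)]))   # [(1, 1, '-', 0)]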

[Jumping a few more specific steps] The final result from the minimisation of the above truth table is this configuration:

SA*(PR + PM*GC*NR)

The expression indicates that IMF protest erupts when severe austerity (SA) is combined with either (1) rapid price increases (PR) or (2) the combination of prior mobilization (PM), government corruption (GC), and non-repressive regime (NR).

This slide show from Charles C. Ragin provides a detailed explanation, including examples, that answers the question 'What is QCA?'

This book, by Schneider and Wagemann, provides a comprehensive overview of the basic principles of set theory to model causality and of the applications of Qualitative Comparative Analysis (QCA), the most developed form of set-theoretic method.

This article by Nicolas Legewie provides an introduction to Qualitative Comparative Analysis (QCA). It discusses the method's main principles and advantages, including its key concepts.

COMPASSS (Comparative Methods for Systematic Cross-Case Analysis) is a website designed to develop the use of systematic comparative case analysis as a research strategy by bringing together scholars and practitioners who share its use.

This paper from Patrick A. Mello reviews current applications of Qualitative Comparative Analysis (QCA) in order to take stock of what is available and highlight best practice in this area.

Marshall, G. (1998). Qualitative comparative analysis. In A Dictionary of Sociology Retrieved from https://www.encyclopedia.com/social-sciences/dictionaries-thesauruses-pictures-and-press-releases/qualitative-comparative-analysis


Translational Behavioral Medicine, 4(2), June 2014

Using qualitative comparative analysis to understand and quantify translation and implementation

Heather Kane

RTI International, 3040 Cornwallis Road, Research Triangle Park, P.O. Box 12194, Durham, NC 27709 USA

Megan A. Lewis

Pamela A. Williams, Leila C. Kahwati

Understanding the factors that facilitate implementation of behavioral medicine programs into practice can advance translational science. Often, translation or implementation studies use case study methods with small sample sizes. Methodological approaches that systematize findings from these types of studies are needed to improve rigor and advance the field. Qualitative comparative analysis (QCA) is a method and analytical approach that can advance implementation science. QCA offers an approach for rigorously conducting translational and implementation research limited by a small number of cases. We describe the methodological and analytic approach for using QCA and provide examples of its use in the health and health services literature. QCA brings together qualitative or quantitative data derived from cases to identify necessary and sufficient conditions for an outcome. QCA offers advantages for researchers interested in analyzing complex programs and for practitioners interested in developing programs that achieve successful health outcomes.

INTRODUCTION

In this paper, we describe the methodological features and advantages of using qualitative comparative analysis (QCA). QCA is sometimes called a “mixed method.” It refers to both a specific research approach and an analytic technique that is distinct from and offers several advantages over traditional qualitative and quantitative methods [ 1 – 4 ]. It can be used to (1) analyze small to medium numbers of cases (e.g., 10 to 50) when traditional statistical methods are not possible, (2) examine complex combinations of explanatory factors associated with translation or implementation “success,” and (3) combine qualitative and quantitative data using a unified and systematic analytic approach.

This method may be especially pertinent for behavioral medicine given the growing interest in implementation science [ 5 ]. Translating behavioral medicine research and interventions into useful practice and policy requires an understanding of the implementation context. Understanding the context under which interventions work and how different ways of implementing an intervention lead to successful outcomes are required for “T3” (i.e., dissemination and implementation of evidence-based interventions) and “T4” translations (i.e., policy development to encourage evidence-based intervention use among various stakeholders) [ 6 , 7 ].

Case studies are a common way to assess different program implementation approaches and to examine complex systems (e.g., health care delivery systems, interventions in community settings) [ 8 ]. However, multiple case studies often have small, naturally limited samples or populations; small samples and populations lack adequate power to support conventional, statistical analyses. Case studies also may use mixed-method approaches, but typically when researchers collect quantitative and qualitative data in tandem, they rarely integrate both types of data systematically in the analysis. QCA offers solutions for the challenges posed by case studies and provides a useful analytic tool for translating research into policy recommendations. Using QCA methods could aid behavioral medicine researchers who seek to translate research from randomized controlled trials into practice settings to understand implementation. In this paper, we describe the conceptual basis of QCA, its application in the health and health services literature, and its features and limitations.

CONCEPTUAL BASIS OF QCA

QCA has its foundations in historical, comparative social science. Researchers in this field developed QCA because probabilistic methods failed to capture the complexity of social phenomena and required large sample sizes [ 1 ]. Recently, this method has made inroads into health research and evaluation [ 9 – 13 ] because of several useful features as follows: (1) it models equifinality , which is the ability to identify more than one causal pathway to an outcome (or absence of the outcome); (2) it identifies conjunctural causation , which means that single conditions may not display their effects on their own, but only in conjunction with other conditions; and (3) it implies asymmetrical relationships between causal conditions and outcomes, which means that causal pathways for achieving the outcome differ from causal pathways for failing to achieve the outcome.

QCA is a case-oriented approach that examines relationships between conditions (similar to explanatory variables in regression models) and an outcome using set theory, a branch of mathematics and of symbolic logic that deals with the nature and relations of sets. A set-theoretic approach to modeling causality differs from probabilistic methods, which examine the independent, additive influence of variables on an outcome. Regression models, based on underlying assumptions about sampling and distribution of the data, ask “what factor, holding all other factors constant at each factor’s average, will increase (or decrease) the likelihood of an outcome.” QCA, an approach based on the examination of set, subset, and superset relationships, asks “what conditions, alone or in combination with other conditions, are necessary or sufficient to produce an outcome.” For additional QCA definitions, see Ragin [ 4 ].

Necessary conditions are those that exhibit a superset relationship with the outcome set and are conditions or combinations of conditions that must be present for an outcome to occur. In assessing necessity, a researcher “identifies conditions shared by cases with the same outcome” [ 4 ] (p. 20). Figure  1 shows a hypothetical example. In this figure, condition X is a necessary condition for an effective intervention because all cases with condition X are also members of the set of cases with the outcome present; however, condition X is not sufficient for an effective intervention because it is possible to be a member of the set of cases with condition X, but not be a member of the outcome set [ 14 ].

Figure 1. Necessary and sufficient conditions and set-theoretic relationships

Sufficient conditions exhibit subset relationships with an outcome set and demonstrate that “the cause in question produces the outcome in question” [ 3 ] (p. 92). Figure  1 shows the multiple and different combinations of conditions that produce the hypothetical outcome, “effective intervention,” (1) by having condition A present, (2) by having condition D present, or (3) by having the combination of conditions B and C present. None of these conditions is necessary and any one of these conditions or combinations of conditions is sufficient for the outcome of an effective intervention.
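
In crisp-set terms, the relationships just described can be checked directly as superset/subset relations among sets of cases; the case identifiers below are hypothetical.

    # Sketch (hypothetical case IDs): with crisp sets, necessity and sufficiency
    # reduce to superset / subset checks on sets of cases.

    outcome_set = {"c1", "c2", "c3"}                 # cases with an effective intervention
    condition_X = {"c1", "c2", "c3", "c4"}           # cases where condition X is present
    condition_A = {"c1", "c2"}                       # cases where condition A is present

    print(condition_X >= outcome_set)   # True:  X is necessary (superset of the outcome set)
    print(condition_X <= outcome_set)   # False: X is not sufficient (c4 lacks the outcome)
    print(condition_A <= outcome_set)   # True:  A is sufficient (subset of the outcome set)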

QCA AS AN APPROACH AND AS AN ANALYTIC TECHNIQUE

The term “QCA” is sometimes used to refer to the comparative research approach but also refers to the “analytic moment” during which Boolean algebra and set theory logic are applied to truth tables constructed from data derived from the included cases. Figure 2 characterizes this distinction. Although this figure depicts the steps as sequential, these steps, like many research endeavors, are somewhat iterative, with respecification and reanalysis occurring along the way to final findings. We describe each of the essential steps of QCA as an approach and analytic technique and provide examples of how it has been used in health-related research.

Figure 2. QCA as an approach and as an analytic technique

Operationalizing the research question

Like other types of studies, the first step involves identifying the research question(s) and developing a conceptual model. This step guides the study as a whole and also informs case, condition (c.f., variable), and outcome selection. As mentioned above, QCA frames research questions differently than traditional quantitative or qualitative methods. Research questions appropriate for a QCA approach would seek to identify the necessary and sufficient conditions required to achieve the outcome. Thus, formulating a QCA research question emphasizes what program components or features—individually or in combination—need to be in place for a program or intervention to have a chance at being effective (i.e., necessary conditions) and what program components or features—individually or in combination—would produce the outcome (i.e., sufficient conditions). For example, a set theoretic hypothesis would be as follows: If a program is supported by strong organizational capacity and a comprehensive planning process, then the program will be successful. A hypothesis better addressed by probabilistic methods would be as follows: Organizational capacity, holding all other factors constant, increases the likelihood that a program will be successful.

For example, Longest and Thoits [ 15 ] drew on an extant stress process model to assess whether the pathways leading to psychological distress differed for women and men. Using QCA was appropriate for their study because the stress process model “suggests that particular patterns of predictors experienced in tandem may have unique relationships with health outcomes” (p. 4, italics added). They theorized that predictors would exhibit effects in combination because some aspects of the stress process model would buffer the risk of distress (e.g., social support) while others simultaneously would increase the risk (e.g., negative life events).

Identify cases

The number of cases in a QCA analysis may be determined by the population (e.g., 10 intervention sites, 30 grantees). When particular cases can be chosen from a larger population, Berg-Schlosser and De Meur [ 16 ] offer other strategies and best practices for choosing cases. Unless the number of cases relies on an existing population (i.e., 30 programs or grantees), the outcome of interest and existing theory drive case selection, unlike variable-oriented research [ 3 , 4 ] in which numbers are driven by statistical power considerations and depend on variation in the dependent variable. For use in causal inference, both cases that exhibit and do not exhibit the outcome should be included [ 16 ]. If a researcher is interested in developing typologies or concept formation, he or she may wish to examine similar cases that exhibit differences on the outcome or to explore cases that exhibit the same outcome [ 14 , 16 ].

For example, Kahwati et al. [ 9 ] examined the structure, policies, and processes that might lead to an effective clinical weight management program in a large national integrated health care system, as measured by mean weight loss among patients treated at the facility. To examine pathways that lead to both better and poorer facility-level weight loss, 11 facilities from among those with the largest weight loss outcomes and 11 facilities from among those with the smallest were included. By choosing cases based on specific outcomes, Kahwati et al. could identify multiple patterns of success (or failure) that explain the outcome rather than the variability associated with the outcome.

Identify conditions and outcome sets

Selecting conditions relies on the research question, conceptual model, and number of cases, similar to other research methods. Conditions (or “sets” or “condition sets”) refer to the explanatory factors in a model; they are similar to variables. Because QCA research questions assess necessary and sufficient conditions, a researcher should consider which conditions in the conceptual model would theoretically produce the outcome individually or in combination. This helps to focus the analysis and number of conditions. Ideally, for a case study design with a small (e.g., 10–15) or intermediate (e.g., 16–100) number of cases, one should aim for fewer than five conditions because in QCA a researcher assesses all possible configurations of conditions. Adding conditions to the model increases the possible number of combinations exponentially (i.e., 2^k, where k = the number of conditions). For three conditions, eight possible combinations of the selected conditions exist as follows: the presence of A, B, C together, the lack of A with B and C present, the lack of A and lack of B with C present, and so forth. Having too many conditions will likely mean that no cases fall into a particular configuration, and that configuration cannot be assessed by empirical examples. When one or more configurations are not represented by the cases, this is known as limited diversity, and QCA experts suggest multiple strategies for managing such situations [ 4 , 14 ].
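
The sketch below illustrates the 2^k explosion and the resulting limited diversity; the conditions and observed configurations are hypothetical.

    from itertools import product

    # With k conditions there are 2**k logically possible configurations.
    # Configurations with no empirical cases are "logical remainders".
    conditions = ["A", "B", "C"]                     # k = 3 -> 8 configurations
    observed = {(1, 1, 0), (1, 0, 0), (0, 0, 1)}     # hypothetical configurations seen in the cases

    for config in product([0, 1], repeat=len(conditions)):
        status = "observed" if config in observed else "logical remainder"
        print(dict(zip(conditions, config)), status)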

For example, Ford et al. [ 10 ] studied health departments’ implementation of core public health functions and organizational factors (e.g., resource availability, adaptability) and how those conditions lead to superior or inferior population health changes. They identified three core public health functions (i.e., assessment of environmental and population public health needs, capacity for policy development, and authority over assurance of healthcare operations) and operationalized them for their study by using composite measures of varied health indicators compiled in a UnitedHealth Group report. In this examination of 41 state health departments, the authors found that all three core public health functions were necessary for population health improvement. The absence of any of the core public health functions was sufficient for poorer population health outcomes; thus, only the health departments with the ability to perform all three core functions had improved outcomes. Additionally, these three core functions in combination with either resource availability or adaptability were sufficient combinations (i.e., causal pathways) for improved population health outcomes.

Calibrate condition and outcome sets

Calibration refers to “adjusting (measures) so that they match or conform to dependably known standards” and is a common way of standardizing data in the physical sciences [ 4 ] (p. 72). Calibration requires the researcher to make sense of variation in the data and apply expert knowledge about what aspects of the variation are meaningful. Because calibration depends on defining conditions based on those “dependably known standards,” QCA relies on expert substantive knowledge, theory, or criteria external to the data themselves [ 14 ]. This may require researchers to collaborate closely with program implementers.

In QCA, one can use “crisp” set or “fuzzy” set calibration. Crisp sets, which are similar to dichotomous categorical variables in regression, establish decision rules defining a case as fully in the set (i.e., condition) or fully out of the set; fuzzy sets establish degrees of membership in a set. Fuzzy sets “differentiate between different levels of belonging anchored by two extreme membership scores at 1 and 0” [ 14 ] (p. 28). They can be continuous (0, 0.1, 0.2, …) or have qualitatively defined anchor points (e.g., 0 is fully out of the set; 0.33 is more out than in the set; 0.66 is more in than out of the set; 1 is fully in the set). A researcher selects fuzzy sets and the corresponding resolution (i.e., continuous, four cutoff points, six cutoff points) based on theory and meaningful differences between cases and must be able to provide a verbal description for each cutoff point [ 14 ]. If, for example, a researcher cannot distinguish between 0.7 and 0.8 membership in a set, then a more continuous scoring of cases would not be useful; rather, a four-point cutoff may better characterize the data. Although crisp and fuzzy sets are more commonly used, new multivariate forms of QCA are emerging, as are variants that incorporate elements of time [ 14 , 17 , 18 ].

Fuzzy sets have the advantage of maintaining more detail for data with continuous values. However, this strength also makes interpretation more difficult. When an observation is coded with fuzzy sets, a particular observation has some degree of membership in the set “condition A” and in the set “condition NOT A.” Thus, when doing analyses to identify sufficient conditions, a researcher must make a judgment call about what benchmark constitutes a recommendation threshold for policy or programmatic action.

In creating decision rules for calibration, a researcher can use a variety of techniques to identify cutoff points or anchors. For qualitative conditions, a researcher can define decision rules by drawing from the literature and knowledge of the intervention context. For conditions with numeric values, a researcher can also employ statistical approaches. Ideally, when using statistical approaches, a researcher should establish thresholds using substantive knowledge about set membership (thus, translating variation into meaningful categories). Although measures of central tendency (e.g., cases with a value above the median are considered fully in the set) can be used to set cutoff points, some experts consider the sole use of this method to be flawed because case classification is determined by a case’s relative value in regard to other cases as opposed to its absolute value in reference to an external referent [ 14 ].

For example, in their study of the National Cancer Institute’s Community Clinical Oncology Program (NCI CCOP), Weiner et al. [ 19 ] had numeric data on their five study measures. They transformed their study measures by using their knowledge of the CCOP and by asking NCI officials to identify three values: full membership in a set, a point of maximum ambiguity, and nonmembership in the set. For their outcome set, high accrual in clinical trials, they established an accrual of 100 patients enrolled as fully in the set of high accrual, 70 as the point of maximum ambiguity (neither in nor out of the set), and 50 and below as fully out of the set because “CCOPs must maintain a minimum of 50 patients to maintain CCOP funding” (p. 288). By using QCA and operationalizing condition sets in this way, they were able to answer what condition sets produce high accrual, not what factors predict more accrual. The advantage is that by using this approach and analytic technique, they were able to identify sets of factors that are linked with a very specific outcome of interest.
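
The sketch below shows how such anchors might be turned into fuzzy membership scores. It assumes Ragin's "direct method" of calibration (a log-odds transform anchored at the three values), which the article describes only through its anchors, and it reuses the 100/70/50 thresholds from the Weiner et al. example.

    import math

    # Assumed direct-method calibration: raw values are mapped to membership scores
    # via log odds anchored at 3 (full membership), 0 (crossover) and -3 (full non-membership).
    FULL_IN, CROSSOVER, FULL_OUT = 100, 70, 50       # anchors from the Weiner et al. example

    def calibrate(x, full_in=FULL_IN, crossover=CROSSOVER, full_out=FULL_OUT):
        """Map a raw value to a fuzzy-set membership score in [0, 1]."""
        if x >= crossover:
            log_odds = 3.0 * (x - crossover) / (full_in - crossover)
        else:
            log_odds = 3.0 * (x - crossover) / (crossover - full_out)
        return 1.0 / (1.0 + math.exp(-log_odds))

    for accrual in (110, 100, 70, 50, 40):
        print(accrual, round(calibrate(accrual), 2))
    # 100 -> ~0.95 (fully in), 70 -> 0.5 (maximum ambiguity), 50 -> ~0.05 (fully out)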

Obtain primary or secondary data

Data sources vary based on the study, availability of the data, and feasibility of data collection; data can be qualitative or quantitative, a feature useful for mixed-methods studies, and systematically integrating these different types of data is a major strength of this approach. Qualitative data include program documents and descriptions, key informant interviews, and archival data (e.g., program documents, records, policies); quantitative data consist of surveys, surveillance or registry data, and electronic health records.

For instance, Schensul et al. [ 20 ] relied on in-depth interviews for their analysis; Chuang et al. [ 21 ] and Longest and Thoits [ 15 ] drew on survey data for theirs. Kahwati et al. [ 9 ] used a mixed-method approach combining data from key informant interviews, program documents, and electronic health records. Any type of data can be used to inform the calibration of conditions.

Assign set membership scores

Assigning set membership scores involves applying the decision rules that were established during the calibration phase. To accomplish this, the research team should then use the extracted data for each case, apply the decision rule for the condition, and discuss discrepancies in the data sources. In their study of factors that influence health care policy development in Florida, Harkreader and Imershein [ 22 ] coded contextual factors that supported state involvement in the health care market. Drawing on a review of archival data and using crisp set coding, they assigned a value of 1 for the presence of a contextual factor (e.g., presence of federal financial incentives promoting policy, unified health care provider policy position in opposition to state policy, state agency supporting policy position) and 0 for the absence of a contextual factor.

Construct truth table

After completing the coding, researchers create a “truth table” for analysis. A truth table lists all of the possible configurations of conditions, the number of cases that fall into that configuration, and the “consistency” of the cases. Consistency quantifies the extent to which cases that share similar conditions exhibit the same outcome; in crisp sets, the consistency value is the proportion of cases that exhibit the outcome. Fuzzy sets require a different calculation to establish consistency and are described at length in other sources [ 1 – 4 , 14 ]. Table  1 displays a hypothetical truth table for three conditions using crisp sets.

Table 1. Sample of a hypothetical truth table for crisp sets (1 = fully in the set, 0 = fully out of the set). [Table not reproduced here.]
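
As a minimal sketch of this step, the code below builds a crisp-set truth table from a hypothetical case-by-condition matrix; the condition names, case values, and the use of pandas are illustrative assumptions.

    import pandas as pd

    # Hypothetical crisp-set data: rows = cases, columns = conditions A-C plus outcome.
    cases = pd.DataFrame(
        [
            [1, 1, 0, 1],
            [1, 1, 0, 1],
            [1, 0, 1, 0],
            [0, 0, 1, 1],
            [0, 0, 1, 1],
            [0, 1, 0, 0],
        ],
        columns=["A", "B", "C", "outcome"],
    )

    # Truth table: one row per configuration, with the number of cases ("n") and the
    # consistency (proportion of cases in that configuration exhibiting the outcome).
    truth_table = (
        cases.groupby(["A", "B", "C"])["outcome"]
        .agg(n="count", consistency="mean")
        .reset_index()
    )
    print(truth_table)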

QCA AS AN ANALYTIC TECHNIQUE

The research steps to this point fall into QCA as an approach to understanding social and health phenomena. Analysis of the truth table is the sine qua non of QCA as an analytic technique. In this section, we provide an overview of the analysis process, but analytic techniques and emerging forms of analysis are described in multiple texts [ 3 , 4 , 14 , 17 ]. The use of computer software to conduct truth table analysis is recommended and several software options are available including Stata, fsQCA, Tosmana, and R.

A truth table analysis first involves the researcher assessing which (if any) conditions are individually necessary or sufficient for achieving the outcome, and then second, examining whether any configurations of conditions are necessary or sufficient. In instances where contradictions in outcomes from the same configuration pattern occur (i.e., one case from a configuration has the outcome; one does not), the researcher should also consider whether the model is properly specified and conditions are calibrated accurately. Thus, this stage of the analysis may reveal the need to review how conditions are defined and whether the definition should be recalibrated. Similar to qualitative and quantitative research approaches, analysis is iterative.

Additionally, the researcher examines the truth table to assess whether all logically possible configurations have empiric cases. As described above, when configurations lack cases, the problem of limited diversity occurs. Configurations without representative cases are known as logical remainders, and the researcher must consider how to deal with those. The analysis of logical remainders depends on the particular theory guiding the research and the research priorities. How a researcher manages the logical remainders has implications for the final solution, but none of the solutions based on the truth table will contradict the empirical evidence [ 14 ]. To generate the most conservative solution term, a researcher makes no assumptions about truth table rows with no cases (or very few cases in larger N studies) and excludes them from the logical minimization process. Alternately, a researcher can choose to include (or exclude) rows with no cases from analysis, which would generate a solution that is a superset of the conservative solution. Choosing inclusion criteria for logical remainders also depends on theory and what may be empirically possible. For example, in studying governments, it would be unlikely to have a case that is a democracy (“condition A”), but has a dictator (“condition B”). In that circumstance, the researcher may choose to exclude that theoretically implausible row from the logical minimization process.

Third, once all the solutions have been identified, the researcher mathematically reduces the solution [ 1 , 14 ]. For example, if the list of solutions contains two identical configurations, except that in one configuration A is absent and in the other A is present, then A can be dropped from those two solutions. Finally, the researcher computes two parameters of fit: coverage and consistency. Coverage determines the empirical relevance of a solution and quantifies the variation in causal pathways to an outcome [ 14 ]. The higher the coverage of a causal pathway, the more common the solution is and the more of the outcome it accounts for. However, maximum coverage may be less critical in implementation research because understanding all of the pathways to success may be as helpful as understanding the most common pathway. Consistency assesses whether the causal pathway produces the outcome regularly (“the degree to which the empirical data are in line with a postulated subset relation,” p. 324 [ 14 ]); a high consistency value (e.g., 1.00 or 100%) would indicate that all cases in a causal pathway produced the outcome. A low consistency value would suggest that a particular pathway was not successful in producing the outcome on a regular basis and thus, for translational purposes, should not be recommended for policy or practice changes. A causal pathway with high consistency and coverage values indicates a result useful for providing guidance; a high consistency with a lower coverage score also has value in showing a causal pathway that successfully produced the outcome, but did so less frequently.
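
For crisp sets, these two parameters of fit reduce to simple proportions over sets of cases; the sketch below uses hypothetical case identifiers.

    # Sketch (hypothetical case IDs): consistency and coverage of one causal pathway.
    pathway_cases = {"c1", "c2", "c3", "c4"}        # cases covered by the pathway
    outcome_cases = {"c1", "c2", "c3", "c5", "c6"}  # cases exhibiting the outcome

    overlap = pathway_cases & outcome_cases
    consistency = len(overlap) / len(pathway_cases)  # 0.75: pathway not always followed by the outcome
    coverage = len(overlap) / len(outcome_cases)     # 0.60: pathway accounts for 60% of outcome cases

    print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")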

For example, Kahwati et al. [ 9 ] examined their truth table and analyzed the data for single conditions and combinations of conditions that were necessary for higher or lower facility-level patient weight loss outcomes. The truth table analysis revealed two necessary conditions and four sufficient combinations of conditions. Because of significant challenges with logical remainders, they used a bottom-up approach to assess whether combinations of conditions yielded the outcome. This entailed pairing conditions to ensure parsimony and maximize coverage. With a smaller number of conditions, a researcher could hypothetically find that more cases share similar characteristics and could assess whether those cases exhibit the same outcome of interest.

At the completion of the truth table analysis, Kahwati et al. [ 9 ] used the qualitative data from site interviews to provide rich examples to illustrate the QCA solutions that were identified, which explained what the solutions meant in clinical practice for weight management. For example, having an involved champion (usually a physician), in combination with low facility accountability, was sufficient for program success (i.e., better weight loss outcomes) and was related to better facility weight loss. In reviewing the qualitative data, Kahwati et al. [ 9 ] discovered that involved champions integrate program activities into their clinical routines and discuss issues as they arise with other program staff. Because involved champions and other program staff communicated informally on a regular basis, formal accountability structures were less of a priority.

ADVANTAGES AND LIMITATIONS OF QCA

Because translational (and other health-related) researchers may be interested in which intervention features—alone or in combination—achieve distinct outcomes (e.g., achievement of program outcomes, reduction in health disparities), QCA is well suited for translational research. To assess combinations of variables in regression, a researcher relies on interaction effects, which, although useful, become difficult to interpret when three, four, or more variables are combined. Furthermore, in regression and other variable-oriented approaches, independent variables are held constant at the average across the study population to isolate the independent effect of that variable, but this masks how factors may interact with each other in ways that impact the ultimate outcomes. In translational research, context matters and QCA treats each case holistically, allowing each case to keep its own values for each condition.

Multiple case studies or studies with the organization as the unit of analysis often involve a small or intermediate number of cases. This hinders the use of standard statistical analyses; researchers are less likely to find statistical significance with small sample sizes. However, QCA draws on analyses of set relations to support small-N studies and to identify the conditions or combinations of conditions that are necessary or sufficient for an outcome of interest and may yield results when probabilistic methods cannot.

Finally, QCA is based on an asymmetric concept of causation, which means that the absence of a sufficient condition associated with an outcome does not necessarily describe the causal pathway to the absence of the outcome [ 14 ]. These characteristics can be helpful for translational researchers who are trying to study or implement complex interventions, where more than one way to implement a program might be effective and where studying both effective and ineffective implementation practices can yield useful information.

QCA has several limitations that researchers should consider before choosing it as a potential methodological approach. With small- and intermediate-N studies, QCA must be theory-driven and circumscribed by priority questions. That is, a researcher ideally should not use a “kitchen sink” approach to test every conceivable condition or combination of conditions because the number of combinations increases exponentially with the addition of another condition. With a small number of cases and too many conditions, the sample would not have enough cases to provide examples of all the possible configurations of conditions (i.e., limited diversity), or the analysis would be constrained to describing the characteristics of the cases, which would have less value than determining whether some conditions or some combination of conditions led to actual program success. However, if the number of conditions cannot be reduced, alternate QCA techniques, such as a bottom-up approach to QCA or two-step QCA, can be used [ 14 ].

Another limitation is that the programs or clinical interventions involved in a cross-site analysis may have unique features that make them appear incomparable. Cases must share some degree of comparability to use QCA [ 16 ]. Researchers can manage this challenge by taking a broader view of the program(s) and comparing them on broader characteristics or concepts, such as high/low organizational capacity, established partnerships, and program planning, if these would provide meaningful conclusions. Taking this approach will require careful definition of each of these concepts within the context of a particular initiative. Definitions may also need to be revised as the data are gathered and calibration begins.

Finally, as mentioned above, crisp set calibration dichotomizes conditions of interest; this form of calibration means that in some cases, the finer grained differences and precision in a condition may be lost [ 3 ]. Crisp set calibration provides more easily interpretable and actionable results and is appropriate if researchers are primarily interested in the presence or absence of a particular program feature or organizational characteristic to understand translation or implementation.

QCA offers an additional methodological approach for researchers to conduct rigorous comparative analyses while drawing on the rich, detailed data collected as part of a case study. However, as Rihoux and Ragin [ 17 ] note, QCA is not a miracle method, nor a panacea for all studies that use case study methods. Furthermore, it may not always be the most suitable approach for certain types of translational and implementation research. We outlined the multiple steps needed to conduct a comprehensive QCA. QCA is a good approach for the examination of causal complexity, and equifinality could be helpful to behavioral medicine researchers who seek to translate evidence-based interventions in real-world settings. In reality, multiple program models can lead to success, and this method accommodates a more complex and varied understanding of these patterns and factors.

Implications

Practice: Identifying multiple successful intervention models (equifinality) can aid in selecting a practice model relevant to a context, and can facilitate implementation.

Policy: QCA can be used to develop actionable policy information for decision makers that accommodates contextual factors.

Research: Researchers can use QCA to understand causal complexity in translational or implementation research and to assess the relationships between policies, interventions, or procedures and successful outcomes.

The “qualitative” in qualitative comparative analysis (QCA): research moves, case-intimacy and face-to-face interviews

  • Open access
  • Published: 26 March 2022
  • Volume 57, pages 489–507 (2023)


  • Sofia Pagliarin, ORCID: orcid.org/0000-0003-4846-6072
  • Salvatore La Mendola
  • Barbara Vis


Qualitative Comparative Analysis (QCA) includes two main components: QCA “as a research approach” and QCA “as a method”. In this study, we focus on the former and, by means of the “interpretive spiral”, we critically look at the research process of QCA. We show how QCA as a research approach is composed of (1) an “analytical move”, where cases, conditions and outcome(s) are conceptualised in terms of sets, and (2) a “membership move”, where set membership values are qualitatively assigned by the researcher (i.e. calibration). Moreover, we show that QCA scholars have not sufficiently acknowledged the data generation process as a constituent research phase (or “move”) for the performance of QCA. This is particularly relevant when qualitative data–e.g. interviews, focus groups, documents–are used for subsequent analysis and calibration (i.e. analytical and membership moves). We call the qualitative data collection process “relational move” because, for data gathering, researchers establish the social relation “interview” with the study participants. By using examples from our own research, we show how a dialogical interviewing style can help researchers gain the in-depth knowledge necessary to meaningfully represent qualitative data into set membership values for QCA, hence improving our ability to account for the “qualitative” in QCA.


1 Introduction

Qualitative Comparative Analysis (QCA) is a configurational comparative research approach and method for the social sciences based on set-theory. It was introduced in crisp-set form by Ragin ( 1987 ) and later expanded to fuzzy sets (Ragin 2000 ; 2008a ; Rihoux and Ragin 2009 ; Schneider and Wagemann 2012 ). QCA is a diversity-oriented approach extending “the single-case study to multiple cases with an eye toward configurations of similarities and differences” (Ragin 2000 :22). QCA aims at finding a balance between complexity and generalizability by identifying data patterns that can exhibit or approach set-theoretic connections (Ragin 2014 :88).

As a research approach, QCA researchers first conceptualise cases as elements belonging, in kind and/or degree, to a selection of conditions and outcome(s) that are conceived as sets. They then assign cases’ set membership values to conditions and outcome(s) (i.e. calibration). Populations are constructed for outcome-oriented investigations and causation is conceived to be conjunctural and heterogeneous (Ragin 2000 : 39ff). As a method, QCA is the systematic and formalised analysis of the calibrated dataset for cross-case comparison through Boolean algebra operations. Combinations of conditions (i.e. configurations) represent both the characterising features of cases and also the multiple paths towards the outcome (Byrne 2005 ).

Most of the critiques of QCA focus on the methodological aspects of “QCA as a method” (e.g. Lucas and Szatrowski 2014 ), although epistemological issues regarding deterministic causality and subjectivity in assigning set membership values are also discussed (e.g. Collier 2014 ). In response to these critiques, Ragin ( 2014 ; see also Ragin 2000 , ch. 11) emphasises the “mindset shift” needed to perform QCA: QCA “as a method” makes sense only if researchers admit “QCA as a research approach”, including its qualitative component.

The qualitative character of QCA emerges when recognising the relevance of case-based knowledge or “case intimacy”. The latter is key to perform calibration (see e.g. Ragin 2000 :53–61; Byrne 2005 ; Ragin 2008a ; Harvey 2009 ; Greckhamer et al. 2013 ; Gerrits and Verweij 2018 :36ff): when associating “meanings” to “numbers”, researchers engage in a “dialogue between ideas and evidence” by using set-membership values as “ interpretive tools ” (Ragin 2000 : 162, original emphasis). The foundations of QCA as a research approach are explicitly rooted in qualitative, case-oriented research approaches in the social sciences, in particular in the understanding of causation as multiple and configurational, in terms of combinations of conditions, and in the conceptualisation of populations as types of cases, which should be refined in the course of an investigation (Ragin 2000 : 30–42).

Arguably, QCA researchers should make ample use of qualitative methods for the social sciences, such as narrative or semi-structured interviews, focus groups, discourse and document analysis, because this will help gain case intimacy and enable the dialogue between theories and data. Furthermore, as many QCA-studies have a small to medium sample size (10–50 cases), qualitative data collection methods appear to be particularly appropriate to reach both goals. However, so far only around 30 published QCA studies use qualitative data (de Block and Vis 2018 ), out of which only a handful employ narrative interviews (see Sect.  2 ).

We argue that this puzzling observation about QCA empirical research is due to two main reasons. First, quantitative data, in particular secondary data available from official databases, are more malleable for calibration. Although QCA researchers should carefully distinguish between measurement and calibration (see e.g. Ragin, 2008a , b ; Schneider and Wagemann 2012 , Sect. 1.2), quantitative data are more convenient for establishing the three main qualitative anchors (i.e. the cross-over point as maximum ambiguity; the lower and upper thresholds for full set membership exclusion or inclusion). Quantitative data facilitate QCA researchers in performing QCA both as a research approach and method. QCA scholars are somewhat aware of this when discussing “the two QCAs” (large-n/quantitative data and small-n/more frequent use of qualitative data; Greckhamer et al. 2013 ; see also Thomann and Maggetti 2017 ).

Second, the use of qualitative data for performing QCA requires an additional effort from the part of the researcher, because data collected through, for instance, narrative interviews, focus groups and document analysis come in verbal form. Therefore, QCA researchers using qualitative methods for empirical research have to first collect data and only then move to their analysis and conceptualisation as sets (analytical move) and their calibration into “numbers” (membership move) for their subsequent handling through QCA procedures (QCA as a method).

Because of these two main reasons, we claim that data generation (or data construction) should also be recognised and integrated in the QCA research process. Fully accounting for QCA as a “qualitative” research approach necessarily entails questions about the data generation process, especially when qualitative research methods are used and the data come in verbal, not numerical, form.

This study’s contributions are twofold. First, we present the “interpretative spiral” (see Fig. 1) or “cycle” (Sandelowski et al. 2009) in which data gradually transit through changes of state: from meanings to concepts to numerical values. In limiting our discussion to QCA as a research approach, we identify three main moves composing the interpretative spiral: (1) the relational (data generation through qualitative methods), (2) the analytical (set conceptualisation) and (3) the membership (calibration) moves. Second, we show how in-depth knowledge for subsequent set conceptualisation and calibration can be generated more effectively if the researcher is open, during data collection, to supporting the interviewee’s narration and to establishing a dialogue, a relation, with him or her (i.e. the relational move). It is the researcher’s openness that can facilitate the development of case intimacy for set conceptualisation and assessment (analytical and membership moves). We hence introduce a “dialogical” interviewing style (La Mendola 2009) to show how this approach can be useful for QCA researchers. Although we mainly discuss narrative interviews, a dialogical interviewing style can also be adapted to face-to-face semi-structured interviews or questionnaires.

Fig. 1 The interpretative spiral and the relational, analytical and membership moves

Our main aim is to make QCA researchers more aware of “minding their moves” in the interpretative spiral. Additionally, we show how a “dialogical” interviewing style can facilitate access to the in-depth knowledge of cases needed for calibration. Researchers who already use narrative interviews but have not yet performed QCA can gain insight into how qualitative data, and narrative interviews in particular, can be employed in QCA, and potentially see the advantages of doing so (see Gerrits and Verweij 2018: 36ff).

In Sect. 2 we present the interpretative spiral (Fig. 1) and the interconnections between the three moves, and we discuss the limited use of qualitative data in QCA research. In Sect. 3, we examine the use of qualitative data for performing QCA by discussing the relational move and a dialogical interviewing style. In Sect. 4, we examine the analytical and membership moves and discuss how QCA researchers have so far dealt with them when using qualitative data. In Sect. 5, we conclude by putting forward some final remarks.

2 The interpretative spiral and the three moves

Sandelowski et al. (2009) state that the conversion of qualitative data into quantitative data (“quantitizing”) necessarily involves “qualitizing”, because researchers perform a “continuous cycling between assigning numbers to meaning and meaning to numbers” (p. 213). “Data” are recognised as “the product of a move on the part of researchers” (p. 209, emphasis added) because information has to be conceptualised, understood and interpreted to become “data”. In Fig. 1, we tailor this “cycling” to the performance of QCA by means of the interpretative spiral.

Through the interpretative spiral, we show both how knowledge for QCA is transformed into data by means of “moves” and how the gathering of qualitative data constitutes a move of its own. Our choice of the term “move” is grounded in the need to communicate a sense of movement along the “cycling” between meanings and numbers. Furthermore, the term “move” resonates with the communicative steps that interviewer and interviewee engage in during an interview (see Sect. 3 below).

Although we present these moves as separate, they are in reality interfaces, because they are part of the same interpretative spiral. They can be thought of as moves in a dance: the dance emerges from the succession of moves and steps as a whole, as we show below.

The analytical and membership moves are intertwined, as shown by the central “vortex” of the spiral in Fig. 1, because they are composed of a number of interrelated steps, in particular case selection, theory-led set conceptualisation, and the definition of the most appropriate set membership scales and of the cross-over and upper and lower thresholds (e.g. crisp-set, 4- or 6-value fuzzy-set scales; see Ragin 2000: 166–171; Rihoux and Ragin 2009). Calibration is the last step of the dialogue between theory (the concepts of the analytical move) and data (cases). In the membership move, fuzzy sets are used as “an interpretative algebra, a language that is half-verbal-conceptual and half-mathematical-analytical” (Ragin 2000: 4). Calibration is hence a type of “quantitizing” and “qualitizing” (Sandelowski et al. 2009). In applied QCA, set membership values can be reconceptualised and recalibrated, for instance to solve true logical contradictions in the truth table or when QCA results are interpreted by “going back to cases”, hence overlapping with the practices related to QCA “as a method”.
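Since the paragraph above mentions true logical contradictions in the truth table, a simplified crisp-set sketch may help show where they appear: cases are assigned to truth-table rows according to the conditions they are members of (here by dichotomising fuzzy scores at the 0.5 cross-over), and a row is contradictory when its cases disagree on the outcome. All values are hypothetical; fuzzy-set QCA handles this through consistency measures rather than the simple check shown here.

```python
from collections import defaultdict

# Hypothetical cases: (condition A, condition B, outcome) as fuzzy membership scores.
cases = {
    "case_1": (0.67, 0.33, 1.00),
    "case_2": (0.67, 0.33, 0.00),   # same configuration as case_1, opposite outcome
    "case_3": (0.33, 1.00, 1.00),
}

rows = defaultdict(list)
for case, (a, b, outcome) in cases.items():
    config = (int(a > 0.5), int(b > 0.5))   # the truth-table row the case belongs to
    rows[config].append((case, int(outcome > 0.5)))

for config, members in rows.items():
    outcomes = {out for _, out in members}
    status = "contradictory" if len(outcomes) > 1 else "consistent"
    print(config, members, status)
```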

The relational move displayed in Fig. 1 expresses the additional interpretative process that researchers engage in when collecting and analysing qualitative data. De Block and Vis (2018) show that only around 30 published QCA studies combine qualitative data with QCA, including a range of additional data such as observations, site visits and newspaper articles.

However, a closer look reveals that the majority of the published QCA-studies using qualitative data employ (semi)structured interviews or questionnaires. Footnote 1 For instance, Basurto and Speer ( 2012 ) Footnote 2 proposed a step-wise calibration process based on a frequency-oriented strategy (e.g. number of meetings, amount of available information) to calibrate the information collected through 99 semi-structured interviews. Fischer ( 2015 ) conducted 250 semi-structured interviews by cooperating with four trained researchers using pre-structured questions, where respondents could voluntarily add “qualitative pieces of information” in “an interview protocol” (p. 250). Henik ( 2015 ) structured and carried out 50 interviews on whistle-blowing episodes to ensure subsequent blind coding of a high number of items (almost 1000), arguably making them resemble face-to-face questionnaires.

In turn, only a few QCA researchers use data from narrative interviews. Footnote 3 For example, Metelits (2009) conducted narrative interviews during ethnographic fieldwork over the course of several years. Verweij and Gerrits (2015) carried out 18 “open” interviews, while Chai and Schoon (2016) conducted “in-depth” interviews. Wang (2016) conducted structured interviews through a questionnaire, following a similar approach to Fischer (2015); however, during the interviews, Wang’s respondents were asked to reflexively justify their chosen questionnaire responses, hence moving the structured interviews closer to narrative ones. Tóth et al. (2017) performed 28 semi-structured interviews with company managers to evaluate the quality and attractiveness of customer–provider relationships for maintaining future business relations. Their empirical strategy was, however, grounded in initial focus groups and other semi-structured interviews, composed of open questions in the first part and a questionnaire in the second part (Tóth et al. 2015).

Although no interview is completely structured or unstructured, it is useful to conceptualise (semi-)structured and less structured (or narrative) interviews as the two ends of a continuum (Brinkmann 2014). While still relatively rare compared to quantitative data, (semi-)structured interviews are integrated into QCA more often than narrative interviews, arguably because of the advantages this type of qualitative data holds for calibration. The “structured” portion of face-to-face semi-structured interviews or questionnaires facilitates the calibration of this type of qualitative data, because quantitative anchor points can be more clearly identified to assign set membership values (see e.g. Basurto and Speer 2012; Fischer 2015; Henik 2015).
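To illustrate why the “structured” portion of such interviews eases calibration, the sketch below maps a countable answer (a hypothetical “number of coordination meetings per year”) onto a four-value fuzzy scale through simple threshold rules. The thresholds and the condition are invented; they only loosely echo the frequency-oriented strategy of Basurto and Speer (2012) rather than reproducing it.

```python
# Hypothetical sketch: a countable answer from a structured interview item is
# mapped onto a four-value fuzzy scale via threshold rules chosen by the
# researcher. The thresholds below are invented for illustration.

def meetings_to_membership(meetings_per_year):
    """Membership in the (hypothetical) set of 'intensely coordinating' cases."""
    if meetings_per_year >= 12:
        return 1.00
    if meetings_per_year >= 6:
        return 0.67
    if meetings_per_year >= 2:
        return 0.33
    return 0.00

for answer in (0, 3, 8, 24):
    print(answer, meetings_to_membership(answer))
```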

Hence, when looking critically at the “qualitative” character of QCA as a research approach, applied research shows that qualitative methods fit uneasily with QCA. This is because data collection has not been recognised as an integral part of the QCA research process. In Sect. 3, we show how qualitative data, and in particular a dialogical interviewing style, can help researchers to develop case intimacy.

3 The relational move

Social data are not self-evident facts: they do not reveal anything in themselves, and researchers must engage in interpretative efforts concerning their meaning (Sandelowski et al. 2009; Silverman 2017). Differently stated, quantitizing and qualitizing characterise both quantitative and qualitative social data, albeit to different degrees (Sandelowski et al. 2009). This is an ontological understanding of reality that is variously held by post-positivist, critical realist, critical and constructivist approaches (though not by positivist scholars; see Guba and Lincoln 2005: 193ff). Our position is closest to critical realism which, in contrast to post-modernist perspectives (Spencer et al. 2014: 85ff), holds that reality exists “out there” and that, epistemologically, our knowledge of it, although imperfect, is possible, for instance through the scientific method (Sayer 1992).

The socially constructed, not self-evident character of social data is manifest in the collection and analysis of qualitative data. Access to the field, as well as trust and consent from participants, needs to be earned in order to gradually build and expand a network of participants. More than “collected”, data are “gathered”, because they require cooperation with participants. Data from interviews and observations are heterogeneous and need to be transcribed and analysed by researchers, who also self-reflectively experience the entire process of data collection. QCA researchers using qualitative data necessarily have to go through this additional research process, or move, to gather and generate data before QCA as a research approach can even start. As QCA researchers using qualitative data need to interact with participants to collect their data, we call this additional research process the “relational move”.

While we limit our discussion to narrative interviews and select a few references from a vast literature, our claim is that it is the ability of the interviewer to give life to interviews as a distinct type of social interaction that is key to the data collection process (Chase 2005; Leech 2002; La Mendola 2009; Brinkmann 2014). The ability of the interviewer to establish a dialogue with the interviewee, also in the case of (semi-)structured interviews, is crucial to gain access to case-based knowledge and thus develop the case intimacy later needed in the analytical and membership moves. The relational move is about a researcher’s ability to handle the intrinsic duality characterising that specific social interaction we define as an interview. Both (or more) partners have to be considered as necessary actors involved in giving shape to the “inter-view” as an ex-change of views.

Qualitative researchers call this ability “rapport” (Leech, 2002 :665), “contract” or “staging” (Legard et al., 2003 :139). In our specific understanding of the relational move through a “dialogical” Footnote 4 interviewing style, during the interview 1) the interviewer and the interviewee become the “listener” and the “narrator” (Chase, 2005 :660) and 2) a true dialogue between listener and narrator can only take place when they engage in an “I-thou” interaction (Buber 1923 /2008), as we will show below when we discuss selected examples from our own research.

As a communicative style, in a dialogical interview not only can the researcher not disappear behind the veil of objectivity (Spencer et al. 2014), but the researcher is also aware of the relational duality, or “dialogueness”, inherent to the “inter-view”. Dialogical face-to-face interviewing can be compared to a choreography (Brinkmann 2014: 283; Silverman 2017: 153) or a dance (La Mendola 2009, ch. 4 and 5) where one of the partners (the researcher) is the porteur (“supporter”) of the interaction. As in a dancing couple, the listener supports, but does not lead, the narrator in the unfolding of her story. The dialogical approach to interviewing is hence non-directive, but supportive. A key characteristic of dialogical interviews is a particular way of “being in the interview” (see example 2 below), because it requires the researcher to consider the interviewee as a true narrator (a “thou”). Footnote 5

In a dialogical approach to interviews, questions can be thought of as frames through which the listener invites the narrator to tell a story in her own terms (Chase 2005: 662). The narrator becomes the “subject of study” who can be disobedient and capable of raising her own questions (Latour 2000: 116; see also Lund 2014). This is also compatible with a critical realist ontology and epistemology, which holds that researchers inevitably draw artificial (but negotiable) boundaries around the object and subject of analysis (Gerrits and Verweij 2013). The case-based, or data-driven (ibid.), character of QCA as a research approach hence takes on a new meaning: in a dialogical interviewing style, although the interviewer/listener proposes a focus of analysis and a frame of meaning, the interviewee/narrator is given the freedom to re-negotiate that frame of meaning (La Mendola 2009; see examples 1 and 2 below).

We argue that this is an appropriate way to obtain case intimacy and in-depth knowledge for subsequent QCA, because it is the narrator who proposes meanings that will then be translated by the researcher, in the following moves, into set membership values.

Particularly key to a dialogical interviewing style is question formulation, where the interviewer privileges “how” questions (Becker 1998). In this way, “what” and “why” (evaluative) questions, which ask the interviewee to rationally explain with hindsight a process that supposedly developed in a linear way, are avoided. Typifying questions, through which the interviewer gathers general information (e.g. Can you tell me about the process through which an urban project is typically built? Can you tell me about your typical day as an academic?), are also avoided. Footnote 6 “Dialogical” questions can start with “I would like to propose that you tell me about…” and are akin to “grand tour questions” (Spradley 1979; Leech 2002) or questions posed “obliquely” (Roulston 2018), because they aim at collecting stories and episodes in a certain situation or context while allowing the interviewee to be relatively free in answering.

An example taken from our own research on a QCA of large-scale urban transformations in Western Europe illustrates the distinct approach characterising dialogical interviewing. One of our aims was to reconstruct the decision-making process concerning why and how a certain urban transformation took place (Pagliarin et al. 2019). QCA has previously been used to study urban development and spatial policies because it is sensitive to individual cases, while also accounting for cross-case patterns by means of causal complexity (configurations of conditions), equifinality and causal asymmetry (e.g. Byrne 2005; Verweij and Gerrits 2015; Gerrits and Verweij 2018). A conventional way to formulate this question would be: “In your opinion, why did this urban transformation occur at this specific time?” or “Which were the governance actors that decided its implementation?”. Instead, we formulated the question in a narrative and dialogical way:

Example 1 Listener [L]: Can you tell me how the site identification and materialization of Ørestad came about? Narrator [N]: Yes. I mean there’s always a long background for these projects. (…) it’s an urban area built on partly reclaimed land. It was, until the second world war, a seaport and then they reclaimed it during the second world war, a big area. (…) this is the island called Amager. In the western part here, you can see it differs completely from the rest and that’s because they placed a dam all around like this, so it’s below sea level. (…) [L]: When you say “they”, it’s…? [N]: The municipality of Copenhagen. Footnote 7 (…)

In this example, the posed question (“how… [it]… came about?”) is open and oriented toward collecting the narrator’s specific story about “how” the Ørestad project emerged (Becker 1998), starting at the time point and from the angle decided by the interviewee. Here, the interviewee decided to start just after the Second World War (although the focus of the research was on the period from the 1990s onwards) and described the area’s geographical characteristics as background for the subsequent decision-making processes. It is then up to the researcher to support the narrator in funnelling in the topics and themes of interest for the research. In the above example, the listener asked “When you say “they”, it’s…?” to signal to the narrator to be more specific about “they”, without however assuming to know the answer (“it’s…?”). In this way, the narrator is encouraged to expand on the role of the Copenhagen municipality without being directly asked about it (which nevertheless always remains an option for the interviewer).

The specific “dialogical” way of the researcher of “being in the interview” is rooted in the epistemological awareness of the discrepancy between the narrator’s representation and the listener’s. During an interview, there are a number of “representation loops”. As discussed in the interpretative spiral (see Sect.  2 ), the analytical and membership moves are characterised by a number of research steps; similarly, in the relational move the researcher engages in representation loops or interpretative steps when interacting with the interviewee. The researcher holds ( a ) an analytical representation of her focus of analysis, ( b ) which will be re-interpreted by the interviewee (Geertz, 1973 ). In a dialogical style of interview, the researcher also embraces ( c ) her representation of the ( b ) interviewee's interpretation of ( a ) her theory-led representation of the focus of analysis. Taken together, ( a )-( b )-( c ) are the structuring steps of a dialogical interview, where the listener’s and narrator’s representations “dance” with one another. In the relational move, the interviewer is aware of the steps from one representation to another.

In the following Example 2, the narrator re-elaborated (interpretative step b) the frame of meaning of the listener (interpretative step a) by emphasising two development stages of a certain project (an airport expansion in Barcelona, Spain) that the researcher had not previously thought of (interpretative step c):

Example 2 [L]: Could you tell me about how the project identification and realisation of the Barcelona airport come about? [N]: Of the Barcelona airport? Well. The Barcelona airport is I think a good thermometer of something deeper, which has been the inclusion of Barcelona and of its economy in the global economy. So, in the last 30 years El Prat airport has lived through like two impulses of development, because it lived, let´s say, the necessary adaptation to a specific event, that is the Olympic games. There it lived its first expansion, to what we today call Terminal 2. So, at the end of the ´80 and early ´90, El Prat airport experienced its first big jump. (...) Later, in 2009 (...) we did a more important expansion, because we did not expand the original terminal, but we did a new, bigger one, (...) the one we now call Terminal 1. Footnote 8

If the interviewee is considered as a “thou”, and if the researcher is aware of the representation loops (see above), the collected information can also be helpful for constructing the study population in QCA. The population under analysis is oftentimes not given in advance but gradually defined through the process of casing (Ragin 2000). This allows the researcher to be open to constructing the study population “with the help of others”, like “informants, people in the area, the interlocutors” (Lund 2014: 227). For instance, in example 2 above, the selection of which urban transformations will form the dataset can depend on the importance that interviewees attribute to the structuring impact of a certain transformation on the overall structure of an urban region.

In sum, the data collection process is a move of its own in the research process for performing QCA. Especially when the collected data are qualitative, the researcher engages in a relation with the interlocutor to gather information. A dialogical approach emphasises that the quality of the gathered data depends on the quality of the dialogue between narrator and listener (La Mendola 2009). When the listener is open to considering the interviewee as a “thou”, and when she is aware of the interpretative steps occurring in the interview, then meaningful case-based knowledge can be accessed.

Case intimacy is best developed when the researcher is open to integrating her focus of analysis with fieldwork information and when s/he invites, as in a dance, the narrator to tell his story. However, a dialogical interviewing style is not theory-free, but it is “theory-independent”: the dialogical interviewer supports the narration of the interviewee and does not lead the narrator by imposing her own conceptualisations. We argue that such a dialogical I-thou interaction during interviews fosters in-depth knowledge of cases, because the narrator is treated as a subject who can propose his own interpretation of the focus of analysis before the researcher frames it within her analytical and membership moves.

However, in practice, there is a tension between the researcher’s need to collect data and the “here-and-now interactional event of the interview” (Rapley 2001: 310). It is inevitable that the researcher re-elaborates, to a certain degree, her analytical framework during the interviews, because this enables her to get acquainted with the object of analysis and to keep the interview content on target with the research goals (Jopke and Gerrits 2019). But it is this re-interpretation of the interviewee’s replies and stories by the listener during the interviews that opens the interviewer’s awareness of the representation loops.

4 The analytical and membership moves

Researchers engage in face-to-face interviews as a strategy for data collection by holding specific analytical frameworks and theories. A researcher seldom begins his or her undertakings, even in the exploratory phase, with a completely open mind (Lund 2014:231). This means that the researcher's representations (a and c, see above) of the narrator's representation(s) (b, see above) are related to the theory-led frames of inquiry that the researcher organises to understand the world. These frames are typically also verbal, as “[t]his framing establishes, and is established through, the language we employ to speak about our concerns” (Lund 2014:226).

In particular for the collection of qualitative data, the analytical move is composed of two main movements: during and after the data collection process. During the data collection process, when adopting a dialogical interviewing style, the researcher should take care to keep the interview theory-independent (see above). First, this means that the interviewee is not asked to rise to the researcher’s analytical level. The use of jargon should be avoided, in narrative as well as semi-structured interviews and questionnaires, because it would confine the narrator’s representation(s) (b) within the listener’s interpretative frames (a), and hence limit the chance for the researcher to gain in-depth case knowledge (c). Silverman (2017: 154) cautions against “flooding” interviewees with “social science categories, assumptions and research agendas”. Footnote 9 In example 1 above, the use of the words “governance actors” might have misled the narrator, even an expert one, since their meaning might not be clear or might not be the same as the interviewer’s.

Second, the researcher should neither sympathise with the interviewee nor judge the narrator’s statements, because this would transform the interview into another type of social interaction, such as a conversation, an interrogation or a confession (La Mendola 2009 ). The analytical move requires that the researcher does not confuse the interview as social interaction with his or her analysis of the data, because this is a specific, separate moment after the interview is concluded. Whatever material or stories a researcher receives during the interviews, it is eventually up to him or her to decide which representation(s) will be told (and how) (Stake 2005 :456). It is the job of the researcher to perform the necessary analytical work on the collected data.

After the fieldwork, the second stage of the analytical move is a change of state of the interviewees’ replies and stories so that they can subsequently “feed” into QCA. The researcher begins to qualitatively assess and organise the in-depth knowledge, in the form of replies or stories, received from the interviewees through their narrations. This usually involves the (double-)coding of the qualitative material, manually or through the use of dedicated software. The analysis of the qualitative material organises the in-depth knowledge gained through the relational move and sustains the (re)definition of the outcome and conditions, and their related attributes and sub-dimensions, for performing QCA.
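One possible way to support this change of state is to keep the coded segments in a simple case-by-condition structure, so that the evidence behind each candidate condition can be reviewed per case before any score is assigned. The sketch below is our own illustration; the code labels are hypothetical and the excerpts paraphrase the interview examples discussed in this article.

```python
# A hypothetical sketch of the post-fieldwork stage of the analytical move:
# coded interview segments are grouped by case and candidate condition, so the
# researcher can review the evidence per condition before any calibration.

coded_segments = [
    # (case, code/candidate condition, interview, excerpt or analytic memo)
    ("Ørestad",   "external_events", "CPH-5", "plan drawn up after the fall of the Berlin Wall"),
    ("Part-Dieu", "external_events", "LYO-4", "military barracks 15 min from the centre reconverted"),
    ("Part-Dieu", "national_policy", "LYO-4", "part of a major national policy of 8 metropolises"),
]

evidence_by_case = {}
for case, condition, interview, excerpt in coded_segments:
    evidence_by_case.setdefault(case, {}).setdefault(condition, []).append(
        f"[{interview}] {excerpt}"
    )

for case, conditions in evidence_by_case.items():
    print(case)
    for condition, excerpts in conditions.items():
        print("  ", condition, "->", excerpts)
```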

In recognising the difficulty of integrating qualitative (interview) data into QCA procedures, QCA researchers have developed templates, tables or tree diagrams to structure the analysed qualitative material into set membership scores (Basurto and Speer 2012; Legewie 2017; Tóth et al. 2017; see also online supplementary material). We call these different templates “Supports for Membership Representation” (SMeRs) because they facilitate the passage from conceptualisation (analytical move) to operationalisation into set membership values (membership move). Below, we discuss these templates by placing them along a continuum from “more theory-driven” to “more data-driven” (see Gerrits and Verweij 2018, ch. 1). Although the studies included below did not use a dialogical approach to interviews, we also examine the SMeRs in terms of their openness towards the collected material. As explained above, we believe it is this openness, at best “dialogical”, that facilitates the development of case intimacy on the side of the researcher. In what follows, we differentiate the analytical and membership moves by distinguishing the steps characterising each of them (see Sect. 2 above).

Basurto and Speer (2012) were the first to develop and present a preliminary but modifiable list of theoretical dimensions for conditions and outcome. Their interview guideline is purposely designed to obtain responses that can be matched to fuzzy sets and to anchor points identified prior to the interviews. In our perspective, this contravenes the separation between the relational and analytical moves: the researcher deals with interviewees as “objects” whose shared information is fitted to the researchers’ analytical framework. In their analytical move, Basurto and Speer define an ideal and a deviant case, both of them non-observable, to locate their cases by comparison and facilitate the assignment of fuzzy-set membership scores (membership move).

Legewie (2017) proposes a “grid” called Anchored Calibration (AC), building on Goertz (2006). In the analytical move, the researcher first structures (sub-)dimensions for each condition and the outcome by means of concept trees. Each concept is then represented by a gradation, which should form a conceptual continuum (e.g. from low to high) and is organised in a tree diagram to include sub-dimensions of the conditions and outcome. In the membership move, anchor points are assigned to each “graded” concept (i.e. 0, 0.25, 0.75, 1). The researcher then iteratively matches coded evidence from narrative interviews (analytical move) to the identified anchor points for calibration, thus assigning set membership scores (e.g. 0.33 or 0.67; i.e. membership move). As with Basurto and Speer (2012), the analytical framework of the researcher is given priority and tightly structures the collected data. Key to anchored calibration is the conceptual neatness of the SMeR, which is advantageous for the researcher but which, in our perspective, allows only a limited dialogue with the cases and hence limits the development of case intimacy.

An alternative route is the one proposed by Tóth et al. (2017). The authors devise the Generic Membership Evaluation Template (GMET) as a “grid” in which qualitative information from the interviews (e.g. quotes) and from the researcher’s interpretative process is included. In the analytical move, their template clearly serves as a “translation support” for representing “meanings” as “numbers”: researchers include information on how they interpreted the evidence (e.g. the positive/negative direction or effect on membership of a certain attribute; i.e. analytical move), as well as an explanation of why specific set membership scores have been assigned to cases (i.e. membership move). Tóth et al.’s (2017) SMeR appears more open to the interviewees’ perspective, as the researchers engaged in a mixed-method research process in which the moment of data collection, the relational move, is elaborated on (Tóth et al. 2015). We find their approach more effective for gaining in-depth knowledge of cases and for supporting the dialogue between theory and data.
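To make the idea of a SMeR more tangible, the sketch below outlines one possible data structure for a GMET-style record, loosely inspired by the elements Tóth et al. (2017) describe: the quotes supporting a judgement, the researcher’s interpretation and its direction, and the assigned score with its justification. The field names and the filled-in example are ours, not taken from their template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MembershipRecord:
    """One possible structure for a GMET-style record (field names are ours):
    it couples the qualitative evidence and the researcher's interpretation
    with the set membership score eventually assigned to the case."""
    case: str
    condition: str
    quotes: List[str] = field(default_factory=list)  # evidence from interviews
    interpretation: str = ""                          # analytical move
    direction: str = ""                               # e.g. "raises membership"
    score: float = 0.0                                # membership move (0-1)
    justification: str = ""                           # why this score, not another

record = MembershipRecord(
    case="Scharnhauserpark",
    condition="external_events",
    quotes=["When the American Army left the site in 1992 ..."],
    interpretation="A fully exogenous event triggered the redevelopment.",
    direction="raises membership",
    score=1.00,
    justification="Unforeseen, punctual event that directly opened the site.",
)
print(record)
```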

Jopke and Gerrits (2019) discuss routines, concrete procedures and recommendations on how to inductively interpret and code qualitative interview material for subsequent calibration by using a grounded-theory approach. In their analytical move, the authors show how conditions can be constructed from the empirical data collected from interviews; they suggest first performing an open coding of the interview material and then continuing with a theoretical coding (or “closed coding”) informed by the categories identified in the previous open coding procedure, before defining set membership scores for cases (i.e. membership move). Similar to Tóth et al. (2017), Jopke and Gerrits’ (2019) SMeR engages with the data collection and the gathered qualitative material by being open to what the “data” have to “tell”, hence implementing a strategy for data analysis that is effective for gaining in-depth knowledge of cases.

Another type of SMeR is the elaboration of summaries of the interview material by unit of analysis (e.g. urban transformations, participation initiatives, interviewees’ individual career paths). Rihoux and Lobe (2009) propose so-called short case descriptions (SCDs). Footnote 10 As a possible step within the interpretative spiral available to the researcher, SCDs are concise summaries that effectively synthesise the most important information, sorted by certain identified dimensions, which will then compose the conditions, and their sub-dimensions, for QCA. As a type of SMeR, the summaries consist of a change of state of the qualitative material, because they provide “intermediate” information on the threshold between the coding of the interview transcripts and the subsequent assignment of membership scores (the membership move, or calibration) for the outcome and each condition. Furthermore, the writing of short summaries appears particularly useful for researchers who have already performed narrative interviews and wish to evaluate whether to carry out QCA as a systematic method for comparative analysis. For instance, similar to what Tóth et al. (2017: 200) did to reduce interview bias, in our own research interviewees could cover the development of multiple cases, and the use of short summaries helped us compare information per case across multiple interviewees and spot possible contradictions.

The overall advantage of SMeRs is that they help researchers gain an overview of the quality and “patchiness” of the available information about the cases per interview (or document). SMeRs can also help spot inconsistencies and contradictions, thus guiding researchers in judging whether their data can provide sufficiently homogeneous information for the conditions and outcome composing their QCA model. This is particularly relevant in case-based QCA research, where descriptive inferences are drawn from the material collected on the selected cases and depend on the degree of its internal validity (Thomann and Maggetti 2017: 361). Additionally, the “quality” and “quantity” of the available qualitative data (de Block and Vis 2018) can be checked before embarking on QCA.

For the membership move, the GMET, the AC, grounded-theory coding and short summaries support the qualitative assignment of set membership values from empirical interview data. SMeRs typically include an explanation of why a certain set membership score has been assigned to each case record, and diagrammatically arrange information about the interpretation path that researchers have followed to attribute values. They are hence a true “interface” between qualitative empirical data (“words/meaning”) and set membership values (“numbers”). Each dimension included in SMeRs can also be coupled with direct quotes from the interviews (Basurto and Speer 2012; Tóth et al. 2017).

In our own research (Pagliarin et al. 2019), after having coded the interview narratives, we developed concepts and conditions first by comparing the gathered information through short summaries, similar to the short case descriptions (SCDs) of Rihoux and Lobe (2009), and then by structuring the conditions and indicators in a grid, adapting the template proposed by Tóth et al. (2017). One of the goals of our research was to identify “external factors or events” affecting the formulation and development of large-scale urban transformations. External (national and international) events (e.g. a failed or winning bid for the Olympic Games, the fall of the Iron Curtain/Berlin Wall) do not have an effect per se, but they stimulate actors locally to make a certain decision about project implementation. We were able to gain this knowledge because we adopted a dialogical interviewing style (see Example 3 below). When the narrator is invited to tell us about some of the most relevant projects of urban transformation in Greater Copenhagen in the past 25–30 years, he or she is free to mention the main factors and actors impacting on Ørestad as an urban transformation.

Example 3 [L]: In this interview, I would propose that you tell me about some of the most relevant projects of urban transformation that have been materialized in Greater Copenhagen in the past 25–30 years. I would like you to tell me about their itinerary of development, step by step, and if possible from where the idea of the project emerged. [N]: Okay, I will try to start in the 80’s. In the 80’s, there was a decline in the city of Copenhagen. (…) In the end of the 80’s and the beginning of the 90’s, there was a political trend. They said, “We need to do something about Copenhagen. It is the only big city in Denmark so if we are going to compete with other cities, we have to make something for Copenhagen so it can grow and be one of the cities that can compete with Amsterdam, Hamburg, Stockholm and Berlin”. I think also it was because of the EU and the market so we need to have something that could compete and that was the wall falling in Berlin. (…) The Berlin Wall, yes. So, at that time, there was a commission to sit down with the municipality and the state and they come with a plan or report. They have 20 goals and the 20 goals was to have a bridge to Sweden, expanding of the airport, a metro in Copenhagen, investment in cultural buildings, investment in education. (…) In the next 5 years, from the beginning of the 90’s to the middle of the 90’s, there were all of these projects more or less decided. (…) The state decided to make the airport, to make the bridge to Sweden, to make… the municipality and the city of Copenhagen decides to make Ørestad and the metro together with the state. So, all these projects that were lined up on the report, it was, let’s decide in the next 5 years. [L]: So, there was a report that decided at the end of the 80’s and in the 90’s…? [N]: Yes, ‘89. (…) To make all these projects, yes. (…). [L]: Actually, one of the projects I would like you to tell me about is the Ørestad. R: Yes. It is the Ørestad. The Ørestad was a transformation… (…).

The factors mentioned by the interviewee corresponded to the main topics of interest to the researcher. In this example, we can also highlight the presence of a “prompt” (Leech 2002) or “clue” (La Mendola 2009). To keep the narrator on focus, the researcher “brings back” (the original meaning of rapporter) the interviewee to the main issues of the inter-view by asking “So, there was a report…”.

Following the question formulation shown in example 3, below we compare the external event(s) impacting the cases of Lyon Part-Dieu in France (Example 4) and Scharnhauserpark in Stuttgart, Germany (Example 5).

Example 4 [N]: So, Part-Dieu is a transformation of the 1970s, to equip [Lyon] with a Central Business District like almost all Western cities, following an encompassing regional plan. This is however not local planning, but it is part of a major national policy. (…) To counterbalance the macrocephaly of Paris, 8 big metropolises were identified to re-balance territorial development at the national level in the face of Paris. (…) including Lyon. (…) The genesis of Part-Dieu is, in my opinion, a real-estate opportunity, and the fact to have military barracks in an area (…) 15 min away from the city centre (…) to reconvert in a business district. Footnote 11
Example 5 [N]: When the American Army left the site in 1992, the city of Ostfildern consisted of five villages. They bought the site and they said, “We plan and build a new centre for our village”, because these are five villages and this is in the very centre. It’s perfectly located, and when they started they had 30,000 inhabitants and now that it’s finished, they have 40,000, so a third of the population were added in the last 20 years by this project. For a small municipality like Ostfildern, it was a tremendous effort and they were pretty good at it. Footnote 12

In the examples above, Lyon Part-Dieu and Scharnhauserpark are unique cases that developed into areas with different functions (a business district and a mixed-use area), but we can identify a similar event: the unforeseen dismantling of military barracks. Both events were considered external factors, identifiable at a precise point in time, that triggered the redevelopment of the areas. Instead, in the following illustration about the “Confluence” urban renewal in Lyon, the identified external event relates to a global trend regarding post-industrial cities and the “patchwork” replacement of functions in urban areas:

Example 6 [N]: The Confluence district (…) the wholesale market dismantles and opens an opportunity at the south of the Presqu'Île, so an area extremely well located, we are in the city centre, with water all around because of the Saône and Rhône rivers, so offering a great potential for a high quality of life. However, I say “potential” because there is also a highway passing at the boundary of the neighbourhood. Footnote 13

Although our theoretical framework identified a set of exogenous factors affecting large-scale urban transformations locally, we used the empirical material from our interviews to conceptualise the closing of military barracks and the dismantling of the wholesale market as two different, but similar, types of external events, and considered them to be part of the same “external events” condition. In set-theoretic terms, this condition is defined as the “set of projects where external (unforeseen) events or general/international trends had a large impact on project implementation”. The relatively broad set conceptualisation of this condition is possibly not optimal, as it reflects the tension in comparative research between capturing cases’ individual histories (case idiosyncrasies) and formulating concepts that are abstract “enough” to account for cross-case patterns (see Gerrits and Verweij 2018; Jopke and Gerrits 2019). This is a key challenge of the analytical move.

However, the core of the subsequent membership move is precisely to perform a qualitative assessment that captures these differences by assigning different set-membership values. In the case of Lyon Confluence, where the closing of the wholesale market as an external event did happen but had only a “general” influence on the area’s redevelopment, the case was assigned a set membership value of 0.33 in this condition. In contrast, the case of Lyon Part-Dieu was given a set membership score of 0.67 in the condition “external events”, because the dismantling of a French military area was combined with a national strategy of the French state to redistribute territorial development across France. According to our analysis of the collected qualitative material, the dismantling of the military area was an advantage, but the redevelopment of Part-Dieu would probably have been affected by the overall national territorial strategy anyway. Footnote 14 Finally, the Stuttgart Scharnhauserpark case was given full membership (1.00) in the condition, because the US army left the area, a “fully exogenous” event that truly stimulated urban change in Scharnhauserpark. Footnote 15
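The assignment just described can be re-expressed compactly as a mapping from qualitative assessments to a four-value fuzzy scale. The verbal labels below are our own shorthand for the interpretations reported in the text; only the scores (0.33, 0.67 and 1.00) are taken from our calibration.

```python
# A sketch that re-expresses the assignments described above: the qualitative
# assessment of how strongly external events impacted each project is mapped
# onto a four-value fuzzy scale. Labels are shorthand; scores follow the text.

FOUR_VALUE_SCALE = {
    "no external event":                            0.00,
    "external event with general influence":        0.33,  # Lyon Confluence
    "external event combined with other drivers":   0.67,  # Lyon Part-Dieu
    "fully exogenous, decisive event":              1.00,  # Scharnhauserpark
}

assessments = {
    "Lyon Confluence":  "external event with general influence",
    "Lyon Part-Dieu":   "external event combined with other drivers",
    "Scharnhauserpark": "fully exogenous, decisive event",
}

external_events = {case: FOUR_VALUE_SCALE[label] for case, label in assessments.items()}
print(external_events)  # {'Lyon Confluence': 0.33, 'Lyon Part-Dieu': 0.67, 'Scharnhauserpark': 1.0}
```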

Our calibration (membership move) of the three cases illustrated in Examples 4, 5 and 6 shows that set membership values represent a concept, at times a relatively broad one so as to allow comparison (analytical move), but that they do not replace the specific way (or “meaning”) in which the impact of external factors empirically instantiates in each of the cases discussed in the above examples.

In the interpretative spiral (Fig. 1), there is hence, despite our wishes, no perfect correspondence between meanings and numbers (quantitizing) or between numbers and meanings (qualitizing; see Sandelowski et al. 2009). This is a consequence of the constructed nature of social data (see Sect. 2). When using qualitative data, fuzzy sets are “interpretive tools” to operationalise theoretical concepts (Ragin 2000: 162, original emphasis) and hence are approximations to reality. In other words, set membership values are tokens. Here, we agree with Sandelowski et al. (2009), who are critical of “the rhetorical appeal of numbers” (p. 208) and the vagaries of ordinal categories in questionnaires (p. 211ff).

Note that calibration by using qualitative data is not blurry or unreliable. On the contrary, its robustness is given by the quality of the dialogue established between researcher and interviewee and by the acknowledgement that the analytical and membership moves are types of representation –as fourth and fifth representation loops. It might hence be possible that QCA researchers using qualitative data have a different research experience of QCA as a research approach and method than QCA researchers using quantitative data.

5 Conclusion

In this study, we critically observed that, so far, qualitative data have been used in few QCA studies, and that only a handful of these use narrative interviews (de Block and Vis 2018). This situation is puzzling because qualitative research methods can offer an effective route to gain access to in-depth case knowledge, or case intimacy, considered key to performing QCA.

Besides the higher malleability of quantitative data for set conceptualisation and calibration (here called “analytical” and “membership” moves), we claimed that the limited use of qualitative data in QCA applied research depends on the failure to recognise that the data collection process is a constituent part of QCA “as a research approach”. Qualitative data, such as interviews, focus groups or documents, come in verbal form–hence, less “ready” for calibration than quantitative data–and require a research phase on their own for data collection (here called the “relational move”). The relational, analytical and membership moves form an “interpretative spiral” that hence accounts for the main research phases composing QCA “as a research approach”.

In the relational move, we showed how researchers can gain access to in-depth case-based knowledge, or case intimacy, by adopting a “dialogical” interviewing style (La Mendola 2009). First, researchers should be aware of the discrepancy between the interviewee/narrator’s representation and the interviewer/listener’s. Second, researchers should establish an “I-thou” relationship with the narrator (Buber 1923/2010; La Mendola 2009). As in a dancing couple, the interviewer/listener should accompany, but not lead, the narrator in the unfolding of her story. These are fundamental routes to make the most of QCA’s qualitative potential as a “close dialogue with cases” (Ragin 2014: 81).

In the analytical and membership moves, researchers code, structure and interpret their data to assign crisp- and fuzzy-set membership values. We examined the variety of templates–what we call Supports for Membership Representation (SMeRs)–designed by QCA-researchers to facilitate the assignment of “numbers” to “words” (Rihoux and Lobe 2009 ; Basurto and Speer 2012 ; Legewie 2017 ; Tóth et al. 2015 , 2017 ; Jopke and Gerrits 2019 ).

Our study did not offer an overarching examination of the research process involved in QCA, but critically focussed on a specific aspect of QCA as a research approach. We focussed on the “translation” of data collected through qualitative research methods (“words” and “meanings”) into set membership values (“numbers”). Hence, in this study the discussion of QCA as a method has been limited.

We hope our paper is a first contribution towards identifying and critically examining the “qualitative” character of QCA as a research approach. Further research could identify other relevant moves in QCA as a research approach, especially when non-numerical data are employed and with regard to internal and external validity. Other moves and steps could also be identified or clearly labelled in QCA as a method, in particular when assessing limited diversity and skewness (e.g. a “data distribution” step) and the management of true logical contradictions (e.g. a “solving contradictions” move). These are all different mo(ve)ments in the full-fledged application of QCA that allow researchers to make sense of their data and to connect “theory” and “evidence”.

As also noted by de Block and Vis (2018), QCA researchers are not always clear about what exactly they mean by “in-depth” or “open” interviews and how these informed the calibration process (e.g. Verweij and Gerrits 2015), especially when quantitative data and different coders were also used (e.g. Chai and Schoon 2016).

See online appendix.

We are aware that other studies combining narrative interviews and QCA have been carried out, but here we limit our discussion only to already published articles that we are aware of at the time of writing.

Without going into further details on this occasion, the term “dialogical” explicitly refers to the “dialogical epistemology” as discussed by Buber ( 1923 /2008) who distinguishes between an “I-thou” relation and an “I-it” experience. In this perspective, “dialogical” is considered as a synonym of “relational” (i.e. “I-thou” relation).

See footnote 4.

The interviewer avoids posing evaluative and typifying questions to the narrator, but the former naturally works through evaluative and typifying research questions.

Copenhagen, Interview 5, September 1, 2016.

Barcelona, Interview 1, June 27, 2016. Translated from the original Spanish.

We take the risk of quoting Silverman (2017), although in his article he warns against extracting and using quotes to support researchers' arguments.

Gerrits and Verweij ( 2018 ) also emphasise the usefulness of thick case descriptions.

Lyon, Interview 4, October 13, 2016. Translated from the original French.

Stuttgart, Interview 1, July 18, 2016.

Lyon, Interview 1, October 11, 2016. Translated from the original French.

This consideration also relates to the interdependence, and not necessarily independence, of conditions in QCA, which is a topic that is beyond the scope of this study (see e.g. Jopke and Gerrits 2019 ).

For a discussion regarding the “absence” of possible factors from the interviewees' narrations, we refer readers to Sandelowski et al. ( 2009 ) and de Block and Vis ( 2018 ). In general, data triangulation is a good strategy to deal with partial and even contradictory information collected from multiple interviewees. For our own strategy regarding data triangulation, we also used an online questionnaire, additional literature and site visits (Pagliarin et al. 2019 ).

References

Basurto, X., Speer, J.: Structuring the calibration of qualitative data as sets for qualitative comparative analysis (QCA). Field Methods 24, 155–174 (2012)


Becker, H.S.: Tricks of the Trade: How to Think About Your Research While You’re Doing It. University Press, Chicago (1998)


Brinkmann, S.: Unstructured and Semistructured Interviewing. In: Leavy, P. (ed.) The Oxford Handbook of Qualitative Research, pp. 277–300. University Press, Oxford (2014)


Buber, M.: I and Thou. Charles Scribner's Sons, New York (1923/2010)

Byrne, D.: Complexity configurations and cases. Theory Cult. Soc. 22 (5), 95–111 (2005)

Chai, Y., Schoon, M.: Institutions and government efficiency: decentralized irrigation management in China. Int. J. Commons 10(1), 21–44 (2016)

Chase, S.E.: Narrative inquiry: multiple lenses, approaches, voices. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage Handbook of Qualitative Research, pp. 631–679. Sage, Thousand Oaks, CA (2005)

Collier, D.: Symposium: The Set-Theoretic Comparative Method—Critical Assessment and the Search for Alternatives. SSRN Scholarly Paper ID 2463329. Social Science Research Network, Rochester, NY (2014). https://papers-ssrn-com.eur.idm.oclc.org/abstract=2463329 . Accessed 9 March 2021

de Block, D., Vis, B.: Addressing the challenges related to transforming qualitative into quantitative data in qualitative comparative analysis. J. Mixed Methods Res. 13 , 503–535 (2018). https://doi.org/10.1177/1558689818770061

Fischer, M.: Institutions and coalitions in policy processes: a cross-sectoral comparison. J. Publ. Policy 35 , 245–268 (2015)

Geertz, C.: The Interpretation of Cultures. Basic Books, New York (1973)

Gerrits, L., Verweij, S.: The Evaluation of Complex Infrastructure Projects. A Guide to Qualitative Comparative Analysis. Edward Elgar, Cheltenham UK (2018)

Goertz, G.: Social Science Concepts. A User’s Guide. University Press, Princeton (2006)

Greckhamer, T., Misangyi, V.F., Fiss, P.C.: The two QCAs: from a small-N to a large-N set theoretic approach. In: Fiss, P.C., Cambré, B., Marx, A. (eds.) Configurational Theory and Methods in Organizational Research (Research in the Sociology of Organizations, vol. 38), pp. 49–75. Emerald Group Publishing, Bingley (2013). https://doi.org/10.1108/S0733-558X(2013)0000038007

Guba, E.G., Lincoln, Y.S.: Paradigmatic controversies, contradictions and emerging confluences. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage Handbook of Qualitative Research, pp. 191–215. Sage, Thousand Oaks, CA (2005)

Harvey, D.L.: Complexity and case. In: Byrne, D., Ragin, C.C. (eds.) The SAGE Handbook of Case-Based Methods, pp. 15–38. SAGE Publications Inc, London (2009)


Henik, E.: Understanding whistle-blowing: a set-theoretic approach. J. Bus. Res. 68 , 442–450 (2015)

Jopke, N., Gerrits, L.: Constructing cases and conditions in QCA – lessons from grounded theory. Int. J. Soc. Res. Methodol. 22(6), 599–610 (2019). https://doi.org/10.1080/13645579.2019.1625236

La Mendola, S.: Centrato e Aperto: dare vita a interviste dialogiche [Centred and Open: Give life to dialogical interviews]. UTET Università, Torino (2009)

Latour, B.: When things strike back: a possible contribution of ‘science studies’ to the social sciences. Br. J. Sociol. 51 , 107–123 (2000)

Leech, B.L.: Asking questions: Techniques for semistructured interviews. Polit. Sci. Polit. 35 , 665–668 (2002)

Legard, R., Keegan, J., Ward, K.: In-depth interviews. In: Richie, J., Lewis, J. (eds.) Qualitative Research Practice, pp. 139–168. Sage, London (2003)

Legewie, N.: Anchored calibration: from qualitative data to fuzzy sets. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research 18(3), 14 (2017). https://doi.org/10.17169/fqs-18.3.2790

Lucas, S.R., Szatrowski, A.: Qualitative comparative analysis in critical perspective. Sociol. Methodol. 44 (1), 1–79 (2014)

Metelits, C.M.: The consequences of rivalry: explaining insurgent violence using fuzzy sets. Polit. Res. q. 62 , 673–684 (2009)

Pagliarin, S., Hersperger, A.M., Rihoux, B.: Implementation pathways of large-scale urban development projects (lsUDPs) in Western Europe: a qualitative comparative analysis (QCA). Eur. Plan. Stud. 28 , 1242–1263 (2019). https://doi.org/10.1080/09654313.2019.1681942

Ragin, C.C.: The Comparative Method. Moving Beyond Qualitative and Quantitative Strategies. University of California Press, Berkeley and Los Angeles (1987)

Ragin, C.C.: Fuzzy-Set Social Science. University Press, Chicago (2000)

Ragin, C.C.: Redesigning Social Inquiry. Fuzzy Sets and Beyond. University Press, Chicago (2008a)

Ragin, C.C.: Fuzzy sets: calibration versus measurement. In: Collier, D., Brady, H., Box-Steffensmeier, J. (eds.) Methodology Volume of Oxford Handbooks of Political Science, pp. 174–198. University Press, Oxford (2008b)

Ragin, C.C.: Comment: Lucas and Szatrowski in Critical Perspective. Sociol. Methodol. 44 (1), 80–94 (2014)

Rapley, T.J.: The art(fulness) of open-ended interviewing: some considerations on analysing interviews. Qual. Res. 1(3), 303–323 (2001)

Rihoux, B., Ragin, C. (eds.): Configurational Comparative Methods. Qualitative Comparative Analysis (QCA) and related Techniques. Sage, Thousand Oaks, CA (2009)

Rihoux, B., Lobe, B.: The case for qualitative comparative analysis (QCA): adding leverage for thick cross-case comparison. In: Byrne, D., Ragin, C.C. (eds.) The SAGE Handbook of Case-Based Methods, pp. 222–242. SAGE Publications Inc, London (2009)

Roulston, K.: Qualitative interviewing and epistemics. Qual. Res. 18 (3), 322–341 (2018)

Sandelowski, M., Voils, C.I., Knafl, G.: On quantitizing. J. Mixed Methods Res. 3 , 208–222 (2009)

Sayer, A.: Method in Social Science. A Realist Approach. Routledge, London (1992)

Schneider, C.Q., Wagemann, C.: Set-Theoretic Methods for the Social Sciences. A Guide to Qualitative Comparative Analysis. University Press, Cambridge (2012)

Silverman, D.: How was it for you? The Interview Society and the irresistible rise of the (poorly analyzed) interview. Qual. Res. 17 (2), 144–158 (2017)

Spencer, R., Pryce, J.M., Walsh, J.: Philosophical approaches to qualitative research. In: Leavy, P. (ed.) The Oxford Handbook of Qualitative Research, pp. 81–98. University Press, Oxford (2014)

Spradley, J.P.: The ethnographic interview. Holt Rinehart and Winston, New York (1979)

Stake, R.E.: Qualitative case studies. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage Handbook of Qualitative Research, pp. 443–466. Sage, Thousand Oaks, CA (2005)

Thomann, E., Maggetti, M.: Designing research with qualitative comparative analysis (QCA): approaches, challenges, and tools. Sociol. Methods Res. 49 (2), 356–386 (2017)

Tóth, Z., Thiesbrummel, C., Henneberg, S.C., Naudé, P.: Understanding configurations of relational attractiveness of the customer firm using fuzzy set QCA. J. Bus. Res. 68 (3), 723–734 (2015)

Tóth, Z., Henneberg, S.C., Naudé, P.: Addressing the ‘qualitative’ in fuzzy set qualitative comparative analysis: the generic membership evaluation template. Ind. Mark. Manage. 63 , 192–204 (2017)

Verweij, S., Gerrits, L.M.: How satisfaction is achieved in the implementation phase of large transportation infrastructure projects: a qualitative comparative analysis into the A2 tunnel project. Public W. Manag. Policy 20 , 5–28 (2015)

Wang, W.: Exploring the determinants of network effectiveness: the case of neighborhood governance networks in Beijing. J. Public Adm. Res. Theory 26 , 375–388 (2016)


Acknowledgements

The authors thank the two reviewers whose insightful and careful remarks allowed us to improve the quality of the manuscript. During a peer-review process lasting more than two years, we keenly felt the advances, slowdowns, and occasional impasses of a fruitful dialogue on the qualitative and quantitative aspects of comparative analysis in the social sciences.

Open Access funding enabled and organized by Projekt DEAL. This research was partially funded through the Consolidator Grant (ID: BSCGIO 157789) held by Prof. h. c. Dr. Anna M. Hersperger and provided by the Swiss National Science Foundation.

Author information

Authors and Affiliations

Barbara Vis: Utrecht University School of Governance, Utrecht University, Utrecht, The Netherlands

Salvatore La Mendola: Department of Philosophy, Sociology, Pedagogy and Applied Psychology, Padua University, Padua, Italy

Sofia Pagliarin: Chair for the Governance of Complex and Innovative Technological Systems, Otto-Friedrich-Universität Bamberg, Bamberg, Germany; Landscape Ecology Research Unit, CONCUR Project, Swiss Federal Research Institute WSL, Birmensdorf, Zurich, Switzerland

Corresponding author

Correspondence to Sofia Pagliarin.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 21 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Pagliarin, S., La Mendola, S. & Vis, B. The “qualitative” in qualitative comparative analysis (QCA): research moves, case-intimacy and face-to-face interviews. Qual Quant 57 , 489–507 (2023). https://doi.org/10.1007/s11135-022-01358-0


Accepted: 20 February 2022

Published: 26 March 2022

Issue Date: February 2023

DOI: https://doi.org/10.1007/s11135-022-01358-0


Keywords

  • Calibration
  • Data generation
  • Interviewing
  • In-depth knowledge
  • Qualitative data


COMMENTS

  1. Comparative Case Studies: An Innovative Approach

    In this article, we argue for a new approach—the comparative case study approach—that attends simultaneously to macro, meso, and micro dimensions of case-based research. The approach engages ...

  2. Approaches to Qualitative Comparative Analysis and good practices: A

    As applied qualitative (case-based) and mixed-methods research combining different analytical methods in the social sciences is not yet as standardized as research in the quantitative tradition, it is striving to become ever more methodologically sophisticated, rigorous, and transparent (Adcock & Collier, 2001; Brady & Collier, 2010; Mahoney ...

  3. Comparative Case Studies: Methodological Discussion

    In the past, comparativists have oftentimes regarded case study research as an alternative to comparative studies proper. At the risk of oversimplification: methodological choices in comparative and international education (CIE) research, from the 1960s onwards, have fallen primarily on either single country (small n) contextualized comparison, or on cross-national (usually large n, variable ...

  4. PDF Comparing the Five Approaches

    Case study research has experienced growing recognition during the past 30 years, evidenced by its more frequent application in published research and increased availability of reference works (e.g., Thomas, 2015; Yin, 2014). Encouraging the use of case study research is an expressed goal of the editors of the recent Encyclopedia of Case Study ...

  5. case selection and the comparative method: introducing the case

    The first of these problems is far less of a concern today. A robust conversation about methodology has been at the heart of the discipline for more than two decades, and tremendous progress has been made in qualitative, quantitative, and formal methodologies. While the push toward more sophisticated qualitative research designs has somewhat displaced the comparative method (see Brady and ...

  6. Case Study Methodology of Qualitative Research: Key Attributes and

    A case study is one of the most commonly used methodologies of social research. This article examines the various dimensions of a case study research strategy, the epistemological strands that determine the particular case study type and approach adopted in the field, the factors that can enhance the effectiveness of case study research, and the debate ...

  7. Comparison in Qualitative Research

    However, with either logic of comparison, three dangers merit attention: decontextualization, commensurability, and ethnocentrism. One promising research heuristic that attends to different logics of comparison while avoiding these dangers is the comparative case study (CCS) approach. CCS entails three axes of comparison.

  8. A Practical Guide to the Comparative Case Study Method in ...

    KEY WORDS: case study, comparative method, research design. ... using qualitative research methods, of a single social phenomenon" (p. 2). Lijphart (1971, 1975) saw the case study as a single case that is closely associated with the comparative method, as contrasted with experimental and statistical methods. George and

  9. Case selection and causal inferences in qualitative comparative research

    Case-selection and qualitative comparisons. Methodological advice on the selection of cases in qualitative research stands in a long tradition. John Stuart Mill in his A System of Logic, first published in 1843, proposed five methods meant to enable researchers to make causal inferences: the method of agreement, the method of difference, the double method of agreement and difference, the ...

  10. Comparative Case Study Research

    Summary. Case studies in the field of education often eschew comparison. However, when scholars forego comparison, they are missing an important opportunity to bolster case studies' theoretical generalizability. Scholars must examine how disparate epistemologies lead to distinct kinds of qualitative research and different notions of comparison.

  11. (PDF) Qualitative Comparative Analysis: An Introduction to Research

    Considering the detailed qualitative case studies that inform this study's QCA design, I use a demanding consistency cutoff of 0.9. This procedure does not predetermine the result, which still ...

  12. Rethinking Case Study Research

    Comparative case studies are an effective qualitative tool for researching the impact of policy and practice in various fields of social research, including education. Developed in response to the inadequacy of traditional case study approaches, comparative case studies are highly effective because of their ability to synthesize information ...

  13. Comparative Research Methods

    Comparative Case Study Analysis. Mono-national case studies can contribute to comparative research if they are composed with a larger framework in mind and follow the Method of Structured, Focused Comparison (George & Bennett, 2005). For case studies to contribute to cumulative development of knowledge and theory, they must all explore the same ...

  14. Between context and comparability: Exploring new solutions for a

    Thus, the article discusses recent advancements in the methodology of qualitative international comparative research, connects them to older analytical methods that have been used within the field in the 1960s and 1970s, and demonstrates their analytical value based on their application to a qualitative small-N case study on research groups in ...

  15. Qualitative Comparative Analysis (QCA): A Classic Mixed ...

    Qualitative comparative analysis (QCA) and fuzzy set causality analysis are comparative case-oriented research approaches. The basic technique was developed by Charles Ragin, with key presentations by Rihoux and Ragin, Thiem and Duşa (2013a, b), Schneider and Wagemann, and Rihoux and DeMeur. A case-study comparative analysis will have a series of cases, all somewhat similar or ...

  16. Rethinking case study research: A comparative approach

    Comparative case studies are an effective qualitative tool for researching the impact of policy and practice in various fields of social research, including education. Developed in response to the inadequacy of traditional case study approaches, comparative case studies are highly effective because of their ability to synthesize information ...

  17. Qualitative Comparative Analysis

    for subjects on comparative politics and historical sociology during the late 1980s and early 1990s. Its initial purpose was to empirically examine a limited number of macrolevel phenomena that are relatively large for comparative case study (qualitative) and yet too small for statistical (quantitative) research designs.

  18. Qualitative comparative analysis

    Qualitative Comparative Analysis (QCA) is a means of analysing the causal contribution of different conditions (e.g. aspects of an intervention and the wider context) to an outcome of interest. QCA starts with the documentation of the different configurations of conditions associated with each case of an observed outcome. Two illustrative sketches of these calibration, configuration, and consistency steps appear after this list.

  19. Designing Research With Qualitative Comparative Analysis (QCA

    Martino Maggetti is an associate professor in political science at the Institute of Political, Historical and International Studies (IEPHI) of the University of Lausanne, Switzerland. His research interests mainly focus on public policy and regulatory governance. His QCA-oriented research articles have appeared in several journals, including Business & Society, Journal of Comparative Policy ...

  20. [PDF] Comparative Case Studies

    Next, we propose a new approach - the comparative case study approach - that attends simultaneously to global, national, and local dimensions of case-based research. We contend that new approaches are necessitated by conceptual shifts in the social sciences, specifically in relation to culture, context, space, place, and ...

  21. Using qualitative comparative analysis to understand and quantify

    Often, translation or implementation studies use case study methods with small sample sizes. Methodological approaches that systematize findings from these types of studies are needed to improve rigor and advance the field. Qualitative comparative analysis (QCA) is a method and analytical approach that can advance implementation science.

  22. The "qualitative" in qualitative comparative analysis (QCA): research

    Qualitative Comparative Analysis (QCA) is a configurational comparative research approach and method for the social sciences based on set-theory. It was introduced in crisp-set form by Ragin and later expanded to fuzzy sets (Ragin 2000; 2008a; Rihoux and Ragin 2009; Schneider and Wagemann 2012). QCA is a diversity-oriented approach extending “the single-case study to multiple cases with an ...

  23. COMPARATIVE RESEARCH METHODS (Chapter 15)

    There is a wide divide between quantitative and qualitative approaches in comparative work. Most studies are either exclusively qualitative (e.g., individual case studies of a small number of countries) or exclusively quantitative, most often using many cases and a cross-national focus (Ragin, 1991:7).

  24. Adopting agile in government: a comparative case study

    Future research could study the impact of the behaviour of proximate organizations as suggested by De Vries and colleagues (2018). Conclusion: Our empirical investigation of agile translations in public administrations reveals that while the initiation of a cultural shift is a pivotal starting point, the true transformative potential ...

  25. Examining Urban Governance of Shrinking Cities at the National, State

    This research explores sustainable strategies for shrinking cities in South Korea through three case studies, drawing on qualitative analysis of interviews with stakeholders from each municipality. ... Examining Urban Governance of Shrinking Cities at the National, State, and Local Level: A Comparative Case Study of Three Shrinking ...

  26. Investigating social sustainability practices in global ...

    Given the paucity of research that examines the impact of Social Sustainability Practices (SSP) on performance outcomes, this study investigates the adoption and influence of these practices on firms' and suppliers' overall performance, whilst also considering the level of complexity of operational environments of global supply networks. A Fuzzy-sets Qualitative Comparative Analysis (FsQCA ...

  27. Exploring variations in the implementation of a health system level

    Background: Despite growing literature, few studies have explored the implementation of policy interventions to reduce maternal and perinatal mortality in low- and middle-income countries (LMICs). Even fewer studies explicitly articulate the theoretical approaches used to understand contextual influences on policy implementation. This under-use of theory may account for the limited ...
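
Several of the snippets above describe the same core QCA moves: calibrating raw case measures into set-membership scores, documenting the configurations of conditions observed across the cases, and keeping only configurations whose consistency with the outcome clears a demanding cutoff such as 0.9. The Python sketch below illustrates those moves on invented data. It is a minimal teaching example under stated assumptions, not the procedure of any study cited here; all case names, conditions, thresholds, and values are hypothetical.

    # Minimal QCA sketch: (1) calibrate a raw measure into fuzzy-set membership
    # using three qualitative anchors, and (2) build a crisp truth table and
    # check each configuration's consistency against a cutoff. Illustrative only.
    import numpy as np
    import pandas as pd

    def calibrate_fuzzy(raw, full_non, crossover, full_in):
        """Map raw scores to fuzzy membership in [0, 1] with a logistic curve
        anchored at full non-membership, crossover, and full membership
        (an approximation of Ragin's direct calibration method)."""
        raw = np.asarray(raw, dtype=float)
        # scale deviations from the crossover point into log-odds of membership
        log_odds = np.where(
            raw >= crossover,
            3.0 * (raw - crossover) / (full_in - crossover),
            3.0 * (raw - crossover) / (crossover - full_non),
        )
        return 1.0 / (1.0 + np.exp(-log_odds))

    # hypothetical case-level data, e.g. five urban development projects
    cases = pd.DataFrame(
        {
            "public_funding_share": [0.15, 0.80, 0.55, 0.90, 0.10],  # raw measure
            "strong_local_coalition": [0, 1, 1, 1, 0],               # crisp condition
            "implemented_on_time": [0, 1, 1, 1, 0],                  # crisp outcome
        },
        index=["case_A", "case_B", "case_C", "case_D", "case_E"],
    )

    # (1) calibration: share of public funding -> membership in "publicly funded"
    cases["PUBLIC"] = calibrate_fuzzy(
        cases["public_funding_share"], full_non=0.2, crossover=0.5, full_in=0.8
    )
    cases["PUBLIC_crisp"] = (cases["PUBLIC"] > 0.5).astype(int)  # dichotomize

    # (2) truth table: one row per configuration of conditions, with the number
    # of cases and the share of them showing the outcome (raw consistency)
    truth_table = (
        cases.groupby(["PUBLIC_crisp", "strong_local_coalition"])["implemented_on_time"]
        .agg(n_cases="size", consistency="mean")
        .reset_index()
    )
    CONSISTENCY_CUTOFF = 0.9  # a demanding cutoff, as in the snippets above
    truth_table["passes_cutoff"] = truth_table["consistency"] >= CONSISTENCY_CUTOFF
    print(truth_table)

In applied work these steps are normally carried out with dedicated QCA software and, crucially, are followed by a return to the cases themselves; the sketch only shows the mechanical part of the procedure.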
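
The fuzzy-set variants of QCA mentioned in several snippets above rely on a subset-consistency measure rather than a simple share of cases: the degree to which membership in a condition (or configuration) is a subset of membership in the outcome, computed as the sum of case-wise minima of the two membership scores divided by the sum of the condition memberships. The short, self-contained function below sketches that calculation; the membership scores are invented for illustration and do not come from any cited study.

    # Fuzzy-set subset consistency: sum(min(x_i, y_i)) / sum(x_i).
    # Illustrative sketch with invented membership scores.
    def fuzzy_consistency(condition, outcome):
        """Consistency of 'condition is sufficient for outcome' over all cases."""
        numerator = sum(min(x, y) for x, y in zip(condition, outcome))
        denominator = sum(condition)
        return numerator / denominator if denominator else float("nan")

    # hypothetical membership scores for five cases
    condition = [0.90, 0.80, 0.70, 0.20, 0.10]   # e.g. "strongly publicly funded"
    outcome   = [0.95, 0.70, 0.80, 0.40, 0.30]   # e.g. "implemented on time"

    score = fuzzy_consistency(condition, outcome)
    print(f"consistency = {score:.2f}")  # read against a cutoff such as 0.9

A researcher would typically compute such scores with dedicated QCA software (for example, the R QCA package or the fsQCA program) and then interpret which configurations clear the chosen cutoff in light of case knowledge.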