
Published: 29 May 2014

Points of significance

Designing comparative experiments

Martin Krzywinski & Naomi Altman

Nature Methods volume 11, pages 597–598 (2014)


Good experimental designs limit the impact of variability and reduce sample-size requirements.


In a typical experiment, the effect of different conditions on a biological system is compared. Experimental design is used to identify data-collection schemes that achieve sensitivity and specificity requirements despite biological and technical variability, while keeping time and resource costs low. In the next series of columns we will use statistical concepts introduced so far and discuss design, analysis and reporting in common experimental scenarios.

In experimental design, the researcher-controlled independent variables whose effects are being studied (e.g., growth medium, drug and exposure to light) are called factors. A level is a subdivision of a factor and measures the type (if categorical) or amount (if continuous) of the factor. The goal of the design is to determine the effect and interplay of the factors on the response variable (e.g., cell size). An experiment that considers all combinations of N factors, each with n_i levels, is a factorial design of type n_1 × n_2 × ... × n_N. For example, a 3 × 4 design has two factors with three and four levels each and examines all 12 combinations of factor levels. We will review statistical methods in the context of a simple experiment to introduce concepts that apply to more complex designs.
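As a small illustration of the factorial idea (the factor names and level labels below are invented, not taken from the column), the combinations of a 3 × 4 design can be enumerated directly:

```python
# Hypothetical 3 x 4 factorial design: two factors, level labels invented
# purely for illustration; all 12 combinations are enumerated.
from itertools import product

growth_medium = ["minimal", "rich", "selective"]      # factor 1: 3 levels
light_exposure = ["dark", "low", "medium", "high"]    # factor 2: 4 levels

design = list(product(growth_medium, light_exposure))
print(len(design))    # 12 combinations of factor levels
for combination in design:
    print(combination)
```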

Suppose that we wish to measure the cellular response to two different treatments, A and B, as measured by the fluorescence of an aliquot of cells. This is a single-factor (treatment) design with three levels (untreated, A and B). We will assume that the fluorescence (in arbitrary units) of an aliquot of untreated cells has a normal distribution with μ = 10 and that the real effect sizes of treatments A and B are d_A = 0.6 and d_B = 1 (A increases the response by 6% to 10.6 and B by 10% to 11). To simulate variability owing to biological variation and measurement uncertainty (e.g., in the number of cells in an aliquot), we will use σ = 1 for the distributions. For all tests and calculations we use α = 0.05.

We start by assigning samples of cell aliquots to each level (Fig. 1a). To improve the precision (and power) in measuring the mean of the response, more than one aliquot is needed [1]. One sample will be a control (considered a level) to establish the baseline response and capture biological and technical variability. The other two samples will be used to measure the response to each treatment. Before we can carry out the experiment, we need to decide on the sample size.

Figure 1. (a) Two treated samples (A and B) with n = 17 are compared to a control (C) with n = 17 and to each other using two-sample t-tests. (b) Simulated means and P values for the samples in a. Values are drawn from normal populations with σ = 1 and mean responses of 10 (C), 10.6 (A) and 11 (B). (c) The preferred way of reporting the results shown in b, illustrating the difference in means with CIs, P values and effect size, d. All error bars show 95% CI.

We can fall back on our discussion about power [1] to suggest n. How large an effect size (d) do we wish to detect, and at what sensitivity? Arbitrarily small effects can be detected with a large enough sample size, but this makes for a very expensive experiment. We need to balance our decision on what we consider a biologically meaningful response against the resources at our disposal. If we are satisfied with an 80% chance (the lowest power we should accept) of detecting a 10% change in response, which corresponds to the real effect of treatment B (d_B = 1), the two-sample t-test requires n = 17. At this n, the power to detect d_A = 0.6 is 40%. Power calculations are easily carried out with software; typical inputs are the difference in means (Δμ), an estimate of the standard deviation (σ), α and the number of tails (we recommend always using two-tailed calculations).
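A minimal sketch of this power calculation, assuming Python with statsmodels; any power calculator that accepts an effect size, α and the desired power behaves equivalently:

```python
# Sample size and power for the two-sample t-test, two-tailed, alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Aliquots per sample needed to detect d = 1 (the 10% effect of B) with 80% power.
n = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.80,
                         alternative='two-sided')
print(round(n))       # ~17

# Power of that design to detect the smaller effect of A (d = 0.6).
power_A = analysis.solve_power(effect_size=0.6, nobs1=17, alpha=0.05,
                               alternative='two-sided')
print(round(power_A, 2))   # ~0.40
```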

Based on the design in Figure 1a, we show the simulated sample means and their 95% confidence intervals (CIs) in Figure 1b. The 95% CI captures the mean of the population 95% of the time; we recommend using it to report precision. Our results show a significant difference between B and control (referred to as B/C, P = 0.009) but not for A/C (P = 0.18). Paradoxically, testing B/A does not return a significant outcome (P = 0.15). Whenever we perform more than one test we should adjust the P values [2]. As we have only three tests, the adjusted B/C P value is still significant, P′ = 3P = 0.028. Although commonly used, the format of Figure 1b is inappropriate for reporting our results: sample means, their uncertainty and P values alone do not present the full picture.
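The simulation behind Figure 1b can be sketched as follows (a rough reconstruction under the stated assumptions, not the authors' code; because the draws are random, the P values will differ from those quoted):

```python
# Simulate n = 17 aliquots per level and compare them with two-sample t-tests,
# applying a Bonferroni correction for the three tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 17
C = rng.normal(10.0, 1, n)    # control
A = rng.normal(10.6, 1, n)    # treatment A (d = 0.6)
B = rng.normal(11.0, 1, n)    # treatment B (d = 1)

for label, x, y in [("A/C", A, C), ("B/C", B, C), ("B/A", B, A)]:
    t, p = stats.ttest_ind(x, y)
    p_adj = min(1.0, 3 * p)   # Bonferroni adjustment for three tests
    print(f"{label}: P = {p:.3f}, adjusted P' = {p_adj:.3f}")
```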

A more complete presentation of the results (Fig. 1c) combines the magnitude of the difference in means with its uncertainty (as a CI). The effect size, d, defined as the difference in means in units of the pooled standard deviation, expresses this combination of measurement and precision in a single value. The data in Figure 1c also make clearer that the difference between a significant result (B/C, P = 0.009) and a nonsignificant result (A/C, P = 0.18) is not itself always significant (B/A, P = 0.15) [3]. Significance is a hard boundary at P = α, and two arbitrarily close results may straddle it. Thus, neither significance itself nor differences in significance status should ever be used to conclude anything about the magnitude of the underlying differences, which may be very small and not biologically relevant.
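A sketch of how the quantities shown in Figure 1c can be computed for a single comparison; the helper function is ours, and the pooled standard deviation is used both for the CI of the difference and for d:

```python
# Difference in means with 95% CI, and effect size d in pooled-s.d. units.
import numpy as np
from scipy import stats

def diff_ci_and_d(x, y, alpha=0.05):
    nx, ny = len(x), len(y)
    diff = x.mean() - y.mean()
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                 / (nx + ny - 2))                   # pooled s.d.
    se = sp * np.sqrt(1 / nx + 1 / ny)              # s.e. of the difference
    tcrit = stats.t.ppf(1 - alpha / 2, nx + ny - 2)
    return diff, (diff - tcrit * se, diff + tcrit * se), diff / sp

rng = np.random.default_rng(1)
A, C = rng.normal(10.6, 1, 17), rng.normal(10.0, 1, 17)   # simulated as above
print(diff_ci_and_d(A, C))    # (difference, 95% CI, effect size d)
```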

CIs explicitly show how close we are to making a positive inference and help assess the benefit of collecting more data. For example, the CIs of A/C and B/C closely overlap, which suggests that at our sample size we cannot reliably distinguish between the response to A and B (Fig. 1c). Furthermore, given that the CI of A/C just barely crosses zero, it is possible that A has a real effect that our test failed to detect. More information about our ability to detect an effect can be obtained from a post hoc power analysis, which assumes that the observed effect is the same as the real effect (normally unknown) and uses the observed difference in means and pooled variance. For A/C, the difference in means is 0.48 and the pooled s.d. is s_p = 1.03, which yields a post hoc power of 27%; we have little power to detect this difference. Other than increasing sample size, how could we improve our chances of detecting the effect of A?
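The post hoc power quoted for A/C can be reproduced from the observed difference and pooled s.d., treating the observed effect as if it were the real one (again assuming statsmodels):

```python
# Post hoc power for A/C: observed difference 0.48, pooled s.d. 1.03, n = 17.
from statsmodels.stats.power import TTestIndPower

d_observed = 0.48 / 1.03     # observed effect size in pooled-s.d. units
power = TTestIndPower().solve_power(effect_size=d_observed, nobs1=17,
                                    alpha=0.05, alternative='two-sided')
print(round(power, 2))       # ~0.27
```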

Our ability to detect the effect of A is limited by variability in the difference between A and C, which has two random components. If we measure the same aliquot twice, we expect variability owing to technical variation inherent in our laboratory equipment and variability of the sample over time (Fig. 2a). This is called within-subject variation, σ_wit. If we measure two different aliquots with the same factor level, we also expect biological variation, called between-subject variation, σ_bet, in addition to the technical variation (Fig. 2b). Typically there is more biological than technical variability (σ_bet > σ_wit). In an unpaired design, the use of different aliquots adds both σ_wit and σ_bet to the measured difference (Fig. 2c). In a paired design, which uses the paired t-test [4], the same aliquot is used and the impact of biological variation (σ_bet) is mitigated (Fig. 2c). If differences between aliquots (σ_bet) are appreciable, variance is markedly reduced (to within-subject variation) and the paired test has higher power.

Figure 2. (a) Limits of measurement and technical precision contribute to σ_wit (gray circle), observed when the same aliquot is measured more than once. This variability is assumed to be the same in the untreated and treated conditions, with effect d on aliquots x and y. (b) Biological variation gives rise to σ_bet (green circle). (c) The paired design uses the same aliquot for both measurements, mitigating between-subject variation.

The link between σ_bet and σ_wit can be illustrated by an experiment to evaluate a weight-loss diet in which a control group eats normally and a treatment group follows the diet. A comparison of the mean weight after a month is confounded by the initial weights of the subjects in each group. If instead we focus on the change in weight, we remove much of the subject variability owing to the initial weight.

If we write the total variance as σ² = σ²_wit + σ²_bet, then the variance of the observed quantity in Figure 2c is 2σ² for the unpaired design but 2σ²(1 − ρ) for the paired design, where ρ = σ²_bet/σ² is the correlation coefficient (intraclass correlation). The relative difference between the two designs is captured by ρ, the correlation of two measurements on the same aliquot, which must be included because the measurements are no longer independent. If we ignore ρ in our analysis, we will overestimate the variance and obtain overly conservative P values and CIs. In the case where there is no additional variation between aliquots (σ_bet = 0), there is no benefit to using the same aliquot: measurements on the same aliquot are uncorrelated (ρ = 0) and the variance of the paired design is the same as that of the unpaired design. In contrast, if there is no variation in measurements on the same aliquot other than the treatment effect (σ_wit = 0), we have perfect correlation (ρ = 1). Now the difference measured on the same aliquot removes all the noise; in fact, a single pair of aliquots suffices for an exact inference. In practice, both sources of variation are present, and it is their relative size, reflected in ρ, that determines the benefit of using the paired t-test.
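A short simulation (our own illustration, with arbitrary values of σ_bet and σ_wit) confirms the two variance formulas: differences between two measurements of the same aliquot have variance 2σ²(1 − ρ), whereas differences between measurements of independent aliquots have variance 2σ².

```python
# Check var(paired difference) = 2*sigma^2*(1 - rho) and
# var(unpaired difference) = 2*sigma^2, with rho = sigma_bet^2 / sigma^2.
import numpy as np

rng = np.random.default_rng(0)
sigma_bet, sigma_wit = 0.8, 0.6
sigma2 = sigma_bet**2 + sigma_wit**2
rho = sigma_bet**2 / sigma2

n_sim = 100_000
aliquot = rng.normal(0, sigma_bet, n_sim)                 # biological component
x = aliquot + rng.normal(0, sigma_wit, n_sim)             # measurement 1, same aliquot
y = aliquot + rng.normal(0, sigma_wit, n_sim)             # measurement 2, same aliquot
z = rng.normal(0, sigma_bet, n_sim) + rng.normal(0, sigma_wit, n_sim)  # different aliquot

print(np.var(x - y), 2 * sigma2 * (1 - rho))   # paired: both ~0.72
print(np.var(x - z), 2 * sigma2)               # unpaired: both ~2.0
```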

We can see the improved sensitivity of the paired design (Fig. 3a) in the decreased P values for the effects of A and B (Fig. 3b versus Fig. 1b). With the between-subject variance mitigated, we now detect an effect for A (P = 0.013) and obtain an even lower P value for B (P = 0.0002) (Fig. 3b). Testing the difference between ΔA and ΔB requires the two-sample t-test because we are comparing different aliquots, and this still does not produce a significant result (P = 0.18). When reporting paired-test results, sample means (Fig. 3b) should never be shown; instead, the mean difference and its confidence interval should be shown (Fig. 3c). The reason comes from our discussion above: the benefit of pairing comes from the reduced variance when ρ > 0, something that cannot be gleaned from Figure 3b. We illustrate this in Figure 3c with two different sample simulations with the same sample mean and variance but different correlation, achieved by changing the relative amounts of σ²_bet and σ²_wit. When the component of biological variance is increased, ρ increases from 0.5 to 0.8, the total variance of the difference in means drops and the test becomes more sensitive, reflected by the narrower CIs. We are now more certain that A has a real effect and have more reason to believe that the effects of A and B are different, evidenced by the lower P value for ΔB/ΔA from the two-sample t-test (0.06 versus 0.18; Fig. 3c). As before, P values should be adjusted with a multiple-test correction.
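The paired analysis of Figure 3 can be sketched as follows (simulated data under the same assumptions, with ρ = 0.5; scipy's ttest_rel implements the paired t-test, and the A-versus-B comparison uses the two-sample test on the per-aliquot differences):

```python
# Before/after measurements on the same n = 17 aliquots per treatment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 17
sigma_bet = sigma_wit = np.sqrt(0.5)      # rho = 0.5, total sigma = 1

aliquots_A = rng.normal(10, sigma_bet, n)
A_before = aliquots_A + rng.normal(0, sigma_wit, n)
A_after = aliquots_A + 0.6 + rng.normal(0, sigma_wit, n)   # effect of A

aliquots_B = rng.normal(10, sigma_bet, n)
B_before = aliquots_B + rng.normal(0, sigma_wit, n)
B_after = aliquots_B + 1.0 + rng.normal(0, sigma_wit, n)   # effect of B

print(stats.ttest_rel(A_after, A_before))                       # paired test for A
print(stats.ttest_rel(B_after, B_before))                       # paired test for B
print(stats.ttest_ind(B_after - B_before, A_after - A_before))  # dB versus dA
```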

Figure 3. (a) The same n = 17 sample is used to measure the difference between treatment and background (ΔA = A_after − A_before, ΔB = B_after − B_before), analyzed with the paired t-test. The two-sample t-test is used to compare the difference between responses (ΔB versus ΔA). (b) Simulated sample means and P values for the measurements and comparisons in a. (c) Mean difference, CIs and P values for two variance scenarios, σ²_bet/σ²_wit of 1 and 4, corresponding to ρ of 0.5 and 0.8. Total variance was fixed: σ²_bet + σ²_wit = 1. All error bars show 95% CI.

The paired design is a more efficient experiment. Fewer aliquots are needed: 34 instead of 51, although now 68 fluorescence measurements need to be taken instead of 51. If we assume σ_wit = σ_bet (ρ = 0.5; Fig. 3c), we can expect the paired design to have a power of 97%. This power increase is highly contingent on the value of ρ. If σ_wit is appreciably larger than σ_bet (i.e., ρ is small), the power of the paired test can be lower than that of the two-sample variant. This is because the total variance remains relatively unchanged (2σ²(1 − ρ) ≈ 2σ²) while the critical value of the test statistic can be markedly larger (particularly for small samples) because the number of degrees of freedom is now n − 1 instead of 2(n − 1). If the ratio of σ²_bet to σ²_wit is 1:4 (ρ = 0.2), the power of the paired test drops from 97% to 86%.
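The dependence of the paired design's power on ρ quoted above can be reproduced by treating the paired test as a one-sample t-test on the differences, whose standard deviation is √(2σ²(1 − ρ)) (assuming statsmodels):

```python
# Power of the paired test at n = 17 for a true effect of 1 (sigma = 1).
import numpy as np
from statsmodels.stats.power import TTestPower

for rho in (0.5, 0.2):
    sd_diff = np.sqrt(2 * (1 - rho))              # s.d. of a paired difference
    power = TTestPower().solve_power(effect_size=1.0 / sd_diff, nobs=17,
                                     alpha=0.05, alternative='two-sided')
    print(f"rho = {rho}: power = {power:.2f}")    # ~0.97 and ~0.86
```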

To analyze experimental designs that have more than two levels, or additional factors, a method called analysis of variance is used. This generalizes the t -test for comparing three or more levels while maintaining better power than comparing all sets of two levels. Experiments with two or more levels will be our next topic.

References

1. Krzywinski, M.I. & Altman, N. Nat. Methods 10, 1139–1140 (2013).

2. Krzywinski, M.I. & Altman, N. Nat. Methods 11, 355–356 (2014).

3. Gelman, A. & Stern, H. Am. Stat. 60, 328–331 (2006).

4. Krzywinski, M.I. & Altman, N. Nat. Methods 11, 215–216 (2014).

Author information

Martin Krzywinski is a staff scientist at Canada's Michael Smith Genome Sciences Centre. Naomi Altman is a Professor of Statistics at The Pennsylvania State University.

Competing interests

The authors declare no competing financial interests.

About this article

Krzywinski, M. & Altman, N. Designing comparative experiments. Nat. Methods 11, 597–598 (2014). https://doi.org/10.1038/nmeth.2974




Research Methods Simplified

Comparative Method / Quasi-experimental


Comparative method or quasi-experimental: a method used to describe similarities and differences in variables in two or more groups in a natural setting; that is, it resembles an experiment in that it uses manipulation but lacks random assignment of individual subjects, using existing groups instead. For examples, see http://www.education.com/reference/article/quasiexperimental-research/#B


Comparative Designs

First Online: 18 January 2019

Oddbjørn Bukve

A comparative design involves studying variation by comparing a limited number of cases without using statistical probability analyses. Such designs are particularly useful for knowledge development when we lack the conditions for control through variable-centred, quasi-experimental designs. Comparative designs often combine different research strategies by using one strategy to analyse properties of a single case and another strategy for comparing cases. A common combination is the use of a type of case design to analyse within the cases, and a variable-centred design to compare cases. Case-oriented approaches can also be used for analysis both within and between cases. Typologies and typological theories play an important role in such a design. In this chapter I discuss the two types separately.


Ragin later developed the method to make it possible to use continuous variables and a probabilistic logic, so-called fuzzy-set logic (Ragin, 2000).

Boolean algebra describes logical relations in a way similar to how ordinary algebra describes numeric relations.
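As a toy illustration of this footnote (the cases and conditions are hypothetical, not drawn from the chapter), a Boolean combination of case conditions can be written down and evaluated directly:

```python
# Each case is coded by the presence/absence of conditions A, B and C; the
# hypothesized outcome occurs when (A AND B) OR (NOT C) holds.
cases = {
    "case 1": {"A": True,  "B": True,  "C": True},
    "case 2": {"A": False, "B": True,  "C": False},
    "case 3": {"A": True,  "B": False, "C": True},
}

def predicted_outcome(c):
    return (c["A"] and c["B"]) or not c["C"]

for name, conditions in cases.items():
    print(name, predicted_outcome(conditions))   # True, True, False
```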

Bukve, O. (2001). Lokale utviklingsnettverk ein komparativ analyse av næringsutvikling i åtte kommunar. Høgskulen i Sogn og Fjordane, Sogndal.


Collier, R. B., & Collier, D. (1991). Shaping the political arena: Critical junctures, the labor movement, and regime dynamics in Latin America . Princeton, NJ: Princeton University Press.

Dion, D. (1998). Evidence and inference in the comparative case study. (Case studies in politics). Comparative Politics, 30 , 127.


George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences . Cambridge, MA: MIT Press.

Goggin, M. L. (1986). The “too few cases/too many variables” problem in implementation research. The Western Political Quarterly, 39 , 328–347.

Landman, T. (2008). Issues and methods in comparative politics: An introduction (3rd ed.). Milton Park, Abingdon, Oxon: Routledge.


Lange, M. (2013). Comparative-historical methods . Los Angeles: Sage.

Luebbert, G. M. (1991). Liberalism, fascism, or social democracy: Social classes and the political origins of regimes in interwar Europe . New York: Oxford University Press.

Matland, R. E. (1995). Synthesizing the implementation literature: The ambiguity-conflict model of policy implementation. Journal of Public Administration Research and Theory: J-PART, 5 (2), 145–174.

Paige, J. (1975). Agrarian revolution: Social movements and export agriculture in the underdeveloped world . New York: Free Press.

Przeworski, A., & Teune, H. (1970). The logic of comparative social inquiry . New York: Wiley.

Ragin, C. C. (1987). The comparative method . Berkeley, CA: University of California Press.

Ragin, C. C. (2000). Fuzzy-set social science . Chicago, IL: University of Chicago Press.

Ragin, C. C., & Amoroso, L. M. (2011). Constructing social research . Thousand Oaks, CA: Pine Forge Press.

Skocpol, T. (1979). States and social revolutions: A comparative analysis of France, Russia, and China . Cambridge: Cambridge University Press.

Weber, M. (1971). Makt og byråkrati: essays om politikk og klasse, samfunnsforskning og verdier . Oslo, Norway: Gyldendal.

Wickham-Crowley, T. P. (1992). Guerrillas and revolution in Latin America: A comparative study of insurgents and regimes since 1956 . Princeton, NJ: Princeton University Press.

Author information

Oddbjørn Bukve, Western Norway University of Applied Sciences, Sogndal, Norway

About this chapter

Bukve, O. (2019). Comparative Designs. In: Designing Social Science Research. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-03979-0_9


Design of Comparative Experiments

R. A. Bailey, Queen Mary University of London
Book description

This book should be on the shelf of every practising statistician who designs experiments. Good design considers units and treatments first, and then allocates treatments to units. It does not choose from a menu of named designs. This approach requires a notation for units that does not depend on the treatments applied. Most structure on the set of observational units, or on the set of treatments, can be defined by factors. This book develops a coherent framework for thinking about factors and their relationships, including the use of Hasse diagrams. These are used to elucidate structure, calculate degrees of freedom and allocate treatment subspaces to appropriate strata. Based on a one-term course the author has taught since 1989, the book is ideal for advanced undergraduate and beginning graduate courses. Examples, exercises and discussion questions are drawn from a wide range of real applications: from drug development, to agriculture, to manufacturing.

'Rosemary Bailey has made wonderful contributions to applications and theory of the design of statistical experiments. She has woven these and her love of the history and philosophy of the subject into an accessible textbook. A terrific achievement.'

Persi Diaconis - Stanford University

'This is ‘the beauty and joy of experimental design’: a mathematically beautiful and eloquently written treatise by the master!'

Geert Molenberghs - Universiteit Hasselt, Belgium

‘A definitive treatment. Rothamsted experimental design lucidly expounded from a modern viewpoint.’

Terry Speed - The Walter & Eliza Hall Institute of Medical Research, Australia

'This excellent book clearly presents elegant, general and simplifying theory, combining valuable practical advice with a large number of real examples. It treats the design of comparative experiments with a unique approach not seen in other books … A must-read for anyone designing experiments or wanting to learn about the design of experiments.'

Ching-Shui Cheng - University of California, Berkeley


Contents: 1. Forward look; 2. Unstructured experiments; 3. Simple treatment structure; 4. Blocking; 5. Factorial treatment structure; 6. Row–column designs; 7. Experiments on people and animals; 8. Small units inside large units; 9. More about Latin squares; 10. The calculus of factors; 11. Incomplete-block designs; 12. Factorial designs in incomplete blocks; 13. Fractional factorial designs; 14. Backward look; Exercises; Sources of examples, questions and exercises; Further reading; References; Index.



Study designs: Part 1 – An overview and classification

Priya Ranganathan

Department of Anaesthesiology, Tata Memorial Centre, Mumbai, Maharashtra, India

Rakesh Aggarwal

Department of Gastroenterology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, Uttar Pradesh, India

There are several types of research study designs, each with its inherent strengths and flaws. The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on “study designs,” we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

INTRODUCTION

Research study design is a framework, or the set of methods and procedures used to collect and analyze data on variables specified in a particular research problem.

Research study designs are of many types, each with its advantages and limitations. The type of study design used to answer a particular research question is determined by the nature of question, the goal of research, and the availability of resources. Since the design of a study can affect the validity of its results, it is important to understand the different types of study designs and their strengths and limitations.

Some terms that are used frequently while classifying study designs are described in the following sections.

A variable represents a measurable attribute that varies across study units, for example, individual participants in a study, or at times even when measured in an individual person over time. Some examples of variables include age, sex, weight, height, health status, alive/dead, diseased/healthy, annual income, smoking yes/no, and treated/untreated.

Exposure (or intervention) and outcome variables

A large proportion of research studies assess the relationship between two variables. Here, the question is whether one variable is associated with or responsible for change in the value of the other variable. Exposure (or intervention) refers to the risk factor whose effect is being studied. It is also referred to as the independent or the predictor variable. The outcome (or predicted or dependent) variable develops as a consequence of the exposure (or intervention). Typically, the term “exposure” is used when the “causative” variable is naturally determined (as in observational studies – examples include age, sex, smoking, and educational status), and the term “intervention” is preferred where the researcher assigns some or all participants to receive a particular treatment for the purpose of the study (experimental studies – e.g., administration of a drug). If a drug had been started in some individuals but not in the others, before the study started, this counts as exposure, and not as intervention – since the drug was not started specifically for the study.

Observational versus interventional (or experimental) studies

Observational studies are those where the researcher is documenting a naturally occurring relationship between the exposure and the outcome that he/she is studying. The researcher does not do any active intervention in any individual, and the exposure has already been decided naturally or by some other factor. For example, looking at the incidence of lung cancer in smokers versus nonsmokers, or comparing the antenatal dietary habits of mothers with normal and low-birth babies. In these studies, the investigator did not play any role in determining the smoking or dietary habit in individuals.

For an exposure to determine the outcome, it must precede the latter. Any variable that occurs simultaneously with or following the outcome cannot be causative, and hence is not considered as an “exposure.”

Observational studies can be either descriptive (nonanalytical) or analytical (inferential) – this is discussed later in this article.

Interventional studies are experiments where the researcher actively performs an intervention in some or all members of a group of participants. This intervention could take many forms – for example, administration of a drug or vaccine, performance of a diagnostic or therapeutic procedure, and introduction of an educational tool. For example, a study could randomly assign persons to receive aspirin or placebo for a specific duration and assess the effect on the risk of developing cerebrovascular events.

Descriptive versus analytical studies

Descriptive (or nonanalytical) studies, as the name suggests, merely try to describe the data on one or more characteristics of a group of individuals. These do not try to answer questions or establish relationships between variables. Examples of descriptive studies include case reports, case series, and cross-sectional surveys (please note that cross-sectional surveys may be analytical studies as well – this will be discussed in the next article in this series). Examples of descriptive studies include a survey of dietary habits among pregnant women or a case series of patients with an unusual reaction to a drug.

Analytical studies attempt to test a hypothesis and establish causal relationships between variables. In these studies, the researcher assesses the effect of an exposure (or intervention) on an outcome. As described earlier, analytical studies can be observational (if the exposure is naturally determined) or interventional (if the researcher actively administers the intervention).

Directionality of study designs

Based on the direction of inquiry, study designs may be classified as forward-direction or backward-direction. In forward-direction studies, the researcher starts with determining the exposure to a risk factor and then assesses whether the outcome occurs at a future time point. This design is known as a cohort study. For example, a researcher can follow a group of smokers and a group of nonsmokers to determine the incidence of lung cancer in each. In backward-direction studies, the researcher begins by determining whether the outcome is present (cases vs. noncases [also called controls]) and then traces the presence of prior exposure to a risk factor. These are known as case–control studies. For example, a researcher identifies a group of normal-weight babies and a group of low-birth weight babies and then asks the mothers about their dietary habits during the index pregnancy.

Prospective versus retrospective study designs

The terms “prospective” and “retrospective” refer to the timing of the research in relation to the development of the outcome. In retrospective studies, the outcome of interest has already occurred (or not occurred – e.g., in controls) in each individual by the time s/he is enrolled, and the data are collected either from records or by asking participants to recall exposures. There is no follow-up of participants. By contrast, in prospective studies, the outcome (and sometimes even the exposure or intervention) has not occurred when the study starts and participants are followed up over a period of time to determine the occurrence of outcomes. Typically, most cohort studies are prospective studies (though there may be retrospective cohorts), whereas case–control studies are retrospective studies. An interventional study has to be, by definition, a prospective study since the investigator determines the exposure for each study participant and then follows them to observe outcomes.

The terms “prospective” versus “retrospective” studies can be confusing. Let us think of an investigator who starts a case–control study. To him/her, the process of enrolling cases and controls over a period of several months appears prospective. Hence, the use of these terms is best avoided. Or, at the very least, one must be clear that the terms relate to work flow for each individual study participant, and not to the study as a whole.

Classification of study designs

Figure 1 depicts a simple classification of research study designs. The Centre for Evidence-based Medicine has put forward a useful three-point algorithm which can help determine the design of a research study from its methods section [1] (a small code sketch encoding these questions follows the list):

Figure 1: Classification of research study designs

  • Does the study describe the characteristics of a sample or does it attempt to analyze (or draw inferences about) the relationship between two variables? – If no, then it is a descriptive study, and if yes, it is an analytical (inferential) study
  • If analytical, did the investigator determine the exposure? – If no, it is an observational study, and if yes, it is an experimental study
  • If observational, when was the outcome determined? – at the start of the study (case–control study), at the end of a period of follow-up (cohort study), or simultaneously with the exposure (cross-sectional study).
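One compact way to read this algorithm is as a small decision function; the sketch below simply encodes the three questions, and the labels are ours rather than from the cited source.

```python
# Classify a study design from the three questions above.
def classify_study(is_analytical: bool, investigator_assigned_exposure: bool,
                   outcome_timing: str = "") -> str:
    """outcome_timing applies to observational studies:
    'start', 'follow-up' or 'simultaneous'."""
    if not is_analytical:
        return "descriptive study"
    if investigator_assigned_exposure:
        return "experimental (interventional) study"
    return {"start": "case-control study",
            "follow-up": "cohort study",
            "simultaneous": "cross-sectional study"}[outcome_timing]

print(classify_study(True, False, "follow-up"))   # cohort study
```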

In the next few pieces in the series, we will discuss various study designs in greater detail.

Conflicts of interest

There are no conflicts of interest.


Types of Research Designs Compared | Guide & Examples

Published on June 20, 2019 by Shona McCombes . Revised on June 22, 2023.

When you start planning a research project, developing research questions and creating a  research design , you will have to make various decisions about the type of research you want to do.

There are many ways to categorize different types of research. The words you use to describe your research depend on your discipline and field. In general, though, the form your research design takes will be shaped by:

  • The type of knowledge you aim to produce
  • The type of data you will collect and analyze
  • The sampling methods , timescale and location of the research

This article takes a look at some common distinctions made between different types of research and outlines the key differences between them.

Types of research aims

The first thing to consider is what kind of knowledge your research aims to contribute.

  • Basic vs. applied: Basic research aims to develop scientific knowledge and theory, while applied research aims to solve a practical problem. What to consider: Do you want to expand scientific understanding or solve a practical problem?
  • Exploratory vs. explanatory: Exploratory research aims to investigate a problem that is not yet well understood, while explanatory research aims to explain the causes and effects of a well-defined problem. What to consider: How much is already known about your research problem? Are you conducting initial research on a newly-identified issue, or seeking precise conclusions about an established issue?
  • Inductive vs. deductive: Inductive research aims to develop new theory from observations, while deductive research aims to test existing theory. What to consider: Is there already some theory on your research problem that you can use to develop hypotheses, or do you want to propose new theories based on your findings?


Types of research data

The next thing to consider is what type of data you will collect. Each kind of data is associated with a range of specific research methods and procedures.

  • Primary research vs. secondary research: Primary data is collected directly by the researcher (e.g., through surveys, interviews or experiments), while secondary data has already been collected by someone else (e.g., in government or scientific publications). What to consider: How much data is already available on your topic? Do you want to collect original data or analyze existing data (e.g., through a literature review)?
  • Qualitative vs. quantitative: Quantitative research collects numerical data that is analyzed statistically, while qualitative research collects non-numerical data that is analyzed through interpretation. What to consider: Is your research more concerned with measuring something or interpreting something? You can also create a research design that has elements of both.
  • Descriptive vs. experimental: Descriptive research gathers data without intervening, while experimental research systematically manipulates variables. What to consider: Do you want to identify characteristics, patterns and correlations, or test causal relationships between variables?
Types of sampling, timescale, and location

Finally, you have to consider three closely related questions: how will you select the subjects or participants of the research? When and how often will you collect data from your subjects? And where will the research take place?

Keep in mind that the methods that you choose bring with them different risk factors and types of research bias . Biases aren’t completely avoidable, but can heavily impact the validity and reliability of your findings if left unchecked.

  • Probability sampling vs. non-probability sampling: Probability sampling allows you to generalize your findings to a broader population, while non-probability sampling allows you to draw conclusions only about the specific cases studied. What to consider: Do you want to produce knowledge that applies to many contexts or detailed knowledge about a specific context (e.g., in a case study)?
  • Cross-sectional vs. longitudinal: Cross-sectional studies collect data at a single point in time, while longitudinal studies collect data repeatedly over an extended period. What to consider: Is your research question focused on understanding the current situation or tracking changes over time?
  • Field research vs. laboratory research: Field research takes place in a real-world setting, while laboratory research takes place in a controlled, constructed setting. What to consider: Do you want to find out how something occurs in the real world or draw firm conclusions about cause and effect? Laboratory experiments have higher internal validity but lower external validity.
  • Fixed design vs. flexible design: In a fixed research design the subjects, timescale and location are set before data collection begins, while in a flexible design these aspects may evolve as the research proceeds. What to consider: Do you want to test hypotheses and establish generalizable facts, or explore concepts and develop understanding? For measuring, testing and making generalizations, a fixed research design has higher validity.

Choosing between all these different research types is part of the process of creating your research design , which determines exactly how your research will be conducted. But the type of research is only the first step: next, you have to make more concrete decisions about your research methods and the details of the study.



Experimental Research Design — 6 mistakes you should never make!


From their school days, students perform scientific experiments whose results illustrate and test the laws and theorems of science. These experiments rest on a strong foundation of experimental research design.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables acts as a constant against which the differences in the second set are measured. Quantitative research is the clearest example of experimental research methods.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation to build the research study. Moreover, effective research design helps establish quality decision-making procedures, structures the research to lead to easier data analysis, and addresses the main research question. Therefore, it is essential to cater undivided attention and time to create an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A research study could conduct pre-experimental research design when a group or many groups are under observation after implementing factors of cause and effect of the research. The pre-experimental design will help researchers understand whether further investigation is necessary for the groups under observation.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution of the variables

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word “quasi” means “as if” or “resembling.” A quasi-experimental design is similar to a true experimental design; the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either not possible or not practical.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject does not impact the effectiveness of experimental research. Anyone can implement it for research purposes.
  • The results are specific.
  • After analyzing the results, research findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Usually, researchers miss out on checking whether their hypothesis is logical and testable. If your research design does not rest on basic assumptions or postulates, then it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are one of the most trusted scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis could affect the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear; to achieve that, you must set the framework for developing research questions that address the core problems.

5. Research Limitations

Every study has limitations of some kind. You should anticipate those limitations and account for them in the basic research design as well as in your conclusion. Include a statement in your manuscript about any perceived limitations and how you considered them while designing your experiment and drawing the conclusion.

6. Ethical Implications

The most important yet least discussed topic is ethics. Your research design must include ways to minimize any risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms in your research study, its objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
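Random assignment of the kind described in this example can be sketched as follows (the sample labels and group size are invented for illustration):

```python
# Randomly split 20 hypothetical plant samples into sunlight and dark-box groups.
import random

random.seed(42)                                    # reproducible allocation
samples = [f"plant_{i:02d}" for i in range(1, 21)]
random.shuffle(samples)

sunlight_group, dark_group = samples[:10], samples[10:]
print("Sunlight:", sunlight_group)
print("Dark box:", dark_group)
```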

Experimental research is often the final form of a study in the research process and is considered to provide conclusive and specific results. But it is not suited to every research question. It involves substantial resources, time and money, and is not easy to conduct unless a foundation of prior research has been built. Yet it is widely used in research institutes and commercial industries because, within the scientific approach, it yields the most conclusive results.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also allows the cause-effect relationship to be estimated fairly for the group of interest.

Experimental research design lays the foundation of a study and structures the research to support a sound decision-making process.

There are 3 types of experimental research designs. These are pre-experimental research design, true experimental research design, and quasi experimental research design.

The differences between an experimental and a quasi-experimental design are: 1. In quasi-experimental research, assignment to the control group is non-random, unlike in a true experimental design, where it is random. 2. Experimental research always has a control group; a control group may not always be present in quasi-experimental research.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental and control groups. In contrast, descriptive research describes a study or a topic by defining its variables and answering questions related to them.


Characteristics of a Comparative Research Design

Hannah Richardson, 28 Jun 2018


Comparative research essentially compares two groups in an attempt to draw a conclusion about them. Researchers attempt to identify and analyze similarities and differences between groups, and these studies are most often cross-national, comparing two separate people groups. Comparative studies can be used to increase understanding between cultures and societies and create a foundation for compromise and collaboration. These studies contain both quantitative and qualitative research methods.

Explore this article

  • Comparative Quantitative
  • Comparative Qualitative
  • When to Use It
  • When Not to Use It

1 Comparative Quantitative

Quantitative, or experimental, research is characterized by manipulating an independent variable to measure and explain its influence on a dependent variable. Because comparative studies analyze two different groups, which may have very different social contexts, it can be difficult to establish the parameters of the research. Such studies might seek to compare, for example, large amounts of demographic or employment data from different nations that define or measure the relevant research elements differently.

However, the statistical methods inherent in quantitative research are still helpful for establishing correlations in comparative studies. The need for a specific research question in quantitative research also helps comparative researchers narrow down and sharpen their comparative research question.
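To make this concrete, here is a minimal sketch of such a quantitative two-group comparison, assuming the two nations' data have already been harmonized into comparable units. The variable (weekly working hours), the group labels and all numbers are hypothetical illustrations, not data from any real study.

```python
# A minimal sketch of a quantitative two-group comparison, assuming the data
# from the two nations have already been harmonized into comparable units.
# The variable, group labels and numbers below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical harmonized weekly working hours sampled in two nations
nation_a = rng.normal(loc=38.0, scale=4.0, size=200)
nation_b = rng.normal(loc=41.0, scale=5.0, size=180)

# Welch's t-test compares the group means without assuming equal variances
t_stat, p_value = stats.ttest_ind(nation_a, nation_b, equal_var=False)
print(f"mean A = {nation_a.mean():.1f} h, mean B = {nation_b.mean():.1f} h")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```

The point is not the particular test but that, once the measurements are made comparable, the standard quantitative toolbox applies to the cross-group comparison.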

2 Comparative Qualitative

Qualitative, or nonexperimental, research is characterized by observing and recording outcomes without manipulation. In comparative research, data are collected primarily by observation, and the goal is to determine the similarities and differences that relate to the particular situations or environments of the two groups. These similarities and differences are identified through qualitative observation methods. Additionally, some researchers prefer to design comparative studies around a set of case studies in which individuals are observed and their behaviors recorded; the results of each case are then compared across the groups.

3 When to Use It

Comparative research studies should be used when comparing two groups, often cross-nationally. These studies analyze the similarities and differences between the two groups in an attempt to better understand both. Comparisons lead to new insights and a better understanding of all participants involved. Such studies also require collaboration, strong teams, advanced technologies and access to international databases, all of which make them expensive. Use a comparative research design only when the necessary funding and resources are available.

4 When Not to Use It

Do not use a comparative research design with little funding, limited access to the necessary technology, or few team members. Because of the larger scale of these studies, they should be conducted only if adequate population samples are available. Additionally, the data in these studies require extensive measurement and analysis; if the necessary organizational and technological resources are not available, a comparative study should not be attempted. Do not use a comparative design if the data cannot be measured accurately and analyzed with fidelity and validity.


About the Author

Hannah Richardson has a Master's degree in Special Education from Vanderbilt University and a Bachelor of Arts in English. She has been a writer since 2004 and wrote regularly for the sports and features sections of "The Technician" newspaper, as well as "Coastwach" magazine. Richardson also served as the co-editor-in-chief of "Windhover," an award-winning literary and arts magazine. She is currently teaching at a middle school.


Causal Comparative Research: Methods And Examples


Ritu was in charge of marketing a new protein drink about to be launched. The client wanted a causal-comparative study highlighting the drink’s benefits. They demanded that comparative analysis be made the main campaign design strategy. After carefully analyzing the project requirements, Ritu decided to follow a causal-comparative research design. She realized that causal-comparative research emphasizing physical development in different groups of people would lay a good foundation to establish the product.

What Is Causal Comparative Research?


Causal-comparative research is a method used to identify a cause-effect relationship between an independent and a dependent variable. The relationship is usually only a suggested one, because the independent variable cannot be controlled completely. Unlike correlational research, the method does not rely on measuring an association; in a causal-comparative design, the researcher compares two groups to find out whether the independent variable affected the outcome, the dependent variable.

A causal-comparative method determines whether one variable has a direct influence on the other and why, identifying the causes of certain occurrences (or non-occurrences). Because the independent variable has already occurred and often cannot be manipulated, the study is descriptive rather than experimental: a link between the dependent and independent variables is established by scrutinizing the relationships among the variables, and the implications of possible causes are used to draw conclusions.

In a causal-comparative design, researchers study cause and effect in retrospect and determine consequences or causes of differences already existing among or between groups of people.

Let’s look at some characteristics of causal-comparative research:

  • The method tries to identify cause-and-effect relationships.
  • Two or more groups are compared on the variables of interest.
  • Individuals are not selected randomly.
  • Independent variables cannot be manipulated.
  • It helps save time and money.

The main purpose of a causal-comparative study is to explore effects, consequences and causes. There are two types of causal-comparative research design. They are:

Retrospective Causal Comparative Research

For this type of research, a researcher has to investigate a particular question after the effects have occurred. They attempt to determine whether or not a variable influences another variable.

Prospective Causal Comparative Research

The researcher initiates the study with the causes and sets out to analyze the effects of a given condition. This is not as common as retrospective causal-comparative research.

Usually, it is easier to compare a variable with what is already known than with what is unknown.

Examples of Causal Comparative Research Variables

Researchers use causal-comparative research to achieve research goals by comparing two variables that represent two groups. The data can include differences in opportunities, in privileges exclusive to certain groups, or in developments with respect to gender, race, nationality or ability.

For example, to find out the difference in wages between men and women, researchers have to compare the wages earned by both genders across various professions, hierarchies and locations. None of the variables can be manipulated, and the cause-effect relationship has to be established with a persuasive logical argument (a minimal sketch of such a comparison appears below). Some common variables investigated in this type of research are:

  • Achievement and other ability variables
  • Family-related variables
  • Organismic variables such as age, sex and ethnicity
  • Variables related to schools
  • Personality variables

While raw test scores, assessments and other measures (such as grade point averages) are used as data in this research, standardized tests, structured interviews and surveys are popular research tools and data sources.
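As a concrete illustration of the wage example above, the following sketch compares the two existing groups within each profession. The data frame, column names and numbers are invented for illustration and are not taken from any real wage survey.

```python
# A hypothetical sketch of the wage comparison described above: wages are
# grouped by profession, and the two observed gender groups are compared
# within each stratum. All data and column names are invented.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "profession": ["engineer"] * 6 + ["teacher"] * 6,
    "gender":     ["F", "M"] * 6,
    "wage":       [72, 78, 70, 80, 74, 79, 48, 52, 47, 51, 49, 53],
})

# Compare mean wages between the two existing groups within each profession;
# the grouping variable (gender) is observed, not manipulated by the researcher.
for profession, sub in df.groupby("profession"):
    f = sub.loc[sub["gender"] == "F", "wage"]
    m = sub.loc[sub["gender"] == "M", "wage"]
    t, p = stats.ttest_ind(f, m, equal_var=False)
    print(f"{profession}: mean F = {f.mean():.1f}, mean M = {m.mean():.1f}, p = {p:.3f}")
```

Because the grouping variable is observed rather than assigned, any difference found still has to be argued as causal with a persuasive logical argument; the statistics alone cannot establish it.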

However, causal-comparative research also has drawbacks, such as the inability to manipulate or control the independent variable and the lack of randomization. Subject-selection bias always remains a possibility and poses a threat to the internal validity of a study; researchers can control it with statistical matching or by creating identical subgroups (a minimal matching sketch follows). Researchers also have to watch for loss of subjects, location influences, poor subject attitudes and testing threats in order to produce a valid study.
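As a rough illustration of the matching idea mentioned above, the sketch below keeps only the strata of a covariate in which both self-selected groups are represented before comparing them. The groups, the covariate (age band) and the scores are entirely hypothetical.

```python
# A minimal sketch of matching to reduce subject-selection bias: the two
# self-selected groups are compared only within covariate strata (age bands)
# where both groups are represented. All data below are hypothetical.
import pandas as pd

subjects = pd.DataFrame({
    "group":    ["exposed", "exposed", "exposed", "control", "control", "control"],
    "age_band": ["20-29",   "30-39",   "40-49",   "20-29",   "30-39",   "50-59"],
    "score":    [14,         17,        19,        12,        15,        21],
})

# Keep only age bands that appear in both groups (the matched strata)
counts = subjects.groupby("age_band")["group"].nunique()
matched_bands = counts[counts == 2].index
matched = subjects[subjects["age_band"].isin(matched_bands)]

# Compare group means within the matched strata only
print(matched.groupby(["age_band", "group"])["score"].mean())
```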



Causal Comparative Research: Definition, Types & Benefits


Within the field of research there are multiple methodologies and ways to find the answers you need. In this article we address everything you need to know about causal-comparative research, a methodology with many advantages and applications.

What Is Causal Comparative Research?

Causal-comparative research is a methodology used to identify cause-effect relationships between independent and dependent variables.

Researchers can study cause and effect in retrospect. This can help determine the consequences or causes of differences already existing among or between different groups of people.

Causal-comparative research will almost always consist of the following:

  • A method, or set of methods, for identifying cause-effect relationships
  • A set of individuals (or entities) that are not selected randomly; they are intended to participate in this specific study
  • Variables represented in two or more groups (there cannot be fewer than two, otherwise there is nothing to differentiate)
  • Non-manipulated independent variables; typically the relationship is only a suggested one, since the independent variable cannot be controlled completely

Types of Causal Comparative Research

Causal comparative research is broken down into two types:

  • Retrospective Comparative Research
  • Prospective Comparative Research

Retrospective Comparative Research: This involves investigating a particular question after the effects have occurred, in an attempt to see whether a specific variable influences another variable.

Prospective Comparative Research: This type of causal-comparative research is initiated by the researcher, who starts with the causes and sets out to analyze the effects of a given condition. It is much less common than the retrospective type.


Causal Comparative Research vs Correlation Research

The universal rule of statistics applies: correlation is not causation.

Causal-comparative research does not rely on measured associations between variables. Instead, it compares two groups to find out whether the independent variable affected the outcome, the dependent variable.

When running a causal-comparative study, none of the variables can be influenced, and the cause-effect relationship has to be established with a persuasive, logical argument; otherwise, it is merely a correlation.

Another significant difference between the two methodologies lies in how the collected data are analyzed. In causal-comparative research, results are usually analyzed with cross-break (contingency) tables and comparisons of group averages, whereas correlational analysis typically uses scatter plots and correlation coefficients.
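The contrast can be illustrated with a small, entirely hypothetical sketch: a cross-break (contingency) table plus group means for the causal-comparative style of analysis, versus a single correlation coefficient for the correlational style. The groups, outcomes and numbers below are invented for illustration.

```python
# Hypothetical contrast between the two styles of analysis.
import pandas as pd

df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "passed": ["yes", "no", "yes", "no", "no", "yes"],
    "hours":  [5, 2, 6, 1, 2, 7],
    "score":  [78, 55, 82, 40, 51, 88],
})

# Causal-comparative style: cross-break table and comparison of group means
print(pd.crosstab(df["group"], df["passed"]))
print(df.groupby("group")["score"].mean())

# Correlational style: a single coefficient describing an association
print(df["hours"].corr(df["score"]))
```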

Advantages and Disadvantages of Causal Comparative Research

Like any research methodology, causal-comparative research has specific uses and limitations to weigh before adopting it for your next project. Some of the main advantages and disadvantages are listed below.

Advantages

  • It is more efficient, allowing you to save human and economic resources and to complete the study relatively quickly.
  • It identifies the causes of certain occurrences (or non-occurrences).
  • The analysis is descriptive rather than experimental, so it can be applied where variables cannot be manipulated.

Disadvantages

  • You are not fully able to manipulate or control the independent variable, and there is no randomization.
  • Like other methodologies, it is prone to research bias; the most common type is subject-selection bias, so special care must be taken to avoid it and preserve the validity of the study.
  • Loss of subjects, location influences, poor subject attitudes and testing threats are always a possibility.

Finally, it is important to remember that the results of this type of causal research should be interpreted with caution: even when a relationship exists between the two variables analyzed, that alone does not guarantee that the first variable influences, or is the main factor influencing, the second.



Author: John Oppenhimer


