Experimental Design: Types, Examples & Methods

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

On This Page:

Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.

Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group, not the control group.

The researcher must decide how he/she will allocate their sample to the different experimental groups.  For example, if there are 10 participants, will all 10 participants participate in both groups (e.g., repeated measures), or will the participants be split in half and take part in only one group each?

Three types of experimental designs are commonly used:

1. Independent Measures

Independent measures design, also known as between-groups , is an experimental design where different participants are used in each condition of the independent variable.  This means that each condition of the experiment includes a different group of participants.

This should be done by random allocation, ensuring that each participant has an equal chance of being assigned to one group.

Independent measures involve using two separate groups of participants, one in each condition. For example:

Independent Measures Design 2

  • Con : More people are needed than with the repeated measures design (i.e., more time-consuming).
  • Pro : Avoids order effects (such as practice or fatigue) as people participate in one condition only.  If a person is involved in several conditions, they may become bored, tired, and fed up by the time they come to the second condition or become wise to the requirements of the experiment!
  • Con : Differences between participants in the groups may affect results, for example, variations in age, gender, or social background.  These differences are known as participant variables (i.e., a type of extraneous variable ).
  • Control : After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).

2. Repeated Measures Design

Repeated Measures design is an experimental design where the same participants participate in each independent variable condition.  This means that each experiment condition includes the same group of participants.

Repeated Measures design is also known as within-groups or within-subjects design .

  • Pro : As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
  • Con : There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior.  Performance in the second condition may be better because the participants know what to do (i.e., practice effect).  Or their performance might be worse in the second condition because they are tired (i.e., fatigue effect). This limitation can be controlled using counterbalancing.
  • Pro : Fewer people are needed as they participate in all conditions (i.e., saves time).
  • Control : To combat order effects, the researcher counter-balances the order of the conditions for the participants.  Alternating the order in which participants perform in different conditions of an experiment.

Counterbalancing

Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”

We expect the participants to learn better in “no noise” because of order effects, such as practice. However, a researcher can control for order effects using counterbalancing.

The sample would be split into two groups: experimental (A) and control (B).  For example, group 1 does ‘A’ then ‘B,’ and group 2 does ‘B’ then ‘A.’ This is to eliminate order effects.

Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.

counter balancing

3. Matched Pairs Design

A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each pair is then placed into the experimental group and the other member into the control group .

One member of each matched pair must be randomly assigned to the experimental group and the other to the control group.

matched pairs design

  • Con : If one participant drops out, you lose 2 PPs’ data.
  • Pro : Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
  • Con : Very time-consuming trying to find closely matched pairs.
  • Pro : It avoids order effects, so counterbalancing is not necessary.
  • Con : Impossible to match people exactly unless they are identical twins!
  • Control : Members of each pair should be randomly assigned to conditions. However, this does not solve all these problems.

Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:

1. Independent measures / between-groups : Different participants are used in each condition of the independent variable.

2. Repeated measures /within groups : The same participants take part in each condition of the independent variable.

3. Matched pairs : Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.

Learning Check

Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.

1 . To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.

The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.

2 . To assess the difference in reading comprehension between 7 and 9-year-olds, a researcher recruited each group from a local primary school. They were given the same passage of text to read and then asked a series of questions to assess their understanding.

3 . To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.

At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.

4 . To assess the effect of the organization on recall, a researcher randomly assigned student volunteers to two conditions.

Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.

Experiment Terminology

Ecological validity.

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes) is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.

Print Friendly, PDF & Email

science education resource

  • Activities, Experiments, Online Games, Visual Aids
  • Activities, Experiments, and Investigations
  • Experimental Design and the Scientific Method

Experimental Design - Independent, Dependent, and Controlled Variables

To view these resources with no ads, please login or subscribe to help support our content development. school subscriptions can access more than 175 downloadable unit bundles in our store for free (a value of $1,500). district subscriptions provide huge group discounts for their schools. email for a quote: [email protected] ..

Scientific experiments are meant to show cause and effect of a phenomena (relationships in nature).  The “ variables ” are any factor, trait, or condition that can be changed in the experiment and that can have an effect on the outcome of the experiment.

An experiment can have three kinds of variables: i ndependent, dependent, and controlled .

  • The independent variable is one single factor that is changed by the scientist followed by observation to watch for changes. It is important that there is just one independent variable, so that results are not confusing.
  • The dependent variable is the factor that changes as a result of the change to the independent variable.
  • The controlled variables (or constant variables) are factors that the scientist wants to remain constant if the experiment is to show accurate results. To be able to measure results, each of the variables must be able to be measured.

For example, let’s design an experiment with two plants sitting in the sun side by side. The controlled variables (or constants) are that at the beginning of the experiment, the plants are the same size, get the same amount of sunlight, experience the same ambient temperature and are in the same amount and consistency of soil (the weight of the soil and container should be measured before the plants are added). The independent variable is that one plant is getting watered (1 cup of water) every day and one plant is getting watered (1 cup of water) once a week. The dependent variables are the changes in the two plants that the scientist observes over time.

Experimental Design - Independent, Dependent, and Controlled Variables

Can you describe the dependent variable that may result from this experiment? After four weeks, the dependent variable may be that one plant is taller, heavier and more developed than the other. These results can be recorded and graphed by measuring and comparing both plants’ height, weight (removing the weight of the soil and container recorded beforehand) and a comparison of observable foliage.

Using What You Learned: Design another experiment using the two plants, but change the independent variable. Can you describe the dependent variable that may result from this new experiment?

Think of another simple experiment and name the independent, dependent, and controlled variables. Use the graphic organizer included in the PDF below to organize your experiment's variables.

Please Login or Subscribe to access downloadable content.

Citing Research References

When you research information you must cite the reference. Citing for websites is different from citing from books, magazines and periodicals. The style of citing shown here is from the MLA Style Citations (Modern Language Association).

When citing a WEBSITE the general format is as follows. Author Last Name, First Name(s). "Title: Subtitle of Part of Web Page, if appropriate." Title: Subtitle: Section of Page if appropriate. Sponsoring/Publishing Agency, If Given. Additional significant descriptive information. Date of Electronic Publication or other Date, such as Last Updated. Day Month Year of access < URL >.

Here is an example of citing this page:

Amsel, Sheri. "Experimental Design - Independent, Dependent, and Controlled Variables" Exploring Nature Educational Resource ©2005-2024. March 25, 2024 < http://www.exploringnature.org/db/view/Experimental-Design-Independent-Dependent-and-Controlled-Variables >

Exploringnature.org has more than 2,000 illustrated animals. Read about them, color them, label them, learn to draw them.

Exploringnature.org has more than 2,000 illustrated animals. Read about them, color them, label them, learn to draw them.

  • Privacy Policy

Research Method

Home » Experimental Design – Types, Methods, Guide

Experimental Design – Types, Methods, Guide

Table of Contents

Experimental Research Design

Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture : Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology : Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering : Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education : Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing : Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research : A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question : Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment : Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment : Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, then it is accepted. If the results do not support the hypothesis, then it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication : Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision : Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability : If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality : Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias : Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time : Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias : Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility : Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.

About the author

' src=

Muhammad Hassan

Researcher, Academic Writer, Web developer

You may also like

Quasi-Experimental Design

Quasi-Experimental Research Design – Types...

Qualitative Research Methods

Qualitative Research Methods

Questionnaire

Questionnaire – Definition, Types, and Examples

Explanatory Research

Explanatory Research – Types, Methods, Guide

Correlational Research Design

Correlational Research – Methods, Types and...

Phenomenology

Phenomenology – Methods, Examples and Guide

  • Skip to secondary menu
  • Skip to main content
  • Skip to primary sidebar

Statistics By Jim

Making statistics intuitive

Experimental Design: Definition and Types

By Jim Frost 3 Comments

What is Experimental Design?

An experimental design is a detailed plan for collecting and using data to identify causal relationships. Through careful planning, the design of experiments allows your data collection efforts to have a reasonable chance of detecting effects and testing hypotheses that answer your research questions.

An experiment is a data collection procedure that occurs in controlled conditions to identify and understand causal relationships between variables. Researchers can use many potential designs. The ultimate choice depends on their research question, resources, goals, and constraints. In some fields of study, researchers refer to experimental design as the design of experiments (DOE). Both terms are synonymous.

Scientist who developed an experimental design for her research.

Ultimately, the design of experiments helps ensure that your procedures and data will evaluate your research question effectively. Without an experimental design, you might waste your efforts in a process that, for many potential reasons, can’t answer your research question. In short, it helps you trust your results.

Learn more about Independent and Dependent Variables .

Design of Experiments: Goals & Settings

Experiments occur in many settings, ranging from psychology, social sciences, medicine, physics, engineering, and industrial and service sectors. Typically, experimental goals are to discover a previously unknown effect , confirm a known effect, or test a hypothesis.

Effects represent causal relationships between variables. For example, in a medical experiment, does the new medicine cause an improvement in health outcomes? If so, the medicine has a causal effect on the outcome.

An experimental design’s focus depends on the subject area and can include the following goals:

  • Understanding the relationships between variables.
  • Identifying the variables that have the largest impact on the outcomes.
  • Finding the input variable settings that produce an optimal result.

For example, psychologists have conducted experiments to understand how conformity affects decision-making. Sociologists have performed experiments to determine whether ethnicity affects the public reaction to staged bike thefts. These experiments map out the causal relationships between variables, and their primary goal is to understand the role of various factors.

Conversely, in a manufacturing environment, the researchers might use an experimental design to find the factors that most effectively improve their product’s strength, identify the optimal manufacturing settings, and do all that while accounting for various constraints. In short, a manufacturer’s goal is often to use experiments to improve their products cost-effectively.

In a medical experiment, the goal might be to quantify the medicine’s effect and find the optimum dosage.

Developing an Experimental Design

Developing an experimental design involves planning that maximizes the potential to collect data that is both trustworthy and able to detect causal relationships. Specifically, these studies aim to see effects when they exist in the population the researchers are studying, preferentially favor causal effects, isolate each factor’s true effect from potential confounders, and produce conclusions that you can generalize to the real world.

To accomplish these goals, experimental designs carefully manage data validity and reliability , and internal and external experimental validity. When your experiment is valid and reliable, you can expect your procedures and data to produce trustworthy results.

An excellent experimental design involves the following:

  • Lots of preplanning.
  • Developing experimental treatments.
  • Determining how to assign subjects to treatment groups.

The remainder of this article focuses on how experimental designs incorporate these essential items to accomplish their research goals.

Learn more about Data Reliability vs. Validity and Internal and External Experimental Validity .

Preplanning, Defining, and Operationalizing for Design of Experiments

A literature review is crucial for the design of experiments.

This phase of the design of experiments helps you identify critical variables, know how to measure them while ensuring reliability and validity, and understand the relationships between them. The review can also help you find ways to reduce sources of variability, which increases your ability to detect treatment effects. Notably, the literature review allows you to learn how similar studies designed their experiments and the challenges they faced.

Operationalizing a study involves taking your research question, using the background information you gathered, and formulating an actionable plan.

This process should produce a specific and testable hypothesis using data that you can reasonably collect given the resources available to the experiment.

  • Null hypothesis : The jumping exercise intervention does not affect bone density.
  • Alternative hypothesis : The jumping exercise intervention affects bone density.

To learn more about this early phase, read Five Steps for Conducting Scientific Studies with Statistical Analyses .

Formulating Treatments in Experimental Designs

In an experimental design, treatments are variables that the researchers control. They are the primary independent variables of interest. Researchers administer the treatment to the subjects or items in the experiment and want to know whether it causes changes in the outcome.

As the name implies, a treatment can be medical in nature, such as a new medicine or vaccine. But it’s a general term that applies to other things such as training programs, manufacturing settings, teaching methods, and types of fertilizers. I helped run an experiment where the treatment was a jumping exercise intervention that we hoped would increase bone density. All these treatment examples are things that potentially influence a measurable outcome.

Even when you know your treatment generally, you must carefully consider the amount. How large of a dose? If you’re comparing three different temperatures in a manufacturing process, how far apart are they? For my bone mineral density study, we had to determine how frequently the exercise sessions would occur and how long each lasted.

How you define the treatments in the design of experiments can affect your findings and the generalizability of your results.

Assigning Subjects to Experimental Groups

A crucial decision for all experimental designs is determining how researchers assign subjects to the experimental conditions—the treatment and control groups. The control group is often, but not always, the lack of a treatment. It serves as a basis for comparison by showing outcomes for subjects who don’t receive a treatment. Learn more about Control Groups .

How your experimental design assigns subjects to the groups affects how confident you can be that the findings represent true causal effects rather than mere correlation caused by confounders. Indeed, the assignment method influences how you control for confounding variables. This is the difference between correlation and causation .

Imagine a study finds that vitamin consumption correlates with better health outcomes. As a researcher, you want to be able to say that vitamin consumption causes the improvements. However, with the wrong experimental design, you might only be able to say there is an association. A confounder, and not the vitamins, might actually cause the health benefits.

Let’s explore some of the ways to assign subjects in design of experiments.

Completely Randomized Designs

A completely randomized experimental design randomly assigns all subjects to the treatment and control groups. You simply take each participant and use a random process to determine their group assignment. You can flip coins, roll a die, or use a computer. Randomized experiments must be prospective studies because they need to be able to control group assignment.

Random assignment in the design of experiments helps ensure that the groups are roughly equivalent at the beginning of the study. This equivalence at the start increases your confidence that any differences you see at the end were caused by the treatments. The randomization tends to equalize confounders between the experimental groups and, thereby, cancels out their effects, leaving only the treatment effects.

For example, in a vitamin study, the researchers can randomly assign participants to either the control or vitamin group. Because the groups are approximately equal when the experiment starts, if the health outcomes are different at the end of the study, the researchers can be confident that the vitamins caused those improvements.

Statisticians consider randomized experimental designs to be the best for identifying causal relationships.

If you can’t randomly assign subjects but want to draw causal conclusions about an intervention, consider using a quasi-experimental design .

Learn more about Randomized Controlled Trials and Random Assignment in Experiments .

Randomized Block Designs

Nuisance factors are variables that can affect the outcome, but they are not the researcher’s primary interest. Unfortunately, they can hide or distort the treatment results. When experimenters know about specific nuisance factors, they can use a randomized block design to minimize their impact.

This experimental design takes subjects with a shared “nuisance” characteristic and groups them into blocks. The participants in each block are then randomly assigned to the experimental groups. This process allows the experiment to control for known nuisance factors.

Blocking in the design of experiments reduces the impact of nuisance factors on experimental error. The analysis assesses the effects of the treatment within each block, which removes the variability between blocks. The result is that blocked experimental designs can reduce the impact of nuisance variables, increasing the ability to detect treatment effects accurately.

Suppose you’re testing various teaching methods. Because grade level likely affects educational outcomes, you might use grade level as a blocking factor. To use a randomized block design for this scenario, divide the participants by grade level and then randomly assign the members of each grade level to the experimental groups.

A standard guideline for an experimental design is to “Block what you can, randomize what you cannot.” Use blocking for a few primary nuisance factors. Then use random assignment to distribute the unblocked nuisance factors equally between the experimental conditions.

You can also use covariates to control nuisance factors. Learn about Covariates: Definition and Uses .

Observational Studies

In some experimental designs, randomly assigning subjects to the experimental conditions is impossible or unethical. The researchers simply can’t assign participants to the experimental groups. However, they can observe them in their natural groupings, measure the essential variables, and look for correlations. These observational studies are also known as quasi-experimental designs. Retrospective studies must be observational in nature because they look back at past events.

Imagine you’re studying the effects of depression on an activity. Clearly, you can’t randomly assign participants to the depression and control groups. But you can observe participants with and without depression and see how their task performance differs.

Observational studies let you perform research when you can’t control the treatment. However, quasi-experimental designs increase the problem of confounding variables. For this design of experiments, correlation does not necessarily imply causation. While special procedures can help control confounders in an observational study, you’re ultimately less confident that the results represent causal findings.

Learn more about Observational Studies .

For a good comparison, learn about the differences and tradeoffs between Observational Studies and Randomized Experiments .

Between-Subjects vs. Within-Subjects Experimental Designs

When you think of the design of experiments, you probably picture a treatment and control group. Researchers assign participants to only one of these groups, so each group contains entirely different subjects than the other groups. Analysts compare the groups at the end of the experiment. Statisticians refer to this method as a between-subjects, or independent measures, experimental design.

In a between-subjects design , you can have more than one treatment group, but each subject is exposed to only one condition, the control group or one of the treatment groups.

A potential downside to this approach is that differences between groups at the beginning can affect the results at the end. As you’ve read earlier, random assignment can reduce those differences, but it is imperfect. There will always be some variability between the groups.

In a  within-subjects experimental design , also known as repeated measures, subjects experience all treatment conditions and are measured for each. Each subject acts as their own control, which reduces variability and increases the statistical power to detect effects.

In this experimental design, you minimize pre-existing differences between the experimental conditions because they all contain the same subjects. However, the order of treatments can affect the results. Beware of practice and fatigue effects. Learn more about Repeated Measures Designs .

Assigned to one experimental condition Participates in all experimental conditions
Requires more subjects Fewer subjects
Differences between subjects in the groups can affect the results Uses same subjects in all conditions.
No order of treatment effects. Order of treatments can affect results.

Design of Experiments Examples

For example, a bone density study has three experimental groups—a control group, a stretching exercise group, and a jumping exercise group.

In a between-subjects experimental design, scientists randomly assign each participant to one of the three groups.

In a within-subjects design, all subjects experience the three conditions sequentially while the researchers measure bone density repeatedly. The procedure can switch the order of treatments for the participants to help reduce order effects.

Matched Pairs Experimental Design

A matched pairs experimental design is a between-subjects study that uses pairs of similar subjects. Researchers use this approach to reduce pre-existing differences between experimental groups. It’s yet another design of experiments method for reducing sources of variability.

Researchers identify variables likely to affect the outcome, such as demographics. When they pick a subject with a set of characteristics, they try to locate another participant with similar attributes to create a matched pair. Scientists randomly assign one member of a pair to the treatment group and the other to the control group.

On the plus side, this process creates two similar groups, and it doesn’t create treatment order effects. While matched pairs do not produce the perfectly matched groups of a within-subjects design (which uses the same subjects in all conditions), it aims to reduce variability between groups relative to a between-subjects study.

On the downside, finding matched pairs is very time-consuming. Additionally, if one member of a matched pair drops out, the other subject must leave the study too.

Learn more about Matched Pairs Design: Uses & Examples .

Another consideration is whether you’ll use a cross-sectional design (one point in time) or use a longitudinal study to track changes over time .

A case study is a research method that often serves as a precursor to a more rigorous experimental design by identifying research questions, variables, and hypotheses to test. Learn more about What is a Case Study? Definition & Examples .

In conclusion, the design of experiments is extremely sensitive to subject area concerns and the time and resources available to the researchers. Developing a suitable experimental design requires balancing a multitude of considerations. A successful design is necessary to obtain trustworthy answers to your research question and to have a reasonable chance of detecting treatment effects when they exist.

Share this:

experimental design 3 variables

Reader Interactions

' src=

March 23, 2024 at 2:35 pm

Dear Jim You wrote a superb document, I will use it in my Buistatistics course, along with your three books. Thank you very much! Miguel

' src=

March 23, 2024 at 5:43 pm

Thanks so much, Miguel! Glad this post was helpful and I trust the books will be as well.

' src=

April 10, 2023 at 4:36 am

What are the purpose and uses of experimental research design?

Comments and Questions Cancel reply

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • A Quick Guide to Experimental Design | 5 Steps & Examples

A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans . Revised on 5 December 2022.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying. 

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any  extraneous variables that might influence your results. If if random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

Step 1: define your variables, step 2: write your hypothesis, step 3: design your experimental treatments, step 4: assign your subjects to treatment groups, step 5: measure your dependent variable, frequently asked questions about experimental design.

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Research question Independent variable Dependent variable
Phone use and sleep Minutes of phone use before sleep Hours of sleep per night
Temperature and soil respiration Air temperature just above the soil surface CO2 respired from soil

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Extraneous variable How to control
Phone use and sleep in sleep patterns among individuals. measure the average difference between sleep with phone use and sleep without phone use rather than the average amount of sleep per treatment group.
Temperature and soil respiration also affects respiration, and moisture can decrease with increasing temperature. monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.

Prevent plagiarism, run a free check.

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Null hypothesis (H ) Alternate hypothesis (H )
Phone use and sleep Phone use before sleep does not correlate with the amount of sleep a person gets. Increasing phone use before sleep leads to a decrease in sleep.
Temperature and soil respiration Air temperature does not correlate with soil respiration. Increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design .
  • A between-subjects design vs a within-subjects design .

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design , every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
Completely randomised design Randomised block design
Phone use and sleep Subjects are all randomly assigned a level of phone use using a random number generator. Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
Temperature and soil respiration Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.

Sometimes randomisation isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Between-subjects (independent measures) design Within-subjects (repeated measures) design
Phone use and sleep Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised.
Temperature and soil respiration Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. Every plot receives each warming treatment (1, 3, 5, 8, and 10C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised.

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations.

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 12 August 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/

Is this article helpful?

Rebecca Bevans

Rebecca Bevans

Teach yourself statistics

Experimental Design for ANOVA

There is a close relationship between experimental design and statistical analysis. The way that an experiment is designed determines the types of analyses that can be appropriately conducted.

In this lesson, we review aspects of experimental design that a researcher must understand in order to properly interpret experimental data with analysis of variance.

What Is an Experiment?

An experiment is a procedure carried out to investigate cause-and-effect relationships. For example, the experimenter may manipulate one or more variables (independent variables) to assess the effect on another variable (the dependent variable).

Conclusions are reached on the basis of data. If the dependent variable is unaffected by changes in independent variables, we conclude that there is no causal relationship between the dependent variable and the independent variables. On the other hand, if the dependent variable is affected, we conclude that a causal relationship exists.

Experimenter Control

One of the features that distinguish a true experiment from other types of studies is experimenter control of the independent variable(s).

In a true experiment, an experimenter controls the level of the independent variable administered to each subject. For example, dosage level could be an independent variable in a true experiment; because an experimenter can manipulate the dosage administered to any subject.

What is a Quasi-Experiment?

A quasi-experiment is a study that lacks a critical feature of a true experiment. Quasi-experiments can provide insights into cause-and-effect relationships; but evidence from a quasi-experiment is not as persuasive as evidence from a true experiment. True experiments are the gold standard for causal analysis.

A study that used gender or IQ as an independent variable would be an example of a quasi-experiment, because the study lacks experimenter control over the independent variable; that is, an experimenter cannot manipulate the gender or IQ of a subject.

As we discuss experimental design in the context of a tutorial on analysis of variance, it is important to point out that experimenter control is a requirement for a true experiment; but it is not a requirement for analysis of variance. Analysis of variance can be used with true experiments and with quasi-experiments that lack only experimenter control over the independent variable.

Note: Henceforth in this tutorial, when we refer to an experiment, we will be referring to a true experiment or to a quasi-experiment that is almost a true experiment, in the sense that it lacks only experimenter control over the independent variable.

What Is Experimental Design?

The term experimental design refers to a plan for conducting an experiment in such a way that research results will be valid and easy to interpret. This plan includes three interrelated activities:

  • Write statistical hypotheses.
  • Collect data.
  • Analyze data.

Let's look in a little more detail at these three activities.

Statistical Hypotheses

A statistical hypothesis is an assumption about the value of a population parameter . There are two types of statistical hypotheses:

H 0: μ i = μ j

Here, μ i is the population mean for group i , and μ j is the population mean for group j . This hypothesis makes the assumption that population means in groups i and j are equal.

H 1: μ i ≠ μ j

This hypothesis makes the assumption that population means in groups i and j are not equal.

The null hypothesis and the alternative hypothesis are written to be mutually exclusive. If one is true, the other is not.

Experiments rely on sample data to test the null hypothesis. If experimental results, based on sample statistics , are consistent with the null hypothesis, the null hypothesis cannot be rejected; otherwise, the null hypothesis is rejected in favor of the alternative hypothesis.

Data Collection

The data collection phase of experimental design is all about methodology - how to run the experiment to produce valid, relevant statistics that can be used to test a null hypothesis.

Identify Variables

Every experiment exists to examine a cause-and-effect relationship. With respect to the relationship under investigation, an experimental design needs to account for three types of variables:

  • Dependent variable. The dependent variable is the outcome being measured, the effect in a cause-and-effect relationship.
  • Independent variables. An independent variable is a variable that is thought to be a possible cause in a cause-and-effect relationship.
  • Extraneous variables. An extraneous variable is any other variable that could affect the dependent variable, but is not explicitly included in the experiment.

Note: The independent variables that are explicitly included in an experiment are also called factors .

Define Treatment Groups

In an experiment, treatment groups are built around factors, each group defined by a unique combination of factor levels.

For example, suppose that a drug company wants to test a new cholesterol medication. The dependent variable is total cholesterol level. One independent variable is dosage. And, since some drugs affect men and women differently, the researchers include an second independent variable - gender.

This experiment has two factors - dosage and gender. The dosage factor has three levels (0 mg, 50 mg, and 100 mg), and the gender factor has two levels (male and female). Given this combination of factors and levels, we can define six unique treatment groups, as shown below:

Gender Dose
0 mg 50 mg 100 mg
Male Group 1 Group 2 Group 3
Female Group 4 Group 5 Group 6

Note: The experiment described above is an example of a quasi-experiment, because the gender factor cannot be manipulated by the experimenter.

Select Factor Levels

A factor in an experiment can be described by the way in which factor levels are chosen for inclusion in the experiment:

  • Fixed factor. The experiment includes all factor levels about which inferences are to be made.
  • Random factor. The experiment includes a random sample of levels from a much bigger population of factor levels.

Experiments can be described by the presence or absence of fixed or random factors:

  • Fixed-effects model. All of the factors in the experiment are fixed.
  • Random-effects model. All of the factors in the experiment are random.
  • Mixed model. At least one factor in the experiment is fixed, and at least one factor is random.

The use of fixed factors versus random factors has implications for how experimental results are interpreted. With a fixed factor, results apply only to factor levels that are explicitly included in the experiment. With a random factor, results apply to every factor level from the population.

For example, consider the blood pressure experiment described above. Suppose the experimenter only wanted to test the effect of three particular dosage levels - 0 mg, 50 mg, and 100 mg. He would include those dosage levels in the experiment, and any research conclusions would apply to only those particular dosage levels. This would be an example of a fixed-effects model.

On the other hand, suppose the experimenter wanted to test the effect of any dosage level. Since it is not practical to test every dosage level, the experimenter might choose three dosage levels at random from the population of possible dosage levels. Any research conclusions would apply not only to the selected dosage levels, but also to other dosage levels that were not included explicitly in the experiment. This would be an example of a random-effects model.

Select Experimental Units

The experimental unit is the entity that provides values for the dependent variable. Depending on the needs of the study, an experimental unit may be a person, animal, plant, product - anything. For example, in the cholesterol study described above, researchers measured cholesterol level (the dependent variable) of people; so the experimental units were people.

Note: When the experimental units are people, they are often referred to as subjects . Some researchers prefer the term participant , because subject has a connotation that the person is subservient.

If time and money were no object, you would include the entire population of experimental units in your experiment. In the real world, where there is never enough time or money, you will usually select a sample of experimental units from the population.

Ultimately, you want to use sample data to make inferences about population parameters. With that in mind, it is best practice to draw a random sample of experimental units from the population. This provides a defensible, statistical basis for generalizing from sample findings to the larger population.

Finally, it is important to consider sample size. The larger the sample, the greater the statistical power ; and the more confidence you can have in your results.

Assign Experimental Units to Treatments

Having selected a sample of experimental units, we need to assign each unit to one or more treatment groups. Here are two ways that you might assign experimental units to groups:

  • Independent groups design. Each experimental unit is randomly assigned to one, and only one, treatment group. This is also known as a between-subjects design .
  • Repeated measures design. Experimental units are assigned to more than one treatment group. This is also known as a within-subjects design .

Control for Extraneous Variables

Extraneous variables can mask effects of independent variables. Therefore, a good experimental design controls potential effects of extraneous variables. Here are a few strategies for controlling extraneous variables:

  • Randomization Assign subjects randomly to treatment groups. This tends to distribute effects of extraneous variables evenly across groups.
  • Repeated measures design. To control for individual differences between subjects (age, attitude, religion, etc.), assign each subject to multiple treatments. This strategy is called using subjects as their own control.
  • Counterbalancing. In repeated measures designs, randomize or reverse the order of treatments among subjects to control for order effects (e.g., fatigue, practice).

As we describe specific experimental designs in upcoming lessons, we will point out the strategies that are used with each design to control the confounding effects of extraneous variables.

Data Analysis

Researchers follow a formal process to determine whether to reject a null hypothesis, based on sample data. This process, called hypothesis testing, consists of five steps:

  • Formulate hypotheses. This involves stating the null and alternative hypotheses. Because the hypotheses are mutually exclusive, if one is true, the other must be false.
  • Choose the test statistic. This involves specifying the statistic that will be used to assess the validity of the null hypothesis. Typically, in analysis of variance studies, researchers compute a F ratio to test hypotheses.
  • Compute a P-value, based on sample data. Suppose the observed test statistic is equal to S . The P-value is the probability that the experiment would yield a test statistic as extreme as S , assuming the null hypothesis is true.
  • Choose a significance level. The significance level, denoted by α, is the probability of rejecting the null hypothesis when it is really true. Researchers often choose a significance level of 0.05 or 0.01.
  • Test the null hypothesis. If the P-value is smaller than the significance level, we reject the null hypothesis; if it is larger, we fail to reject.

A good experimental design includes a precise plan for data analysis. Before the first data point is collected, a researcher should know how experimental data will be processed to accept or reject the null hypotheses.

Test Your Understanding

In a well-designed experiment, which of the following statements is true?

I. The null hypothesis and the alternative hypothesis are mutually exclusive. II. The null hypothesis is subjected to statistical test. III. The alternative hypothesis is subjected to statistical test.

(A) I only (B) II only (C) III only (D) I and II (E) I and III

The correct answer is (D). The null hypothesis and the alternative hypothesis are mutually exclusive; if one is true, the other must be false. Only the null hypothesis is subjected to statistical test. When the null hypothesis is accepted, the alternative hypothesis is rejected. The alternative hypothesis is not tested explicitly.

In a true experiment, each subject is assigned to only one treatment group. What type of design is this?

(A) Independent groups design (B) Repeated measures design (C) Within-subjects design (D) None of the above (E) All of the above

The correct answer is (A). In an independent groups design, each experimental unit is assigned to one treatment group. In the other two designs, each experimental unit is assigned to more than one treatment group.

In a true experiment, which of the following does the experimenter control?

(A) How to manipulate independent variables. (B) How to assign subjects to treatment conditions. (C) How to control for extraneous variables. (D) None of the above (E) All of the above

The correct answer is (E). The experimenter chooses factors and factor levels for the experiment, assigns experimental units to treatment groups (often through a random process), and implements strategies (randomization, counterbalancing, etc.) to control the influence of extraneous variables.

Experimental Design

  • What is Experimental Design?
  • Validity in Experimental Design
  • Types of Design
  • Related Topics

1. What is Experimental Design?

Experimental design is a way to carefully plan experiments in advance so that your results are both objective and valid . The terms “Experimental Design” and “Design of Experiments” are used interchangeably and mean the same thing. However, the medical and social sciences tend to use the term “Experimental Design” while engineering, industrial and computer sciences favor the term “Design of experiments.”

Design of experiments involves:

  • The systematic collection of data
  • A focus on the design itself, rather than the results
  • Planning changes to independent (input) variables and the effect on dependent variables or response variables
  • Ensuring results are valid, easily interpreted, and definitive.

Ideally, your experimental design should:

  • Describe how participants are allocated to experimental groups. A common method is completely randomized design, where participants are assigned to groups at random. A second method is randomized block design, where participants are divided into homogeneous blocks (for example, age groups) before being randomly assigned to groups.
  • Minimize or eliminate confounding variables , which can offer alternative explanations for the experimental results.
  • Allow you to make inferences about the relationship between independent variables and dependent variables .
  • Reduce variability , to make it easier for you to find differences in treatment outcomes.

The most important principles 1 are:

  • Randomization : the assignment of study components by a completely random method, like simple random sampling . Randomization eliminates bias from the results
  • Replication : the experiment must be replicable by other researchers. This is usually achieved with the use of statistics like the standard error of the sample mean or confidence intervals .
  • Blocking: controlling sources of variation in the experimental results.

2. Variables in Design of Experiments

  • What is a Confounding Variable?
  • What is a Control Variable?
  • What is a Criterion Variable?
  • What are Endogenous Variables?
  • What is a Dependent Variable?
  • What is an Explanatory Variable?
  • What is an Intervening Variable?
  • What is a Manipulated Variable?
  • What is an Outcome Variable?

Back to Top

3. Validity in Design of Experiments

  • What is Concurrent Validity?
  • What is Construct Validity?
  • What is Consequential Validity?
  • What is Convergent Validity?
  • What is Criterion Validity?
  • What is Ecological validity?
  • What is External Validity?
  • What is Face Validity?
  • What is Internal Validity?
  • What is Predictive Validity?

4. Design of Experiments: Types

  • Adaptive designs.
  • Balanced Latin Square Design.
  • Balanced and Unbalanced Designs .
  • Between Subjects Design.
  • What are Case Studies?
  • What is a Case-Control Study?
  • What is a Cohort Study?
  • Completely Randomized Design.
  • Cross Lagged Panel Design .

Cross Sectional Research

  • Cross Sequential Design.
  • Definite Screening Design

Factorial Design.

  • Flexible Design.
  • Group sequential Design.
  • Longitudinal Research.

Matched-Pairs Design.

  • Parallel Design.
  • Observational Study .
  • Plackett-Burman Design.

Pretest-Posttest Design.

  • Prospective Study.

Quasi-Experimental Design.

Randomized block design., randomized controlled trial.

  • Repeated Measures Design .
  • Retrospective Study.
  • Split-Plot Design.
  • Strip-Plot Design .
  • Stepped Wedge Designs .
  • What is Survey Research?

Within subjects Design.

Between subjects design (independent measures)., what is between subjects design.

experimental design

In between subjects design, separate groups are created for each treatment. This type of experimental design is sometimes called independent measures design because each participant is assigned to only one treatment group.For example, you might be testing a new depression medication: one group receives the actual medication and the other receives a placebo . Participants can only be a member of one of the groups (either the treatment or placebo group). A new group is created for every treatment. For example, if you are testing two depression medications, you would have:

  • Group 1 (Medication 1).
  • Group 2 (Medication 2).
  • Group 3 (Placebo).

Advantages and Disadvantages of Between Subjects Design.

Advantages..

Between subjects design is one of the simplest types of experimental design setup. Other advantages include:

  • Multiple treatments and treatment levels can be tested at the same time.
  • This type of design can be completed quickly.

Disadvantages.

A major disadvantage in this type of experimental design is that as each participant is only being tested once, the addition of a new treatment requires the formation of another group. The design can become extremely complex if more than a few treatments are being tested. Other disadvantages include:

  • Differences in individuals (i.e. age, race, sex) may skew results and are almost impossible to control for in this experimental design.
  • Bias can be an issue unless you control for this factor using experimental blinds (either a single blind experiment–where the participant doesn’t know if they are getting a treatment or placebo–or a double blind, where neither the participant nor the researcher know).
  • Generalization issues means that you may not be able to extrapolate your results to a wider audience.
  • Environmental bias can be a problem with between subjects design. For example, let’s say you were giving one group of college students a standardized test at 8 a.m. and a second group the test at noon. Students who took the 8 a.m. test may perform poorly simply because they weren’t awake yet.

Back to Top.

Completely Randomized Experimental Design.

What is a completely randomized design.

A completely randomized design (CRD) is an experiment where the treatments are assigned at random. Every experimental unit has the same odds of receiving a particular treatment. This design is usually only used in lab experiments, where environmental factors are relatively easy to control for; it is rarely used out in the field, where environmental factors are usually impossible to control. When a CRD has two treatments, it is equivalent to a t-test .

A completely randomized design is generally implemented by:

  • Listing the treatment levels or treatment combinations.
  • Assigning each level/combination a random number.
  • Sorting the random numbers in order, to produce a random application order for treatments.

However, you could use any method that completely randomizes the treatments and experimental units, as long as you take care to ensure that:

  • The assignment is truly random.
  • You have accounted for extraneous variables .

Completely Randomized Design Example.

completely randomized design

Completely Randomized Design with Subsampling.

This subset of CRD is usually used when experimental units are limited. Subsampling might include several branches of a particular tree, or several samples from an individual plot. Back to Top.

What is a Factorial Design?

A factorial experimental design is used to investigate the effect of two or more independent variables on one dependent variable . For example, let’s say a researcher wanted to investigate components for increasing SAT Scores . The three components are:

  • SAT intensive class (yes or no).
  • SAT Prep book (yes or no).
  • Extra homework (yes or no).

The researcher plans to manipulate each of these independent variables. Each of the independent variables is called a factor , and each factor has two levels (yes or no). As this experiment has 3 factors with 2 levels, this is a 2 x 2 x 2 = 2 3 factorial design. An experiment with 3 factors and 3 levels would be a 3 3 factorial design and an experiment with 2 factors and 3 levels would be a 3 2 factorial design.

The vast majority of factorial experiments only have two levels. In some experiments where the number of level/factor combinations are unmanageable, the experiment can be split into parts (for example, by half), creating a fractional experimental design.

Null Outcome.

A null outcome is when the experiment’s outcome is the same regardless of how the levels and factors were combined. In the above example, that would mean no amount of SAT prep (book and class, class and extra homework etc.) could increase the scores of the students being studied.

Main Effect and Interaction Effect.

Two types of effects are considered when analyzing the results from a factorial experiment: main effect and interaction effect . The main effect is the effect of an independent variable (in this case, SAT prep class or SAT book or extra homework) on the dependent variable (SAT Scores). For a main effect to exist, you’d want to see a consistent trend across the different levels. For example, you might conclude that students who took the SAT prep class scored consistently higher than students who did not. An interaction effect occurs between factors. For example, one group of students who took the SAT class and used the SAT prep book showed an increase in SAT scores while the students who took the class but did not use the book didn’t show any increase. You could infer that there is an interaction between the SAT class and use of the SAT prep book. Back to Top.

What is Matched Pairs Design?

Matched pairs design is a special case of randomized block design. In this design, two treatments are assigned to homogeneous groups (blocks) of subjects. The goal is to maximize homogeneity in each pair. In other words, you want the pairs to be as similar as possible. The blocks are composed of matched pairs which are randomly assigned a treatment (commonly the drug or a placebo).

matched pairs design

Stacking in Matched Pairs Design.

You can think of matched pair design as a type of stacked randomized block design . With either design, your goal is to control for some variable that’s going to skew your results. In the above experiment, it isn’t just age that could account for differences in how people respond to drugs, several other confounding variables could also affect your experiment. The purpose of the blocks is to minimize a single source of variability (for example, differences due to age). When you create matched pairs, you’re creating blocks within blocks, enabling you to control for multiple sources of potential variability. You should construct your matched pairs carefully, as it’s often impossible to account for all variables without creating a huge and complex experiment. Therefore, you should create your blocks starting with which candidates are most likely to affect your results. Back to Top.

Observational Study

What is an observational study.

An observational study (sometimes called a natural experiment or a quasi-experiment) is where the researcher observes the study participants and measures variables without assigning any treatments. For example, let’s say you wanted to find out the effect of cognitive therapy for ADHD. In an experimental study, you would assign some patients cognitive therapy and other patients some other form of treatment (or no treatment at all). In an observational study you would find patients who are already undergoing the therapy , and some who are already participating in other therapies (or no therapy at all).

Ideally, treatments should be investigated experimentally with random assignment of treatments to participants. This random assignment means that measured and unmeasured characteristics are evenly divided over the groups. In other words, any differences between the groups would be due to chance. Any statistical tests you run on these types of studies would be reliable. However, it isn’t always ethical or feasible to run experimental studies, especially in medical studies involving life-threatening or potentially disabled studies. In these cases, observational studies are used.

Examples of Observational Studies

Selective Serotonin Reuptake Inhibitors and Violent Crime: A Cohort Study A study published in PLOS magazine studied the uncertain relationship between SSRIs (like Prozac and Paxil) and Violent Crime. The researchers “…extracted information on SSRIs prescribed in Sweden between 2006 and 2009 from the Swedish Prescribed Drug Register and information on convictions for violent crimes for the same period from the Swedish national crime register. They then compared the rate of violent crime while individuals were prescribed SSRIs with the rate of violent crime in the same individuals while not receiving medication.” The study findings found an increased association between SSRI use and violent crimes.

Cleaner Air Found to Add 5 Months to Life A Brigham Young University study examined the connected between air quality and life expectancy. The researchers looked at life expectancy data from 51 metropolitan areas and compared the figures to air quality improvements in each region from the 1980s to 1990s. After taking into account factors like smoking and socioeconomic status, the researchers found that an average of about five months life expectancy was attributed to clean air. The New York Times printed a summary of the results here .

Effects of Children of Occupational Exposures to Lead Researchers matched 33 children whose parents were exposed to lead at work with 33 children who were the same age and loved in the same neighborhood. Elevated levels of lead were found in the exposed children. This was attributed to levels of lead that the parents were exposed to at work, and poor hygiene practices of the parent (UPenn).

Longitudinal Research

Longitudinal research is an observational study of the same variables over time. Studies can last weeks, months or even decades. The term “longitudinal” is very broad, but generally means to collect data over more than one period, from the same participants(or very similar participants). According to sociologist Scott Menard, Ph.D. , the research should also involve some comparison of data among or between periods. However, the longitudinal research doesn’t necessarily have to be collected over time. Data could be collected at one point in time but include retrospective data. For example, a participant could be asked about their prior exercise habits up to and including the time of the study.

The purpose of Longitudinal Research is to:

  • Record patterns of change. For example, the development of emphysema over time.
  • Establish the direction and magnitude of causal relationships. For example, women who smoke are 12 times more likely to die of emphysema than non-smokers.

Cross sectional research involves collecting data at one specific point in time. You can interact with individuals directly, or you could study data in a database or other media. For example, you could study medical databases to see if illegal drug use results in heart disease. If you find a correlation ( what is correlation? ) between illegal drug use and heart disease, that would support the claim that illegal drug use may increase the risk of heart disease.

Cross sectional research is a descriptive study ; you only record what you find and you don’t manipulate variables like in traditional experiments. It is most often used to look at how often a phenomenon occurs in a population .

Advantages and Disadvantages of Cross Sectional Research

  • Can be very inexpensive if you already have a database (for example, medical history data in a hospital database).
  • Allows you to look at many factors at the same time, like age/weight/height/tobacco use/drug use.

Disadvantages

  • Can result in weak evidence, compared to cohort studies (which cost more and take longer).
  • Available data may not be suited to your research question. For example, if you wanted to know if sugar consumption leads to obesity, you are unlikely to find data on sugar consumption in a medical database.
  • Cross sectional research studies are usually unable to control for confounding variables . One reason for this is that it’s usually difficult to find people who are similar enough. For example, they might be decades apart in age or they might be born in very different geographic regions.

cross sectional research

Cross sectional research can give the “big picture” and can be a foundation to suggest other areas for more expensive research. For example, if the data suggests that there may be a relationship between sugar consumption and obesity, this could bolster an application for funding more research in this area.

Cross-Sectional vs Longitudinal Research

longitudinal research

Both cross-sectional and longitudinal research studies are observational. They are both conducted without any interference to the study participants. Cross-sectional research is conducted at a single point in time while a longitudinal study can be conducted over many years.

For example, let’s say researchers wanted to find out if older adults who gardened had lower blood pressure than older adults who did not garden. In a cross-sectional study, the researchers might select 100 people from different backgrounds, ask them about their gardening habits and measure their blood pressure. The study would be conducted at approximately the same period of time (say, over a week). In a longitudinal study, the questions and measurements would be the same. But the researchers would follow the participants over time. They may record the answers and measurements every year.

One major advantage of longitudinal research is that over time, researchers are more able to provide a cause-and-effect relationship. With the blood pressure example above, cross-sectional research wouldn’t give researchers information about what blood pressure readings were before the study. For example, participants may have had lower blood pressure before gardening. Longitudinal research can detect changes over time, both at the group and at the individual level.

Types of Longitudinal Design

Longitudinal Panel Design is the “traditional” type of longitudinal design, where the same data is collected from the same participants over a period of time. Repeated cross-sectional studies can be classified as longitudinal. Other types are:

  • Total population design, where the total population is surveyed in each study period.
  • Revolving panel design, where new participants are selected each period.

What is Pretest Posttest Design?

pretest posttest design

A pretest posttest design is an experiment where measurements are taken both before and after a treatment . The design means that you are able to see the effects of some type of treatment on a group. Pretest posttest designs may be quasi-experimental, which means that participants are not assigned randomly. However, the most usual method is to randomly assign the participants to groups in order to control for confounding variables. Three main types of pretest post design are commonly used:

  • Randomized Control-Group Pretest Posttest Design.
  • Randomized Solomon Four-Group Design.
  • Nonrandomized Control Group Pretest-Posttest Design.

1. Randomized Control-Group Pretest Posttest Design.

The pre-test post-test control group design is also called the classic controlled experimental design . The design includes both a control and a treatment group. For example, if you wanted to gauge if a new way of teaching math was effective, you could:

  • Randomly assign participants to a treatment group or a control group .
  • Administer a pre-test to the treatment group and the control group.
  • Use the new teaching method on the treatment group and the standard method on the control group, ensuring that the method of treatment is the only condition that is different.
  • Administer a post-test to both groups.
  • Assess the differences between groups.

Two issues can affect the Randomized Control-Group Pretest Posttest Design:

  • Internal validity issues: maturation (i.e. biological changes in participants can affect differences between pre- and post-tests) and history (where participants experience something outside of the treatment that can affect scores).
  • External validity issues : Interaction of the pre-test and the treatment can occur if participants are influenced by the tone or content of the question. For example, a question about how many hours a student spends on homework might prompt the student to spend more time on homework.

2. Randomized Solomon Four-Group Design.

In this type of pretest posttest design, four groups are randomly assigned: two experimental groups E1/E2 and two control groups C1/C2. Groups E1 and C1 complete a pre-test and all four groups complete a post-test. This better controls for the interaction of pretesting and posttesting; in the “classic” design, participants may be unduly influenced by the questions on the pretest.

3. Nonrandomized Control Group Pretest-Posttest Design.

This type of test is similar to the “classic” design, but participants are not randomly assigned to groups. Nonrandomization can be more practical in real-life, when you are dealing with groups like students or employees who are already in classes or departments; randomization (i.e. moving people around to form new groups) could prove disruptive. This type of experimental design suffers from problems with internal validity more so than the other two types. Back to Top.

quasi-experimental design

What is a Quasi-Experimental Design?

A quasi-experimental design has much the same components as a regular experiment, but is missing one or more key components. The three key components of a traditional experiment are:

  • Pre-post test design.
  • Treatment and control groups.
  • Random assignment of subjects to groups.

You may want or need to deliberately leave out one of these key components. This could be for ethical or methodological reasons. For example:

  • It would be unethical to withhold treatment from a control group. This is usually the case with life-threatening illness, like cancer.
  • It would be unethical to treat patients; for example, you might want to find out if a certain drug causes blindness.
  • A regular experiment might be expensive and impossible to fund.
  • An experiment could technically fail due to loss of participants, but potentially produce useful data.
  • It might be logistically impossible to control for all variables in a regular experiment.

These types of issues crop up frequently, leading to the widespread acceptance of quasi-experimental designs — especially in the social sciences. Quasi-experimental designs are generally regarded as unreliable and unscientific in the physical and biological sciences.

Some experiments naturally fall into groups. For example, you might want to compare educational experiences of first, middle and last born children. Random assignment isn’t possible, so these experiments are quasi-experimental by nature.

Quasi-Experimental Design Examples.

The general form of a quasi-experimental design thesis statement is “What effect does (a certain intervention or program) have on a (specific population)”?

Example 1 : Does smoking during pregnancy leads to low birth weight? It would be unethical to randomly assign one group of mothers packs of cigarettes to smoke. The researcher instead asks the mothers if they smoked during pregnancy and assigns them to groups after the fact.

Example 2 : Does thoughtfully designed software improve learning outcomes for students? This study used a pre-post test design and multiple classrooms to show how technology can be successfully implemented in schools.

Example 3 : Can being mentored for your job lead to increased job satisfaction? This study followed 73 employees, some who were mentored and some who were not. Back to Top.

What is Randomized Block Design?

In randomized block design, the researcher divides experimental subjects into homogeneous blocks. Treatments are then randomly assigned to the blocks. The variability within blocks should be greater than the variability between blocks. In other words, you need to make sure that the blocks contain subjects that are very similar. For example, you could put males in one block and females in a second block. This method is practically identical to stratified random sampling (SRS), except the blocks in SRS are called “ strata .” Randomized block design reduces variability in experiments.

randomized block design

Age isn’t the only potential source of variability. Other blocking factors that you could consider for this type of experiment include:

  • Consumption of certain foods.
  • Use of over the counter food supplements.
  • Adherence to dosing regimen.
  • Differences in metabolism due to genetic differences, liver or kidney issues, race, or sex.
  • Coexistence of other disorders.
  • Use of other drugs.

Randomized block experimental design is sometimes called randomized complete block experimental design , because the word “complete” makes it clear that all subjects are included in the experiment, not just a sample. However, the setup of the experiment usually makes it clear that all subjects are included, so most people will drop the word complete . Back to Top.

What is a Randomized Controlled Trial?

randomized controlled trial

A randomized controlled trial is an experiment where the participants are randomly allocated to two or more groups to test a specific treatment or drug. Participants are assigned to either an experimental group or a comparison group. Random allocation means that all participants have the same chance of being placed in either group. The experimental group receives a treatment or intervention, for example:

  • Diagnostic Tests.
  • Experimental medication.
  • Interventional procedures.
  • Screening programs.
  • Specific types of education.

Participants in the comparison group receive a placebo (a dummy treatment), an alternative treatment, or no treatment at all. There are many randomization methods available. For example, simple random sampling , stratified random sampling or systematic random sampling. The common factor for all methods is that researchers, patients and other parties cannot tell ahead of time who will be placed in which group.

Advantages and Disadvantages of Randomized Controlled Trials

  • Random allocation can cancel out population bias ; it ensures that any other possible causes for the experimental results are split equally between groups.
  • Blinding is easy to include in this type of experiment.
  • Results from the experiment can be analyzed with statistical tests and used to infer other possibilities, like the likelihood of the method working for all populations.
  • Participants are readily identifiable as members of a specific population./li>
  • Generally more expensive and more time consuming than other methods.
  • Very large sample sizes (over 5,000 participants) are often needed.
  • Random controlled trials cannot uncover causation/risk factors. For example, ethical concerns would prevent a randomized controlled trial investigating the risk factors for smoking.
  • This type of experimental design is unsuitable for outcomes which take a long time to develop. Cohort studies may be a more suitable alternative.
  • Some programs, for example cancer screening, are unsuited for random allocation of participants (again, due to ethical concerns).
  • Volunteer bias can be an issue.

What is a Within Subjects Experimental Design?

within subjects design

In a within subjects experimental design, participants are assigned more than one treatment: each participant experiences all the levels for any categorical explanatory variable . The levels can be ordered, like height or time. Or they can be un-ordered. For example, let’s say you are testing if blood pressure is raised when watching horror movies vs. romantic comedies. You could have all the participants watch a scary movie, then measure their blood pressure. Later, the same group of people watch a romantic comedy, and their blood pressure is measured.

Within subjects designs are frequently used in pre-test/post-test scenarios. For example, if a teacher wants to find out if a new classroom strategy is effective, they might test children before the strategy is in place and then after the strategy is in place.

Within subjects designs are similar to other analysis of variance designs, in that it’s possible to have a single independent variable, or multiple factorial independent variables. For example, three different depression inventories could be given at one, three, and six month intervals.

Advantages and Disadvantages of Within Subjects Experimental Design.

  • It requires fewer participants than the between subjects design. If a between subjects design were used for the blood pressure example above, double the amount of participants would be required. Within subjects design therefore requires fewer resources and is generally cheaper.
  • Individual difference between participants are controlled for, as each participant acts as their own control. As the subjects are measured multiple times, this better enables the researcher to hone in on individual differences so that they can be removed from the analysis.
  • Effects from one test could carry over to the next, a phenomenon called the “range effect.” In the blood pressure example, if participants were asked to watch the scary movie first, their blood pressure could stay elevated for hours afterwards, skewing the results from the romantic comedy.
  • Participants can exhibit “practice effects”, where they improve scores simply by taking the same test multiple times. This is often an issue on pre-test/post-test studies.
  • Data is not completely independent, which may effect running hypothesis tests , like ANOVA .

References : Merck Manual. Retrieved Jan 1, 2016 from: http://www.merckmanuals.com/professional/clinical-pharmacology/factors-affecting-response-to-drugs/introduction-to-factors-affecting-response-to-drugs Penn State: Basic Principles of DOE. Retrieved Jan 1, 2016 from: https://onlinecourses.science.psu.edu/stat503/node/67 Image: SUNY Downstate. Retrieved Jan 1, 2016 from: http://library.downstate.edu/EBM2/2200.htm

5. Related Topics

  • Accuracy and Precision .
  • Block plots .
  • Cluster Randomization .
  • What is Clustering?
  • What is the Cohort Effect?
  • What is a Control Group?
  • What is Counterbalancing?
  • Data Collection Methods
  • What is an Effect Size?
  • What is a Experimental Group (or Treatment Group)?
  • Fixed, Random, and Mixed Effects Models
  • What are generalizability and transferability?
  • What is Grounded Theory?
  • The Hawthorne Effect .
  • The Hazard Ratio.
  • Inter-rater Reliability.
  • Main Effects .
  • Order Effects .
  • The Placebo Effect
  • What is the Practice Effect?
  • Primary and Secondary Data .
  • What is Qualitative Research?
  • What is Quantitative Research?
  • What is a Randomized Clinical Trial?
  • Random Selection and Assignment.
  • Randomization .
  • Recall Bias .
  • What is Response Bias?
  • Research Methods (includes Quantitative and Qualitative).
  • Subgroup Analysis .
  • What is Survey Sampling?
  • Systematic Errors.
  • Treatment Diffusion.

Agresti A. (1990) Categorical Data Analysis. John Wiley and Sons, New York. Cook, T. (2005). Introduction to Statistical Methods for Clinical Trials(Chapman & Hall/CRC Texts in Statistical Science) 1st Edition. Chapman and Hall/CRC Friedman (2015). Fundamentals of Clinical Trials 5th ed. Springer.” Dodge, Y. (2008). The Concise Encyclopedia of Statistics . Springer. Everitt, B. S.; Skrondal, A. (2010), The Cambridge Dictionary of Statistics , Cambridge University Press. Gonick, L. (1993). The Cartoon Guide to Statistics . HarperPerennial. Kotz, S.; et al., eds. (2006), Encyclopedia of Statistical Sciences , Wiley. Levine, D. (2014). Even You Can Learn Statistics and Analytics: An Easy to Understand Guide to Statistics and Analytics 3rd Edition. Pearson FT Press UPenn. http://finzi.psych.upenn.edu/library/granovaGG/html/blood_lead.html. Retrieved May 1, 2020.

Logo for VCU Pressbooks

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

Part 3: Using quantitative methods

13. Experimental design

Chapter outline.

  • What is an experiment and when should you use one? (8 minute read)
  • True experimental designs (7 minute read)
  • Quasi-experimental designs (8 minute read)
  • Non-experimental designs (5 minute read)
  • Critical, ethical, and critical considerations  (5 minute read)

Content warning : examples in this chapter contain references to non-consensual research in Western history, including experiments conducted during the Holocaust and on African Americans (section 13.6).

13.1 What is an experiment and when should you use one?

Learning objectives.

Learners will be able to…

  • Identify the characteristics of a basic experiment
  • Describe causality in experimental design
  • Discuss the relationship between dependent and independent variables in experiments
  • Explain the links between experiments and generalizability of results
  • Describe advantages and disadvantages of experimental designs

The basics of experiments

The first experiment I can remember using was for my fourth grade science fair. I wondered if latex- or oil-based paint would hold up to sunlight better. So, I went to the hardware store and got a few small cans of paint and two sets of wooden paint sticks. I painted one with oil-based paint and the other with latex-based paint of different colors and put them in a sunny spot in the back yard. My hypothesis was that the oil-based paint would fade the most and that more fading would happen the longer I left the paint sticks out. (I know, it’s obvious, but I was only 10.)

I checked in on the paint sticks every few days for a month and wrote down my observations. The first part of my hypothesis ended up being wrong—it was actually the latex-based paint that faded the most. But the second part was right, and the paint faded more and more over time. This is a simple example, of course—experiments get a heck of a lot more complex than this when we’re talking about real research.

Merriam-Webster defines an experiment   as “an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.” Each of these three components of the definition will come in handy as we go through the different types of experimental design in this chapter. Most of us probably think of the physical sciences when we think of experiments, and for good reason—these experiments can be pretty flashy! But social science and psychological research follow the same scientific methods, as we’ve discussed in this book.

As the video discusses, experiments can be used in social sciences just like they can in physical sciences. It makes sense to use an experiment when you want to determine the cause of a phenomenon with as much accuracy as possible. Some types of experimental designs do this more precisely than others, as we’ll see throughout the chapter. If you’ll remember back to Chapter 11  and the discussion of validity, experiments are the best way to ensure internal validity, or the extent to which a change in your independent variable causes a change in your dependent variable.

Experimental designs for research projects are most appropriate when trying to uncover or test a hypothesis about the cause of a phenomenon, so they are best for explanatory research questions. As we’ll learn throughout this chapter, different circumstances are appropriate for different types of experimental designs. Each type of experimental design has advantages and disadvantages, and some are better at controlling the effect of extraneous variables —those variables and characteristics that have an effect on your dependent variable, but aren’t the primary variable whose influence you’re interested in testing. For example, in a study that tries to determine whether aspirin lowers a person’s risk of a fatal heart attack, a person’s race would likely be an extraneous variable because you primarily want to know the effect of aspirin.

In practice, many types of experimental designs can be logistically challenging and resource-intensive. As practitioners, the likelihood that we will be involved in some of the types of experimental designs discussed in this chapter is fairly low. However, it’s important to learn about these methods, even if we might not ever use them, so that we can be thoughtful consumers of research that uses experimental designs.

While we might not use all of these types of experimental designs, many of us will engage in evidence-based practice during our time as social workers. A lot of research developing evidence-based practice, which has a strong emphasis on generalizability, will use experimental designs. You’ve undoubtedly seen one or two in your literature search so far.

The logic of experimental design

How do we know that one phenomenon causes another? The complexity of the social world in which we practice and conduct research means that causes of social problems are rarely cut and dry. Uncovering explanations for social problems is key to helping clients address them, and experimental research designs are one road to finding answers.

As you read about in Chapter 8 (and as we’ll discuss again in Chapter 15 ), just because two phenomena are related in some way doesn’t mean that one causes the other. Ice cream sales increase in the summer, and so does the rate of violent crime; does that mean that eating ice cream is going to make me murder someone? Obviously not, because ice cream is great. The reality of that relationship is far more complex—it could be that hot weather makes people more irritable and, at times, violent, while also making people want ice cream. More likely, though, there are other social factors not accounted for in the way we just described this relationship.

Experimental designs can help clear up at least some of this fog by allowing researchers to isolate the effect of interventions on dependent variables by controlling extraneous variables . In true experimental design (discussed in the next section) and some quasi-experimental designs, researchers accomplish this w ith the control group and the experimental group . (The experimental group is sometimes called the “treatment group,” but we will call it the experimental group in this chapter.) The control group does not receive the intervention you are testing (they may receive no intervention or what is known as “treatment as usual”), while the experimental group does. (You will hopefully remember our earlier discussion of control variables in Chapter 8 —conceptually, the use of the word “control” here is the same.)

experimental design 3 variables

In a well-designed experiment, your control group should look almost identical to your experimental group in terms of demographics and other relevant factors. What if we want to know the effect of CBT on social anxiety, but we have learned in prior research that men tend to have a more difficult time overcoming social anxiety? We would want our control and experimental groups to have a similar gender mix because it would limit the effect of gender on our results, since ostensibly, both groups’ results would be affected by gender in the same way. If your control group has 5 women, 6 men, and 4 non-binary people, then your experimental group should be made up of roughly the same gender balance to help control for the influence of gender on the outcome of your intervention. (In reality, the groups should be similar along other dimensions, as well, and your group will likely be much larger.) The researcher will use the same outcome measures for both groups and compare them, and assuming the experiment was designed correctly, get a pretty good answer about whether the intervention had an effect on social anxiety.

You will also hear people talk about comparison groups , which are similar to control groups. The primary difference between the two is that a control group is populated using random assignment, but a comparison group is not. Random assignment entails using a random process to decide which participants are put into the control or experimental group (which participants receive an intervention and which do not). By randomly assigning participants to a group, you can reduce the effect of extraneous variables on your research because there won’t be a systematic difference between the groups.

Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other related fields. Random sampling also helps a great deal with generalizability , whereas random assignment increases internal validity .

We have already learned about internal validity in Chapter 11 . The use of an experimental design will bolster internal validity since it works to isolate causal relationships. As we will see in the coming sections, some types of experimental design do this more effectively than others. It’s also worth considering that true experiments, which most effectively show causality , are often difficult and expensive to implement. Although other experimental designs aren’t perfect, they still produce useful, valid evidence and may be more feasible to carry out.

Key Takeaways

  • Experimental designs are useful for establishing causality, but some types of experimental design do this better than others.
  • Experiments help researchers isolate the effect of the independent variable on the dependent variable by controlling for the effect of extraneous variables .
  • Experiments use a control/comparison group and an experimental group to test the effects of interventions. These groups should be as similar to each other as possible in terms of demographics and other relevant factors.
  • True experiments have control groups with randomly assigned participants, while other types of experiments have comparison groups to which participants are not randomly assigned.
  • Think about the research project you’ve been designing so far. How might you use a basic experiment to answer your question? If your question isn’t explanatory, try to formulate a new explanatory question and consider the usefulness of an experiment.
  • Why is establishing a simple relationship between two variables not indicative of one causing the other?

13.2 True experimental design

  • Describe a true experimental design in social work research
  • Understand the different types of true experimental designs
  • Determine what kinds of research questions true experimental designs are suited for
  • Discuss advantages and disadvantages of true experimental designs

True experimental design , often considered to be the “gold standard” in research designs, is thought of as one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity and its ability to establish ( causality ) through treatment manipulation, while controlling for the effects of extraneous variable. Sometimes the treatment level is no treatment, while other times it is simply a different treatment than that which we are trying to evaluate. For example, we might have a control group that is made up of people who will not receive any treatment for a particular condition. Or, a control group could consist of people who consent to treatment with DBT when we are testing the effectiveness of CBT.

As we discussed in the previous section, a true experiment has a control group with participants randomly assigned , and an experimental group . This is the most basic element of a true experiment. The next decision a researcher must make is when they need to gather data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle measurement another way? Below, we’ll discuss the three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.

Using a true experiment in social work research is often pretty difficult, since as I mentioned earlier, true experiments can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, is hard (and unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.

For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The folks in the experimental group will receive CBT, while the folks in the control group will receive more unstructured, basic talk therapy. These designs, as we talked about above, are best suited for explanatory research questions.

Before we get started, take a look at the table below. When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the experiment. Table 13.1 starts us off by laying out what each of the abbreviations mean.

Table 13.1 Experimental research design notations
R Randomly assigned group (control/comparison or experimental)
O Observation/measurement taken of dependent variable
X Intervention or treatment
X Experimental or new intervention
X Typical intervention/treatment as usual
A, B, C, etc. Denotes different groups (control/comparison and experimental)

Pretest and post-test control group design

In pretest and post-test control group design , participants are given a pretest of some kind to measure their baseline state before their participation in an intervention. In our social anxiety experiment, we would have participants in both the experimental and control groups complete some measure of social anxiety—most likely an established scale and/or a structured interview—before they start their treatment. As part of the experiment, we would have a defined time period during which the treatment would take place (let’s say 12 weeks, just for illustration). At the end of 12 weeks, we would give both groups the same measure as a post-test .

experimental design 3 variables

In the diagram, RA (random assignment group A) is the experimental group and RB is the control group. O 1 denotes the pre-test, X e denotes the experimental intervention, and O 2 denotes the post-test. Let’s look at this diagram another way, using the example of CBT for social anxiety that we’ve been talking about.

experimental design 3 variables

In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way, with X i denoting treatment as usual (Figure 13.3).

experimental design 3 variables

Hopefully, these diagrams provide you a visualization of how this type of experiment establishes time order , a key component of a causal relationship. Did the change occur after the intervention? Assuming there is a change in the scores between the pretest and post-test, we would be able to say that yes, the change did occur after the intervention. Causality can’t exist if the change happened before the intervention—this would mean that something else led to the change, not our intervention.

Post-test only control group design

Post-test only control group design involves only giving participants a post-test, just like it sounds (Figure 13.4).

experimental design 3 variables

But why would you use this design instead of using a pretest/post-test design? One reason could be the testing effect that can happen when research participants take a pretest. In research, the testing effect refers to “measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself” (Engel & Schutt, 2017, p. 444) [1] (When we say “measurement error,” all we mean is the accuracy of the way we measure the dependent variable.) Figure 13.4 is a visualization of this type of experiment. The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome.

Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the post-test, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is.

However, without a baseline measurement establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. You must balance this consideration with the benefits of this type of design.

Solomon four group design

One way we can possibly measure how much the testing effect might change the results of the experiment is with the Solomon four group design. Basically, as part of this experiment, you have two control groups and two experimental groups. The first pair of groups receives both a pretest and a post-test. The other pair of groups receives only a post-test (Figure 13.5). This design helps address the problem of establishing time order in post-test only control group designs.

experimental design 3 variables

For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our post-test measures, and groups C and D would take only our post-test measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.

Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.

  • True experimental design is best suited for explanatory research questions.
  • True experiments require random assignment of participants to control and experimental groups.
  • Pretest/post-test research design involves two points of measurement—one pre-intervention and one post-intervention.
  • Post-test only research design involves only one point of measurement—post-intervention. It is a useful design to minimize the effect of testing effects on our results.
  • Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a post-test, while the other receives only a post-test. This can help uncover the influence of testing effects.
  • Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry your true experimental design in the real-world as a student researcher?
  • What hypothesis(es) would you test using this true experiment?

13.4 Quasi-experimental designs

  • Describe a quasi-experimental design in social work research
  • Understand the different types of quasi-experimental designs
  • Determine what kinds of research questions quasi-experimental designs are suited for
  • Discuss advantages and disadvantages of quasi-experimental designs

Quasi-experimental designs are a lot more common in social work research than true experimental designs. Although quasi-experiments don’t do as good a job of giving us robust proof of causality , they still allow us to establish time order , which is a key element of causality. The prefix quasi means “resembling,” so quasi-experimental research is research that resembles experimental research, but is not true experimental research. Nonetheless, given proper research design, quasi-experiments can still provide extremely rigorous and useful results.

There are a few key differences between true experimental and quasi-experimental research. The primary difference between quasi-experimental research and true experimental research is that quasi-experimental research does not involve random assignment to control and experimental groups. Instead, we talk about comparison groups in quasi-experimental research instead. As a result, these types of experiments don’t control the effect of extraneous variables as well as a true experiment.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention.  We’re able to eliminate some threats to internal validity, but we can’t do this as effectively as we can with a true experiment.  Realistically, our CBT-social anxiety project is likely to be a quasi experiment, based on the resources and participant pool we’re likely to have available. 

It’s important to note that not all quasi-experimental designs have a comparison group.  There are many different kinds of quasi-experiments, but we will discuss the three main types below: nonequivalent comparison group designs, time series designs, and ex post facto comparison group designs.

Nonequivalent comparison group design

You will notice that this type of design looks extremely similar to the pretest/post-test design that we discussed in section 13.3. But instead of random assignment to control and experimental groups, researchers use other methods to construct their comparison and experimental groups. A diagram of this design will also look very similar to pretest/post-test design, but you’ll notice we’ve removed the “R” from our groups, since they are not randomly assigned (Figure 13.6).

experimental design 3 variables

Researchers using this design select a comparison group that’s as close as possible based on relevant factors to their experimental group. Engel and Schutt (2017) [2] identify two different selection methods:

  • Individual matching : Researchers take the time to match individual cases in the experimental group to similar cases in the comparison group. It can be difficult, however, to match participants on all the variables you want to control for.
  • Aggregate matching : Instead of trying to match individual participants to each other, researchers try to match the population profile of the comparison and experimental groups. For example, researchers would try to match the groups on average age, gender balance, or median income. This is a less resource-intensive matching method, but researchers have to ensure that participants aren’t choosing which group (comparison or experimental) they are a part of.

As we’ve already talked about, this kind of design provides weaker evidence that the intervention itself leads to a change in outcome. Nonetheless, we are still able to establish time order using this method, and can thereby show an association between the intervention and the outcome. Like true experimental designs, this type of quasi-experimental design is useful for explanatory research questions.

What might this look like in a practice setting? Let’s say you’re working at an agency that provides CBT and other types of interventions, and you have identified a group of clients who are seeking help for social anxiety, as in our earlier example. Once you’ve obtained consent from your clients, you can create a comparison group using one of the matching methods we just discussed. If the group is small, you might match using individual matching, but if it’s larger, you’ll probably sort people by demographics to try to get similar population profiles. (You can do aggregate matching more easily when your agency has some kind of electronic records or database, but it’s still possible to do manually.)

Time series design

Another type of quasi-experimental design is a time series design. Unlike other types of experimental design, time series designs do not have a comparison group. A time series is a set of measurements taken at intervals over a period of time (Figure 13.7). Proper time series design should include at least three pre- and post-intervention measurement points. While there are a few types of time series designs, we’re going to focus on the most common: interrupted time series design.

experimental design 3 variables

But why use this method? Here’s an example. Let’s think about elementary student behavior throughout the school year. As anyone with children or who is a teacher knows, kids get very excited and animated around holidays, days off, or even just on a Friday afternoon. This fact might mean that around those times of year, there are more reports of disruptive behavior in classrooms. What if we took our one and only measurement in mid-December? It’s possible we’d see a higher-than-average rate of disruptive behavior reports, which could bias our results if our next measurement is around a time of year students are in a different, less excitable frame of mind. When we take multiple measurements throughout the first half of the school year, we can establish a more accurate baseline for the rate of these reports by looking at the trend over time.

We may want to test the effect of extended recess times in elementary school on reports of disruptive behavior in classrooms. When students come back after the winter break, the school extends recess by 10 minutes each day (the intervention), and the researchers start tracking the monthly reports of disruptive behavior again. These reports could be subject to the same fluctuations as the pre-intervention reports, and so we once again take multiple measurements over time to try to control for those fluctuations.

This method improves the extent to which we can establish causality because we are accounting for a major extraneous variable in the equation—the passage of time. On its own, it does not allow us to account for other extraneous variables, but it does establish time order and association between the intervention and the trend in reports of disruptive behavior. Finding a stable condition before the treatment that changes after the treatment is evidence for causality between treatment and outcome.

Ex post facto comparison group design

Ex post facto (Latin for “after the fact”) designs are extremely similar to nonequivalent comparison group designs. There are still comparison and experimental groups, pretest and post-test measurements, and an intervention. But in ex post facto designs, participants are assigned to the comparison and experimental groups once the intervention has already happened. This type of design often occurs when interventions are already up and running at an agency and the agency wants to assess effectiveness based on people who have already completed treatment.

In most clinical agency environments, social workers conduct both initial and exit assessments, so there are usually some kind of pretest and post-test measures available. We also typically collect demographic information about our clients, which could allow us to try to use some kind of matching to construct comparison and experimental groups.

In terms of internal validity and establishing causality, ex post facto designs are a bit of a mixed bag. The ability to establish causality depends partially on the ability to construct comparison and experimental groups that are demographically similar so we can control for these extraneous variables .

Quasi-experimental designs are common in social work intervention research because, when designed correctly, they balance the intense resource needs of true experiments with the realities of research in practice. They still offer researchers tools to gather robust evidence about whether interventions are having positive effects for clients.

  • Quasi-experimental designs are similar to true experiments, but do not require random assignment to experimental and control groups.
  • In quasi-experimental projects, the group not receiving the treatment is called the comparison group, not the control group.
  • Nonequivalent comparison group design is nearly identical to pretest/post-test experimental design, but participants are not randomly assigned to the experimental and control groups. As a result, this design provides slightly less robust evidence for causality.
  • Nonequivalent groups can be constructed by individual matching or aggregate matching .
  • Time series design does not have a control or experimental group, and instead compares the condition of participants before and after the intervention by measuring relevant factors at multiple points in time. This allows researchers to mitigate the error introduced by the passage of time.
  • Ex post facto comparison group designs are also similar to true experiments, but experimental and comparison groups are constructed after the intervention is over. This makes it more difficult to control for the effect of extraneous variables, but still provides useful evidence for causality because it maintains the time order of the experiment.
  • Think back to the experiment you considered for your research project in Section 13.3. Now that you know more about quasi-experimental designs, do you still think it’s a true experiment? Why or why not?
  • What should you consider when deciding whether an experimental or quasi-experimental design would be more feasible or fit your research question better?

13.5 Non-experimental designs

  • Describe non-experimental designs in social work research
  • Discuss how non-experimental research differs from true and quasi-experimental research
  • Demonstrate an understanding the different types of non-experimental designs
  • Determine what kinds of research questions non-experimental designs are suited for
  • Discuss advantages and disadvantages of non-experimental designs

The previous sections have laid out the basics of some rigorous approaches to establish that an intervention is responsible for changes we observe in research participants. This type of evidence is extremely important to build an evidence base for social work interventions, but it’s not the only type of evidence to consider. We will discuss qualitative methods, which provide us with rich, contextual information, in Part 4 of this text. The designs we’ll talk about in this section are sometimes used in qualitative research  but in keeping with our discussion of experimental design so far, we’re going to stay in the quantitative research realm for now. Non-experimental is also often a stepping stone for more rigorous experimental design in the future, as it can help test the feasibility of your research.

In general, non-experimental designs do not strongly support causality and don’t address threats to internal validity. However, that’s not really what they’re intended for. Non-experimental designs are useful for a few different types of research, including explanatory questions in program evaluation. Certain types of non-experimental design are also helpful for researchers when they are trying to develop a new assessment or scale. Other times, researchers or agency staff did not get a chance to gather any assessment information before an intervention began, so a pretest/post-test design is not possible.

A genderqueer person sitting on a couch, talking to a therapist in a brightly-lit room

A significant benefit of these types of designs is that they’re pretty easy to execute in a practice or agency setting. They don’t require a comparison or control group, and as Engel and Schutt (2017) [3] point out, they “flow from a typical practice model of assessment, intervention, and evaluating the impact of the intervention” (p. 177). Thus, these designs are fairly intuitive for social workers, even when they aren’t expert researchers. Below, we will go into some detail about the different types of non-experimental design.

One group pretest/post-test design

Also known as a before-after one-group design, this type of research design does not have a comparison group and everyone who participates in the research receives the intervention (Figure 13.8). This is a common type of design in program evaluation in the practice world. Controlling for extraneous variables is difficult or impossible in this design, but given that it is still possible to establish some measure of time order, it does provide weak support for causality.

experimental design 3 variables

Imagine, for example, a researcher who is interested in the effectiveness of an anti-drug education program on elementary school students’ attitudes toward illegal drugs. The researcher could assess students’ attitudes about illegal drugs (O 1 ), implement the anti-drug program (X), and then immediately after the program ends, the researcher could once again measure students’ attitudes toward illegal drugs (O 2 ). You can see how this would be relatively simple to do in practice, and have probably been involved in this type of research design yourself, even if informally. But hopefully, you can also see that this design would not provide us with much evidence for causality because we have no way of controlling for the effect of extraneous variables. A lot of things could have affected any change in students’ attitudes—maybe girls already had different attitudes about illegal drugs than children of other genders, and when we look at the class’s results as a whole, we couldn’t account for that influence using this design.

All of that doesn’t mean these results aren’t useful, however. If we find that children’s attitudes didn’t change at all after the drug education program, then we need to think seriously about how to make it more effective or whether we should be using it at all. (This immediate, practical application of our results highlights a key difference between program evaluation and research, which we will discuss in Chapter 23 .)

After-only design

As the name suggests, this type of non-experimental design involves measurement only after an intervention. There is no comparison or control group, and everyone receives the intervention. I have seen this design repeatedly in my time as a program evaluation consultant for nonprofit organizations, because often these organizations realize too late that they would like to or need to have some sort of measure of what effect their programs are having.

Because there is no pretest and no comparison group, this design is not useful for supporting causality since we can’t establish the time order and we can’t control for extraneous variables. However, that doesn’t mean it’s not useful at all! Sometimes, agencies need to gather information about how their programs are functioning. A classic example of this design is satisfaction surveys—realistically, these can only be administered after a program or intervention. Questions regarding satisfaction, ease of use or engagement, or other questions that don’t involve comparisons are best suited for this type of design.

Static-group design

A final type of non-experimental research is the static-group design. In this type of research, there are both comparison and experimental groups, which are not randomly assigned. There is no pretest, only a post-test, and the comparison group has to be constructed by the researcher. Sometimes, researchers will use matching techniques to construct the groups, but often, the groups are constructed by convenience of who is being served at the agency.

Non-experimental research designs are easy to execute in practice, but we must be cautious about drawing causal conclusions from the results. A positive result may still suggest that we should continue using a particular intervention (and no result or a negative result should make us reconsider whether we should use that intervention at all). You have likely seen non-experimental research in your daily life or at your agency, and knowing the basics of how to structure such a project will help you ensure you are providing clients with the best care possible.

  • Non-experimental designs are useful for describing phenomena, but cannot demonstrate causality.
  • After-only designs are often used in agency and practice settings because practitioners are often not able to set up pre-test/post-test designs.
  • Non-experimental designs are useful for explanatory questions in program evaluation and are helpful for researchers when they are trying to develop a new assessment or scale.
  • Non-experimental designs are well-suited to qualitative methods.
  • If you were to use a non-experimental design for your research project, which would you choose? Why?
  • Have you conducted non-experimental research in your practice or professional life? Which type of non-experimental design was it?

13.6 Critical, ethical, and cultural considerations

  • Describe critiques of experimental design
  • Identify ethical issues in the design and execution of experiments
  • Identify cultural considerations in experimental design

As I said at the outset, experiments, and especially true experiments, have long been seen as the gold standard to gather scientific evidence. When it comes to research in the biomedical field and other physical sciences, true experiments are subject to far less nuance than experiments in the social world. This doesn’t mean they are easier—just subject to different forces. However, as a society, we have placed the most value on quantitative evidence obtained through empirical observation and especially experimentation.

Major critiques of experimental designs tend to focus on true experiments, especially randomized controlled trials (RCTs), but many of these critiques can be applied to quasi-experimental designs, too. Some researchers, even in the biomedical sciences, question the view that RCTs are inherently superior to other types of quantitative research designs. RCTs are far less flexible and have much more stringent requirements than other types of research. One seemingly small issue, like incorrect information about a research participant, can derail an entire RCT. RCTs also cost a great deal of money to implement and don’t reflect “real world” conditions. The cost of true experimental research or RCTs also means that some communities are unlikely to ever have access to these research methods. It is then easy for people to dismiss their research findings because their methods are seen as “not rigorous.”

Obviously, controlling outside influences is important for researchers to draw strong conclusions, but what if those outside influences are actually important for how an intervention works? Are we missing really important information by focusing solely on control in our research? Is a treatment going to work the same for white women as it does for indigenous women? With the myriad effects of our societal structures, you should be very careful ever assuming this will be the case. This doesn’t mean that cultural differences will negate the effect of an intervention; instead, it means that you should remember to practice cultural humility implementing all interventions, even when we “know” they work.

How we build evidence through experimental research reveals a lot about our values and biases, and historically, much experimental research has been conducted on white people, and especially white men. [4] This makes sense when we consider the extent to which the sciences and academia have historically been dominated by white patriarchy. This is especially important for marginalized groups that have long been ignored in research literature, meaning they have also been ignored in the development of interventions and treatments that are accepted as “effective.” There are examples of marginalized groups being experimented on without their consent, like the Tuskegee Experiment or Nazi experiments on Jewish people during World War II. We cannot ignore the collective consciousness situations like this can create about experimental research for marginalized groups.

None of this is to say that experimental research is inherently bad or that you shouldn’t use it. Quite the opposite—use it when you can, because there are a lot of benefits, as we learned throughout this chapter. As a social work researcher, you are uniquely positioned to conduct experimental research while applying social work values and ethics to the process and be a leader for others to conduct research in the same framework. It can conflict with our professional ethics, especially respect for persons and beneficence, if we do not engage in experimental research with our eyes wide open. We also have the benefit of a great deal of practice knowledge that researchers in other fields have not had the opportunity to get. As with all your research, always be sure you are fully exploring the limitations of the research.

  • While true experimental research gathers strong evidence, it can also be inflexible, expensive, and overly simplistic in terms of important social forces that affect the resources.
  • Marginalized communities’ past experiences with experimental research can affect how they respond to research participation.
  • Social work researchers should use both their values and ethics, and their practice experiences, to inform research and push other researchers to do the same.
  • Think back to the true experiment you sketched out in the exercises for Section 13.3. Are there cultural or historical considerations you hadn’t thought of with your participant group? What are they? Does this change the type of experiment you would want to do?
  • How can you as a social work researcher encourage researchers in other fields to consider social work ethics and values in their experimental research?

Media Attributions

  • Being kinder to yourself © Evgenia Makarova is licensed under a CC BY-NC-ND (Attribution NonCommercial NoDerivatives) license
  • Original by author is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Original by author. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Orginal by author. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • therapist © Zackary Drucker is licensed under a CC BY-NC-ND (Attribution NonCommercial NoDerivatives) license
  • nonexper-pretest-posttest is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Engel, R. & Schutt, R. (2016). The practice of research in social work. Thousand Oaks, CA: SAGE Publications, Inc. ↵
  • Sullivan, G. M. (2011). Getting off the “gold standard”: Randomized controlled trials and education research. Journal of Graduate Medical Education ,  3 (3), 285-289. ↵

an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.

explains why particular phenomena work in the way that they do; answers “why” questions

variables and characteristics that have an effect on your outcome, but aren't the primary variable whose influence you're interested in testing.

the group of participants in our study who do not receive the intervention we are researching in experiments with random assignment

in experimental design, the group of participants in our study who do receive the intervention we are researching

the group of participants in our study who do not receive the intervention we are researching in experiments without random assignment

using a random process to decide which participants are tested in which conditions

The ability to apply research findings beyond the study sample to some broader population,

Ability to say that one variable "causes" something to happen to another variable. Very important to assess when thinking about studies that examine causation such as experimental or quasi-experimental designs.

the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief

An experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed

a type of experimental design in which participants are randomly assigned to control and experimental groups, one group receives an intervention, and both groups receive pre- and post-test assessments

A measure of a participant's condition before they receive an intervention or treatment.

A measure of a participant's condition after an intervention or, if they are part of the control/comparison group, at the end of an experiment.

A demonstration that a change occurred after an intervention. An important criterion for establishing causality.

an experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a post-test assessment

The measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself

a subtype of experimental design that is similar to a true experiment, but does not have randomly assigned control and treatment groups

In nonequivalent comparison group designs, the process by which researchers match individual cases in the experimental group to similar cases in the comparison group.

In nonequivalent comparison group designs, the process in which researchers match the population profile of the comparison and experimental groups.

a set of measurements taken at intervals over a period of time

Research that involves the use of data that represents human expression through words, pictures, movies, performance and other artifacts.

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book

Experimental design: Guide, steps, examples

Last updated

27 April 2023

Reviewed by

Miroslav Damyanov

Short on time? Get an AI generated summary of this article instead

Experimental research design is a scientific framework that allows you to manipulate one or more variables while controlling the test environment. 

When testing a theory or new product, it can be helpful to have a certain level of control and manipulate variables to discover different outcomes. You can use these experiments to determine cause and effect or study variable associations. 

This guide explores the types of experimental design, the steps in designing an experiment, and the advantages and limitations of experimental design. 

Make research less tedious

Dovetail streamlines research to help you uncover and share actionable insights

  • What is experimental research design?

You can determine the relationship between each of the variables by: 

Manipulating one or more independent variables (i.e., stimuli or treatments)

Applying the changes to one or more dependent variables (i.e., test groups or outcomes)

With the ability to analyze the relationship between variables and using measurable data, you can increase the accuracy of the result. 

What is a good experimental design?

A good experimental design requires: 

Significant planning to ensure control over the testing environment

Sound experimental treatments

Properly assigning subjects to treatment groups

Without proper planning, unexpected external variables can alter an experiment's outcome. 

To meet your research goals, your experimental design should include these characteristics:

Provide unbiased estimates of inputs and associated uncertainties

Enable the researcher to detect differences caused by independent variables

Include a plan for analysis and reporting of the results

Provide easily interpretable results with specific conclusions

What's the difference between experimental and quasi-experimental design?

The major difference between experimental and quasi-experimental design is the random assignment of subjects to groups. 

A true experiment relies on certain controls. Typically, the researcher designs the treatment and randomly assigns subjects to control and treatment groups. 

However, these conditions are unethical or impossible to achieve in some situations.

When it's unethical or impractical to assign participants randomly, that’s when a quasi-experimental design comes in. 

This design allows researchers to conduct a similar experiment by assigning subjects to groups based on non-random criteria. 

Another type of quasi-experimental design might occur when the researcher doesn't have control over the treatment but studies pre-existing groups after they receive different treatments.

When can a researcher conduct experimental research?

Various settings and professions can use experimental research to gather information and observe behavior in controlled settings. 

Basically, a researcher can conduct experimental research any time they want to test a theory with variable and dependent controls. 

Experimental research is an option when the project includes an independent variable and a desire to understand the relationship between cause and effect. 

  • The importance of experimental research design

Experimental research enables researchers to conduct studies that provide specific, definitive answers to questions and hypotheses. 

Researchers can test Independent variables in controlled settings to:

Test the effectiveness of a new medication

Design better products for consumers

Answer questions about human health and behavior

Developing a quality research plan means a researcher can accurately answer vital research questions with minimal error. As a result, definitive conclusions can influence the future of the independent variable. 

Types of experimental research designs

There are three main types of experimental research design. The research type you use will depend on the criteria of your experiment, your research budget, and environmental limitations. 

Pre-experimental research design

A pre-experimental research study is a basic observational study that monitors independent variables’ effects. 

During research, you observe one or more groups after applying a treatment to test whether the treatment causes any change. 

The three subtypes of pre-experimental research design are:

One-shot case study research design

This research method introduces a single test group to a single stimulus to study the results at the end of the application. 

After researchers presume the stimulus or treatment has caused changes, they gather results to determine how it affects the test subjects. 

One-group pretest-posttest design

This method uses a single test group but includes a pretest study as a benchmark. The researcher applies a test before and after the group’s exposure to a specific stimulus. 

Static group comparison design

This method includes two or more groups, enabling the researcher to use one group as a control. They apply a stimulus to one group and leave the other group static. 

A posttest study compares the results among groups. 

True experimental research design

A true experiment is the most common research method. It involves statistical analysis to prove or disprove a specific hypothesis . 

Under completely experimental conditions, researchers expose participants in two or more randomized groups to different stimuli. 

Random selection removes any potential for bias, providing more reliable results. 

These are the three main sub-groups of true experimental research design:

Posttest-only control group design

This structure requires the researcher to divide participants into two random groups. One group receives no stimuli and acts as a control while the other group experiences stimuli.

Researchers perform a test at the end of the experiment to observe the stimuli exposure results.

Pretest-posttest control group design

This test also requires two groups. It includes a pretest as a benchmark before introducing the stimulus. 

The pretest introduces multiple ways to test subjects. For instance, if the control group also experiences a change, it reveals that taking the test twice changes the results.

Solomon four-group design

This structure divides subjects into two groups, with two as control groups. Researchers assign the first control group a posttest only and the second control group a pretest and a posttest. 

The two variable groups mirror the control groups, but researchers expose them to stimuli. The ability to differentiate between groups in multiple ways provides researchers with more testing approaches for data-based conclusions. 

Quasi-experimental research design

Although closely related to a true experiment, quasi-experimental research design differs in approach and scope. 

Quasi-experimental research design doesn’t have randomly selected participants. Researchers typically divide the groups in this research by pre-existing differences. 

Quasi-experimental research is more common in educational studies, nursing, or other research projects where it's not ethical or practical to use randomized subject groups.

  • 5 steps for designing an experiment

Experimental research requires a clearly defined plan to outline the research parameters and expected goals. 

Here are five key steps in designing a successful experiment:

Step 1: Define variables and their relationship

Your experiment should begin with a question: What are you hoping to learn through your experiment? 

The relationship between variables in your study will determine your answer.

Define the independent variable (the intended stimuli) and the dependent variable (the expected effect of the stimuli). After identifying these groups, consider how you might control them in your experiment. 

Could natural variations affect your research? If so, your experiment should include a pretest and posttest. 

Step 2: Develop a specific, testable hypothesis

With a firm understanding of the system you intend to study, you can write a specific, testable hypothesis. 

What is the expected outcome of your study? 

Develop a prediction about how the independent variable will affect the dependent variable. 

How will the stimuli in your experiment affect your test subjects? 

Your hypothesis should provide a prediction of the answer to your research question . 

Step 3: Design experimental treatments to manipulate your independent variable

Depending on your experiment, your variable may be a fixed stimulus (like a medical treatment) or a variable stimulus (like a period during which an activity occurs). 

Determine which type of stimulus meets your experiment’s needs and how widely or finely to vary your stimuli. 

Step 4: Assign subjects to groups

When you have a clear idea of how to carry out your experiment, you can determine how to assemble test groups for an accurate study. 

When choosing your study groups, consider: 

The size of your experiment

Whether you can select groups randomly

Your target audience for the outcome of the study

You should be able to create groups with an equal number of subjects and include subjects that match your target audience. Remember, you should assign one group as a control and use one or more groups to study the effects of variables. 

Step 5: Plan how to measure your dependent variable

This step determines how you'll collect data to determine the study's outcome. You should seek reliable and valid measurements that minimize research bias or error. 

You can measure some data with scientific tools, while you’ll need to operationalize other forms to turn them into measurable observations.

  • Advantages of experimental research

Experimental research is an integral part of our world. It allows researchers to conduct experiments that answer specific questions. 

While researchers use many methods to conduct different experiments, experimental research offers these distinct benefits:

Researchers can determine cause and effect by manipulating variables.

It gives researchers a high level of control.

Researchers can test multiple variables within a single experiment.

All industries and fields of knowledge can use it. 

Researchers can duplicate results to promote the validity of the study .

Replicating natural settings rapidly means immediate research.

Researchers can combine it with other research methods.

It provides specific conclusions about the validity of a product, theory, or idea.

  • Disadvantages (or limitations) of experimental research

Unfortunately, no research type yields ideal conditions or perfect results. 

While experimental research might be the right choice for some studies, certain conditions could render experiments useless or even dangerous. 

Before conducting experimental research, consider these disadvantages and limitations:

Required professional qualification

Only competent professionals with an academic degree and specific training are qualified to conduct rigorous experimental research. This ensures results are unbiased and valid. 

Limited scope

Experimental research may not capture the complexity of some phenomena, such as social interactions or cultural norms. These are difficult to control in a laboratory setting.

Resource-intensive

Experimental research can be expensive, time-consuming, and require significant resources, such as specialized equipment or trained personnel.

Limited generalizability

The controlled nature means the research findings may not fully apply to real-world situations or people outside the experimental setting.

Practical or ethical concerns

Some experiments may involve manipulating variables that could harm participants or violate ethical guidelines . 

Researchers must ensure their experiments do not cause harm or discomfort to participants. 

Sometimes, recruiting a sample of people to randomly assign may be difficult. 

  • Experimental research design example

Experiments across all industries and research realms provide scientists, developers, and other researchers with definitive answers. These experiments can solve problems, create inventions, and heal illnesses. 

Product design testing is an excellent example of experimental research. 

A company in the product development phase creates multiple prototypes for testing. With a randomized selection, researchers introduce each test group to a different prototype. 

When groups experience different product designs , the company can assess which option most appeals to potential customers. 

Experimental research design provides researchers with a controlled environment to conduct experiments that evaluate cause and effect. 

Using the five steps to develop a research plan ensures you anticipate and eliminate external variables while answering life’s crucial questions.

Should you be using a customer insights hub?

Do you want to discover previous research faster?

Do you share your research findings with others?

Do you analyze research data?

Start for free today, add your research, and get to key insights faster

Editor’s picks

Last updated: 18 April 2023

Last updated: 27 February 2023

Last updated: 6 February 2023

Last updated: 5 February 2023

Last updated: 16 April 2023

Last updated: 9 March 2023

Last updated: 30 April 2024

Last updated: 12 December 2023

Last updated: 11 March 2024

Last updated: 4 July 2024

Last updated: 6 March 2024

Last updated: 5 March 2024

Last updated: 13 May 2024

Latest articles

Related topics, .css-je19u9{-webkit-align-items:flex-end;-webkit-box-align:flex-end;-ms-flex-align:flex-end;align-items:flex-end;display:-webkit-box;display:-webkit-flex;display:-ms-flexbox;display:flex;-webkit-flex-direction:row;-ms-flex-direction:row;flex-direction:row;-webkit-box-flex-wrap:wrap;-webkit-flex-wrap:wrap;-ms-flex-wrap:wrap;flex-wrap:wrap;-webkit-box-pack:center;-ms-flex-pack:center;-webkit-justify-content:center;justify-content:center;row-gap:0;text-align:center;max-width:671px;}@media (max-width: 1079px){.css-je19u9{max-width:400px;}.css-je19u9>span{white-space:pre;}}@media (max-width: 799px){.css-je19u9{max-width:400px;}.css-je19u9>span{white-space:pre;}} decide what to .css-1kiodld{max-height:56px;display:-webkit-box;display:-webkit-flex;display:-ms-flexbox;display:flex;-webkit-align-items:center;-webkit-box-align:center;-ms-flex-align:center;align-items:center;}@media (max-width: 1079px){.css-1kiodld{display:none;}} build next, decide what to build next.

  • Types of experimental

Log in or sign up

Get started for free

Statistical Design and Analysis of Biological Experiments

Chapter 1 principles of experimental design, 1.1 introduction.

The validity of conclusions drawn from a statistical analysis crucially hinges on the manner in which the data are acquired, and even the most sophisticated analysis will not rescue a flawed experiment. Planning an experiment and thinking about the details of data acquisition is so important for a successful analysis that R. A. Fisher—who single-handedly invented many of the experimental design techniques we are about to discuss—famously wrote

To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of. ( Fisher 1938 )

(Statistical) design of experiments provides the principles and methods for planning experiments and tailoring the data acquisition to an intended analysis. Design and analysis of an experiment are best considered as two aspects of the same enterprise: the goals of the analysis strongly inform an appropriate design, and the implemented design determines the possible analyses.

The primary aim of designing experiments is to ensure that valid statistical and scientific conclusions can be drawn that withstand the scrutiny of a determined skeptic. Good experimental design also considers that resources are used efficiently, and that estimates are sufficiently precise and hypothesis tests adequately powered. It protects our conclusions by excluding alternative interpretations or rendering them implausible. Three main pillars of experimental design are randomization , replication , and blocking , and we will flesh out their effects on the subsequent analysis as well as their implementation in an experimental design.

An experimental design is always tailored towards predefined (primary) analyses and an efficient analysis and unambiguous interpretation of the experimental data is often straightforward from a good design. This does not prevent us from doing additional analyses of interesting observations after the data are acquired, but these analyses can be subjected to more severe criticisms and conclusions are more tentative.

In this chapter, we provide the wider context for using experiments in a larger research enterprise and informally introduce the main statistical ideas of experimental design. We use a comparison of two samples as our main example to study how design choices affect an analysis, but postpone a formal quantitative analysis to the next chapters.

1.2 A Cautionary Tale

For illustrating some of the issues arising in the interplay of experimental design and analysis, we consider a simple example. We are interested in comparing the enzyme levels measured in processed blood samples from laboratory mice, when the sample processing is done either with a kit from a vendor A, or a kit from a competitor B. For this, we take 20 mice and randomly select 10 of them for sample preparation with kit A, while the blood samples of the remaining 10 mice are prepared with kit B. The experiment is illustrated in Figure 1.1 A and the resulting data are given in Table 1.1 .

Table 1.1: Measured enzyme levels from samples of twenty mice. Samples of ten mice each were processed using a kit of vendor A and B, respectively.
A 8.96 8.95 11.37 12.63 11.38 8.36 6.87 12.35 10.32 11.99
B 12.68 11.37 12.00 9.81 10.35 11.76 9.01 10.83 8.76 9.99

One option for comparing the two kits is to look at the difference in average enzyme levels, and we find an average level of 10.32 for vendor A and 10.66 for vendor B. We would like to interpret their difference of -0.34 as the difference due to the two preparation kits and conclude whether the two kits give equal results or if measurements based on one kit are systematically different from those based on the other kit.

Such interpretation, however, is only valid if the two groups of mice and their measurements are identical in all aspects except the sample preparation kit. If we use one strain of mice for kit A and another strain for kit B, any difference might also be attributed to inherent differences between the strains. Similarly, if the measurements using kit B were conducted much later than those using kit A, any observed difference might be attributed to changes in, e.g., mice selected, batches of chemicals used, device calibration, or any number of other influences. None of these competing explanations for an observed difference can be excluded from the given data alone, but good experimental design allows us to render them (almost) arbitrarily implausible.

A second aspect for our analysis is the inherent uncertainty in our calculated difference: if we repeat the experiment, the observed difference will change each time, and this will be more pronounced for a smaller number of mice, among others. If we do not use a sufficient number of mice in our experiment, the uncertainty associated with the observed difference might be too large, such that random fluctuations become a plausible explanation for the observed difference. Systematic differences between the two kits, of practically relevant magnitude in either direction, might then be compatible with the data, and we can draw no reliable conclusions from our experiment.

In each case, the statistical analysis—no matter how clever—was doomed before the experiment was even started, while simple ideas from statistical design of experiments would have provided correct and robust results with interpretable conclusions.

1.3 The Language of Experimental Design

By an experiment we understand an investigation where the researcher has full control over selecting and altering the experimental conditions of interest, and we only consider investigations of this type. The selected experimental conditions are called treatments . An experiment is comparative if the responses to several treatments are to be compared or contrasted. The experimental units are the smallest subdivision of the experimental material to which a treatment can be assigned. All experimental units given the same treatment constitute a treatment group . Especially in biology, we often compare treatments to a control group to which some standard experimental conditions are applied; a typical example is using a placebo for the control group, and different drugs for the other treatment groups.

The values observed are called responses and are measured on the response units ; these are often identical to the experimental units but need not be. Multiple experimental units are sometimes combined into groupings or blocks , such as mice grouped by litter, or samples grouped by batches of chemicals used for their preparation. More generally, we call any grouping of the experimental material (even with group size one) a unit .

In our example, we selected the mice, used a single sample per mouse, deliberately chose the two specific vendors, and had full control over which kit to assign to which mouse. In other words, the two kits are the treatments and the mice are the experimental units. We took the measured enzyme level of a single sample from a mouse as our response, and samples are therefore the response units. The resulting experiment is comparative, because we contrast the enzyme levels between the two treatment groups.

Three designs to determine the difference between two preparation kits A and B based on four mice. A: One sample per mouse. Comparison between averages of samples with same kit. B: Two samples per mouse treated with the same kit. Comparison between averages of mice with same kit requires averaging responses for each mouse first. C: Two samples per mouse each treated with different kit. Comparison between two samples of each mouse, with differences averaged.

Figure 1.1: Three designs to determine the difference between two preparation kits A and B based on four mice. A: One sample per mouse. Comparison between averages of samples with same kit. B: Two samples per mouse treated with the same kit. Comparison between averages of mice with same kit requires averaging responses for each mouse first. C: Two samples per mouse each treated with different kit. Comparison between two samples of each mouse, with differences averaged.

In this example, we can coalesce experimental and response units, because we have a single response per mouse and cannot distinguish a sample from a mouse in the analysis, as illustrated in Figure 1.1 A for four mice. Responses from mice with the same kit are averaged, and the kit difference is the difference between these two averages.

By contrast, if we take two samples per mouse and use the same kit for both samples, then the mice are still the experimental units, but each mouse now groups the two response units associated with it. Now, responses from the same mouse are first averaged, and these averages are used to calculate the difference between kits; even though eight measurements are available, this difference is still based on only four mice (Figure 1.1 B).

If we take two samples per mouse, but apply each kit to one of the two samples, then the samples are both the experimental and response units, while the mice are blocks that group the samples. Now, we calculate the difference between kits for each mouse, and then average these differences (Figure 1.1 C).

If we only use one kit and determine the average enzyme level, then this investigation is still an experiment, but is not comparative.

To summarize, the design of an experiment determines the logical structure of the experiment ; it consists of (i) a set of treatments (the two kits); (ii) a specification of the experimental units (animals, cell lines, samples) (the mice in Figure 1.1 A,B and the samples in Figure 1.1 C); (iii) a procedure for assigning treatments to units; and (iv) a specification of the response units and the quantity to be measured as a response (the samples and associated enzyme levels).

1.4 Experiment Validity

Before we embark on the more technical aspects of experimental design, we discuss three components for evaluating an experiment’s validity: construct validity , internal validity , and external validity . These criteria are well-established in areas such as educational and psychological research, and have more recently been discussed for animal research ( Würbel 2017 ) where experiments are increasingly scrutinized for their scientific rationale and their design and intended analyses.

1.4.1 Construct Validity

Construct validity concerns the choice of the experimental system for answering our research question. Is the system even capable of providing a relevant answer to the question?

Studying the mechanisms of a particular disease, for example, might require careful choice of an appropriate animal model that shows a disease phenotype and is accessible to experimental interventions. If the animal model is a proxy for drug development for humans, biological mechanisms must be sufficiently similar between animal and human physiologies.

Another important aspect of the construct is the quantity that we intend to measure (the measurand ), and its relation to the quantity or property we are interested in. For example, we might measure the concentration of the same chemical compound once in a blood sample and once in a highly purified sample, and these constitute two different measurands, whose values might not be comparable. Often, the quantity of interest (e.g., liver function) is not directly measurable (or even quantifiable) and we measure a biomarker instead. For example, pre-clinical and clinical investigations may use concentrations of proteins or counts of specific cell types from blood samples, such as the CD4+ cell count used as a biomarker for immune system function.

1.4.2 Internal Validity

The internal validity of an experiment concerns the soundness of the scientific rationale, statistical properties such as precision of estimates, and the measures taken against risk of bias. It refers to the validity of claims within the context of the experiment. Statistical design of experiments plays a prominent role in ensuring internal validity, and we briefly discuss the main ideas before providing the technical details and an application to our example in the subsequent sections.

Scientific Rationale and Research Question

The scientific rationale of a study is (usually) not immediately a statistical question. Translating a scientific question into a quantitative comparison amenable to statistical analysis is no small task and often requires careful consideration. It is a substantial, if non-statistical, benefit of using experimental design that we are forced to formulate a precise-enough research question and decide on the main analyses required for answering it before we conduct the experiment. For example, the question: is there a difference between placebo and drug? is insufficiently precise for planning a statistical analysis and determine an adequate experimental design. What exactly is the drug treatment? What should the drug’s concentration be and how is it administered? How do we make sure that the placebo group is comparable to the drug group in all other aspects? What do we measure and what do we mean by “difference?” A shift in average response, a fold-change, change in response before and after treatment?

The scientific rationale also enters the choice of a potential control group to which we compare responses. The quote

The deep, fundamental question in statistical analysis is ‘Compared to what?’ ( Tufte 1997 )

highlights the importance of this choice.

There are almost never enough resources to answer all relevant scientific questions. We therefore define a few questions of highest interest, and the main purpose of the experiment is answering these questions in the primary analysis . This intended analysis drives the experimental design to ensure relevant estimates can be calculated and have sufficient precision, and tests are adequately powered. This does not preclude us from conducting additional secondary analyses and exploratory analyses , but we are not willing to enlarge the experiment to ensure that strong conclusions can also be drawn from these analyses.

Risk of Bias

Experimental bias is a systematic difference in response between experimental units in addition to the difference caused by the treatments. The experimental units in the different groups are then not equal in all aspects other than the treatment applied to them. We saw several examples in Section 1.2 .

Minimizing the risk of bias is crucial for internal validity and we look at some common measures to eliminate or reduce different types of bias in Section 1.5 .

Precision and Effect Size

Another aspect of internal validity is the precision of estimates and the expected effect sizes. Is the experimental setup, in principle, able to detect a difference of relevant magnitude? Experimental design offers several methods for answering this question based on the expected heterogeneity of samples, the measurement error, and other sources of variation: power analysis is a technique for determining the number of samples required to reliably detect a relevant effect size and provide estimates of sufficient precision. More samples yield more precision and more power, but we have to be careful that replication is done at the right level: simply measuring a biological sample multiple times as in Figure 1.1 B yields more measured values, but is pseudo-replication for analyses. Replication should also ensure that the statistical uncertainties of estimates can be gauged from the data of the experiment itself, without additional untestable assumptions. Finally, the technique of blocking , shown in Figure 1.1 C, can remove a substantial proportion of the variation and thereby increase power and precision if we find a way to apply it.

1.4.3 External Validity

The external validity of an experiment concerns its replicability and the generalizability of inferences. An experiment is replicable if its results can be confirmed by an independent new experiment, preferably by a different lab and researcher. Experimental conditions in the replicate experiment usually differ from the original experiment, which provides evidence that the observed effects are robust to such changes. A much weaker condition on an experiment is reproducibility , the property that an independent researcher draws equivalent conclusions based on the data from this particular experiment, using the same analysis techniques. Reproducibility requires publishing the raw data, details on the experimental protocol, and a description of the statistical analyses, preferably with accompanying source code. Many scientific journals subscribe to reporting guidelines to ensure reproducibility and these are also helpful for planning an experiment.

A main threat to replicability and generalizability are too tightly controlled experimental conditions, when inferences only hold for a specific lab under the very specific conditions of the original experiment. Introducing systematic heterogeneity and using multi-center studies effectively broadens the experimental conditions and therefore the inferences for which internal validity is available.

For systematic heterogeneity , experimental conditions are systematically altered in addition to the treatments, and treatment differences estimated for each condition. For example, we might split the experimental material into several batches and use a different day of analysis, sample preparation, batch of buffer, measurement device, and lab technician for each batch. A more general inference is then possible if effect size, effect direction, and precision are comparable between the batches, indicating that the treatment differences are stable over the different conditions.

In multi-center experiments , the same experiment is conducted in several different labs and the results compared and merged. Multi-center approaches are very common in clinical trials and often necessary to reach the required number of patient enrollments.

Generalizability of randomized controlled trials in medicine and animal studies can suffer from overly restrictive eligibility criteria. In clinical trials, patients are often included or excluded based on co-medications and co-morbidities, and the resulting sample of eligible patients might no longer be representative of the patient population. For example, Travers et al. ( 2007 ) used the eligibility criteria of 17 random controlled trials of asthma treatments and found that out of 749 patients, only a median of 6% (45 patients) would be eligible for an asthma-related randomized controlled trial. This puts a question mark on the relevance of the trials’ findings for asthma patients in general.

1.5 Reducing the Risk of Bias

1.5.1 randomization of treatment allocation.

If systematic differences other than the treatment exist between our treatment groups, then the effect of the treatment is confounded with these other differences and our estimates of treatment effects might be biased.

We remove such unwanted systematic differences from our treatment comparisons by randomizing the allocation of treatments to experimental units. In a completely randomized design , each experimental unit has the same chance of being subjected to any of the treatments, and any differences between the experimental units other than the treatments are distributed over the treatment groups. Importantly, randomization is the only method that also protects our experiment against unknown sources of bias: we do not need to know all or even any of the potential differences and yet their impact is eliminated from the treatment comparisons by random treatment allocation.

Randomization has two effects: (i) differences unrelated to treatment become part of the ‘statistical noise’ rendering the treatment groups more similar; and (ii) the systematic differences are thereby eliminated as sources of bias from the treatment comparison.

Randomization transforms systematic variation into random variation.

In our example, a proper randomization would select 10 out of our 20 mice fully at random, such that the probability of any one mouse being picked is 1/20. These ten mice are then assigned to kit A, and the remaining mice to kit B. This allocation is entirely independent of the treatments and of any properties of the mice.

To ensure random treatment allocation, some kind of random process needs to be employed. This can be as simple as shuffling a pack of 10 red and 10 black cards or using a software-based random number generator. Randomization is slightly more difficult if the number of experimental units is not known at the start of the experiment, such as when patients are recruited for an ongoing clinical trial (sometimes called rolling recruitment ), and we want to have reasonable balance between the treatment groups at each stage of the trial.

Seemingly random assignments “by hand” are usually no less complicated than fully random assignments, but are always inferior. If surprising results ensue from the experiment, such assignments are subject to unanswerable criticism and suspicion of unwanted bias. Even worse are systematic allocations; they can only remove bias from known causes, and immediately raise red flags under the slightest scrutiny.

The Problem of Undesired Assignments

Even with a fully random treatment allocation procedure, we might end up with an undesirable allocation. For our example, the treatment group of kit A might—just by chance—contain mice that are all bigger or more active than those in the other treatment group. Statistical orthodoxy recommends using the design nevertheless, because only full randomization guarantees valid estimates of residual variance and unbiased estimates of effects. This argument, however, concerns the long-run properties of the procedure and seems of little help in this specific situation. Why should we care if the randomization yields correct estimates under replication of the experiment, if the particular experiment is jeopardized?

Another solution is to create a list of all possible allocations that we would accept and randomly choose one of these allocations for our experiment. The analysis should then reflect this restriction in the possible randomizations, which often renders this approach difficult to implement.

The most pragmatic method is to reject highly undesirable designs and compute a new randomization ( Cox 1958 ) . Undesirable allocations are unlikely to arise for large sample sizes, and we might accept a small bias in estimation for small sample sizes, when uncertainty in the estimated treatment effect is already high. In this approach, whenever we reject a particular outcome, we must also be willing to reject the outcome if we permute the treatment level labels. If we reject eight big and two small mice for kit A, then we must also reject two big and eight small mice. We must also be transparent and report a rejected allocation, so that critics may come to their own conclusions about potential biases and their remedies.

1.5.2 Blinding

Bias in treatment comparisons is also introduced if treatment allocation is random, but responses cannot be measured entirely objectively, or if knowledge of the assigned treatment affects the response. In clinical trials, for example, patients might react differently when they know to be on a placebo treatment, an effect known as cognitive bias . In animal experiments, caretakers might report more abnormal behavior for animals on a more severe treatment. Cognitive bias can be eliminated by concealing the treatment allocation from technicians or participants of a clinical trial, a technique called single-blinding .

If response measures are partially based on professional judgement (such as a clinical scale), patient or physician might unconsciously report lower scores for a placebo treatment, a phenomenon known as observer bias . Its removal requires double blinding , where treatment allocations are additionally concealed from the experimentalist.

Blinding requires randomized treatment allocation to begin with and substantial effort might be needed to implement it. Drug companies, for example, have to go to great lengths to ensure that a placebo looks, tastes, and feels similar enough to the actual drug. Additionally, blinding is often done by coding the treatment conditions and samples, and effect sizes and statistical significance are calculated before the code is revealed.

In clinical trials, double-blinding creates a conflict of interest. The attending physicians do not know which patient received which treatment, and thus accumulation of side-effects cannot be linked to any treatment. For this reason, clinical trials have a data monitoring committee not involved in the final analysis, that performs intermediate analyses of efficacy and safety at predefined intervals. If severe problems are detected, the committee might recommend altering or aborting the trial. The same might happen if one treatment already shows overwhelming evidence of superiority, such that it becomes unethical to withhold this treatment from the other patients.

1.5.3 Analysis Plan and Registration

An often overlooked source of bias has been termed the researcher degrees of freedom or garden of forking paths in the data analysis. For any set of data, there are many different options for its analysis: some results might be considered outliers and discarded, assumptions are made on error distributions and appropriate test statistics, different covariates might be included into a regression model. Often, multiple hypotheses are investigated and tested, and analyses are done separately on various (overlapping) subgroups. Hypotheses formed after looking at the data require additional care in their interpretation; almost never will \(p\) -values for these ad hoc or post hoc hypotheses be statistically justifiable. Many different measured response variables invite fishing expeditions , where patterns in the data are sought without an underlying hypothesis. Only reporting those sub-analyses that gave ‘interesting’ findings invariably leads to biased conclusions and is called cherry-picking or \(p\) -hacking (or much less flattering names).

The statistical analysis is always part of a larger scientific argument and we should consider the necessary computations in relation to building our scientific argument about the interpretation of the data. In addition to the statistical calculations, this interpretation requires substantial subject-matter knowledge and includes (many) non-statistical arguments. Two quotes highlight that experiment and analysis are a means to an end and not the end in itself.

There is a boundary in data interpretation beyond which formulas and quantitative decision procedures do not go, where judgment and style enter. ( Abelson 1995 )
Often, perfectly reasonable people come to perfectly reasonable decisions or conclusions based on nonstatistical evidence. Statistical analysis is a tool with which we support reasoning. It is not a goal in itself. ( Bailar III 1981 )

There is often a grey area between exploiting researcher degrees of freedom to arrive at a desired conclusion, and creative yet informed analyses of data. One way to navigate this area is to distinguish between exploratory studies and confirmatory studies . The former have no clearly stated scientific question, but are used to generate interesting hypotheses by identifying potential associations or effects that are then further investigated. Conclusions from these studies are very tentative and must be reported honestly as such. In contrast, standards are much higher for confirmatory studies, which investigate a specific predefined scientific question. Analysis plans and pre-registration of an experiment are accepted means for demonstrating lack of bias due to researcher degrees of freedom, and separating primary from secondary analyses allows emphasizing the main goals of the study.

Analysis Plan

The analysis plan is written before conducting the experiment and details the measurands and estimands, the hypotheses to be tested together with a power and sample size calculation, a discussion of relevant effect sizes, detection and handling of outliers and missing data, as well as steps for data normalization such as transformations and baseline corrections. If a regression model is required, its factors and covariates are outlined. Particularly in biology, handling measurements below the limit of quantification and saturation effects require careful consideration.

In the context of clinical trials, the problem of estimands has become a recent focus of attention. An estimand is the target of a statistical estimation procedure, for example the true average difference in enzyme levels between the two preparation kits. A main problem in many studies are post-randomization events that can change the estimand, even if the estimation procedure remains the same. For example, if kit B fails to produce usable samples for measurement in five out of ten cases because the enzyme level was too low, while kit A could handle these enzyme levels perfectly fine, then this might severely exaggerate the observed difference between the two kits. Similar problems arise in drug trials, when some patients stop taking one of the drugs due to side-effects or other complications.

Registration

Registration of experiments is an even more severe measure used in conjunction with an analysis plan and is becoming standard in clinical trials. Here, information about the trial, including the analysis plan, procedure to recruit patients, and stopping criteria, are registered in a public database. Publications based on the trial then refer to this registration, such that reviewers and readers can compare what the researchers intended to do and what they actually did. Similar portals for pre-clinical and translational research are also available.

1.6 Notes and Summary

The problem of measurements and measurands is further discussed for statistics in Hand ( 1996 ) and specifically for biological experiments in Coxon, Longstaff, and Burns ( 2019 ) . A general review of methods for handling missing data is Dong and Peng ( 2013 ) . The different roles of randomization are emphasized in Cox ( 2009 ) .

Two well-known reporting guidelines are the ARRIVE guidelines for animal research ( Kilkenny et al. 2010 ) and the CONSORT guidelines for clinical trials ( Moher et al. 2010 ) . Guidelines describing the minimal information required for reproducing experimental results have been developed for many types of experimental techniques, including microarrays (MIAME), RNA sequencing (MINSEQE), metabolomics (MSI) and proteomics (MIAPE) experiments; the FAIRSHARE initiative provides a more comprehensive collection ( Sansone et al. 2019 ) .

The problems of experimental design in animal experiments and particularly translation research are discussed in Couzin-Frankel ( 2013 ) . Multi-center studies are now considered for these investigations, and using a second laboratory already increases reproducibility substantially ( Richter et al. 2010 ; Richter 2017 ; Voelkl et al. 2018 ; Karp 2018 ) and allows standardizing the treatment effects ( Kafkafi et al. 2017 ) . First attempts are reported of using designs similar to clinical trials ( Llovera and Liesz 2016 ) . Exploratory-confirmatory research and external validity for animal studies is discussed in Kimmelman, Mogil, and Dirnagl ( 2014 ) and Pound and Ritskes-Hoitinga ( 2018 ) . Further information on pilot studies is found in Moore et al. ( 2011 ) , Sim ( 2019 ) , and Thabane et al. ( 2010 ) .

The deliberate use of statistical analyses and their interpretation for supporting a larger argument was called statistics as principled argument ( Abelson 1995 ) . Employing useless statistical analysis without reference to the actual scientific question is surrogate science ( Gigerenzer and Marewski 2014 ) and adaptive thinking is integral to meaningful statistical analysis ( Gigerenzer 2002 ) .

In an experiment, the investigator has full control over the experimental conditions applied to the experiment material. The experimental design gives the logical structure of an experiment: the units describing the organization of the experimental material, the treatments and their allocation to units, and the response. Statistical design of experiments includes techniques to ensure internal validity of an experiment, and methods to make inference from experimental data efficient.

  • Experimental Research Designs: Types, Examples & Methods

busayo.longe

Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research is a classical scientific experiment, similar to those performed in high school science classes.

Imagine taking 2 samples of the same plant and exposing one of them to sunlight, while the other is kept away from sunlight. Let the plant exposed to sunlight be called sample A, while the latter is called sample B.

If after the duration of the research, we find out that sample A grows and sample B dies, even though they are both regularly wetted and given the same treatment. Therefore, we can conclude that sunlight will aid growth in all similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research, where one or more independent variables are manipulated and applied to one or more dependent variables to measure their effect on the latter. The effect of the independent variables on the dependent variables is usually observed and recorded over some time, to aid researchers in drawing a reasonable conclusion regarding the relationship between these 2 variable types.

The experimental research method is widely used in physical and social sciences, psychology, and education. It is based on the comparison between two or more groups with a straightforward logic, which may, however, be difficult to execute.

Mostly related to a laboratory test procedure, experimental research designs involve collecting quantitative data and performing statistical analysis on them during research. Therefore, making it an example of quantitative research method .

What are The Types of Experimental Research Design?

The types of experimental research design are determined by the way the researcher assigns subjects to different conditions and groups. They are of 3 types, namely; pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In pre-experimental research design, either a group or various dependent groups are observed for the effect of the application of an independent variable which is presumed to cause change. It is the simplest form of experimental research design and is treated with no control group.

Although very practical, experimental research is lacking in several areas of the true-experimental criteria. The pre-experimental research design is further divided into three types

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment which was presumed to cause change, making it a posttest study.

  • One-group Pretest-posttest Research Design: 

This research design combines both posttest and pretest study by carrying out a test on a single group before the treatment is administered and after the treatment is administered. With the former being administered at the beginning of treatment and later at the end.

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

  The word “quasi” means partial, half, or pseudo. Therefore, the quasi-experimental research bearing a resemblance to the true experimental research, but not the same.  In quasi-experiments, the participants are not randomly assigned, and as such, they are used in settings where randomization is difficult or impossible.

 This is very common in educational research, where administrators are unwilling to allow the random selection of students for experimental samples.

Some examples of quasi-experimental research design include; the time series, no equivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to approve or disprove a hypothesis. It is the most accurate type of experimental design and may be carried out with or without a pretest on at least 2 randomly assigned dependent subjects.

The true experimental research design must contain a control group, a variable that can be manipulated by the researcher, and the distribution must be random. The classification of true experimental design include:

  • The posttest-only Control Group Design: In this design, subjects are randomly selected and assigned to the 2 groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between these groups.
  • The pretest-posttest Control Group Design: For this control group design, subjects are randomly assigned to the 2 groups, both are presented, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • Solomon four-group Design: This is the combination of the pretest-only and the pretest-posttest control groups. In this case, the randomly selected subjects are placed into 4 groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.

Examples of Experimental Research

Experimental research examples are different, depending on the type of experimental research design that is being considered. The most basic example of experimental research is laboratory experiments, which may differ in nature depending on the subject of research.

Administering Exams After The End of Semester

During the semester, students in a class are lectured on particular courses and an exam is administered at the end of the semester. In this case, the students are the subjects or dependent variables while the lectures are the independent variables treated on the subjects.

Only one group of carefully selected subjects are considered in this research, making it a pre-experimental research design example. We will also notice that tests are only carried out at the end of the semester, and not at the beginning.

Further making it easy for us to conclude that it is a one-shot case study research. 

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subject is the employee, while the treatment is the training conducted. This is a pretest-posttest control group experimental research example.

Evaluation of Teaching Method

Let us consider an academic institution that wants to evaluate the teaching method of 2 teachers to determine which is best. Imagine a case whereby the students assigned to each teacher is carefully selected probably due to personal request by parents or due to stubbornness and smartness.

This is a no equivalent group design example because the samples are not equal. By evaluating the effectiveness of each teacher’s teaching method this way, we may conclude after a post-test has been carried out.

However, this may be influenced by factors like the natural sweetness of a student. For example, a very smart student will grab more easily than his or her peers irrespective of the method of teaching.

What are the Characteristics of Experimental Research?  

Experimental research contains dependent, independent and extraneous variables. The dependent variables are the variables being treated or manipulated and are sometimes called the subject of the research.

The independent variables are the experimental treatment being exerted on the dependent variables. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where control can be exerted on the extraneous variables, thereby eliminating them.

Other experiments are carried out in a less controllable setting. The choice of setting used in research depends on the nature of the experiment being carried out.

  • Multivariable

Experimental research may include multiple independent variables, e.g. time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design can be majorly used in physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions on a subject matter. 

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to provide the proper treatment for diseases. In most cases, rather than directly using patients as the research subject, researchers take a sample of the bacteria from the patient’s body and are treated with the developed antibacterial

The changes observed during this period are recorded and evaluated to determine its effectiveness. This process can be carried out using different experimental research methods.

  • Education: Asides from science subjects like Chemistry and Physics which involves teaching students how to perform experimental research, it can also be used in improving the standard of an academic institution. This includes testing students’ knowledge on different topics, coming up with better teaching methods, and the implementation of other programs that will aid student learning.
  • Human Behavior: Social scientists are the ones who mostly use experimental research to test human behaviour. For example, consider 2 people randomly chosen to be the subject of the social interaction research where one person is placed in a room without human interaction for 1 year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behaviour at the end of the experiment.

  • UI/UX: During the product development phase, one of the major aims of the product team is to create a great user experience with the product. Therefore, before launching the final product design, potential are brought in to interact with the product.

For example, when finding it difficult to choose how to position a button or feature on the app interface, a random sample of product testers are allowed to test the 2 samples and how the button positioning influences the user interaction is recorded.

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error due to its dependency on variable control which may not be properly implemented. These errors could eliminate the validity of the experiment and the research being conducted.
  • Exerting control of extraneous variables may create unrealistic situations. Eliminating real-life variables will result in inaccurate conclusions. This may also result in researchers controlling the variables to suit his or her personal preferences.
  • It is a time-consuming process. So much time is spent on testing dependent variables and waiting for the effect of the manipulation of dependent variables to manifest.
  • It is expensive.
  • It is very risky and may have ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient’s death or a deteriorating health condition.
  • Experimental research results are not descriptive.
  • Response bias can also be supplied by the subject of the conversation.
  • Human responses in experimental research can be difficult to measure.

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, the subjects who are placed in 2 different environments are observed throughout the research. No matter the kind of absurd behavior that is exhibited by the subject during this period, its condition will not be changed.

This may be a very risky thing to do in medical cases because it may lead to death or worse medical conditions.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operational research for learning purposes and sometimes as a tool to estimate possible outcomes of real research. Some common situation software are Simulink, MATLAB, and Simul8.

Not all kinds of experimental research can be carried out using simulation as a data collection tool . It is very impractical for a lot of laboratory-based research that involves chemical processes.

A survey is a tool used to gather relevant data about the characteristics of a population and is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subject.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using survey s. It has relevant features that will aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the environment of the research, including the predictor variable which can be changed. On the other hand, non-experimental research cannot be controlled or manipulated by the researcher at will.

This is because it takes place in a real-life setting, where extraneous variables cannot be eliminated. Therefore, it is more difficult to conclude non-experimental studies, even though they are much more flexible and allow for a greater range of study fields.

2. The relationship between cause and effect cannot be established in non-experimental research, while it can be established in experimental research. This may be because many extraneous variables also influence the changes in the research subject, making it difficult to point at a particular variable as the cause of a particular change

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, but the same may not be said about experimental research.

Experimental Research vs. Alternatives and When to Use Them

1. experimental research vs causal comparative.

Experimental research enables you to control variables and identify how the independent variable affects the dependent variable. Causal-comparative find out the cause-and-effect relationship between the variables by comparing already existing groups that are affected differently by the independent variable.

For example, in an experiment to see how K-12 education affects children and teenager development. An experimental research would split the children into groups, some would get formal K-12 education, while others won’t. This is not ethically right because every child has the right to education. So, what we do instead would be to compare already existing groups of children who are getting formal education with those who due to some circumstances can not.

Pros and Cons of Experimental vs Causal-Comparative Research

  • Causal-Comparative:   Strengths:  More realistic than experiments, can be conducted in real-world settings.  Weaknesses:  Establishing causality can be weaker due to the lack of manipulation.

2. Experimental Research vs Correlational Research

When experimenting, you are trying to establish a cause-and-effect relationship between different variables. For example, you are trying to establish the effect of heat on water, the temperature keeps changing (independent variable) and you see how it affects the water (dependent variable).

For correlational research, you are not necessarily interested in the why or the cause-and-effect relationship between the variables, you are focusing on the relationship. Using the same water and temperature example, you are only interested in the fact that they change, you are not investigating which of the variables or other variables causes them to change.

Pros and Cons of Experimental vs Correlational Research

3. experimental research vs descriptive research.

With experimental research, you alter the independent variable to see how it affects the dependent variable, but with descriptive research you are simply studying the characteristics of the variable you are studying.

So, in an experiment to see how blown glass reacts to temperature, experimental research would keep altering the temperature to varying levels of high and low to see how it affects the dependent variable (glass). But descriptive research would investigate the glass properties.

Pros and Cons of Experimental vs Descriptive Research

4. experimental research vs action research.

Experimental research tests for causal relationships by focusing on one independent variable vs the dependent variable and keeps other variables constant. So, you are testing hypotheses and using the information from the research to contribute to knowledge.

However, with action research, you are using a real-world setting which means you are not controlling variables. You are also performing the research to solve actual problems and improve already established practices.

For example, if you are testing for how long commutes affect workers’ productivity. With experimental research, you would vary the length of commute to see how the time affects work. But with action research, you would account for other factors such as weather, commute route, nutrition, etc. Also, experimental research helps know the relationship between commute time and productivity, while action research helps you look for ways to improve productivity

Pros and Cons of Experimental vs Action Research

Conclusion  .

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments—a component of experimental research design.

In this research design, one or more subjects or dependent variables are randomly assigned to different treatments (i.e. independent variables manipulated by the researcher) and the results are observed to conclude. One of the uniqueness of experimental research is in its ability to control the effect of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 

Logo

Connect to Formplus, Get Started Now - It's Free!

  • examples of experimental research
  • experimental research methods
  • types of experimental research
  • busayo.longe

Formplus

You may also like:

Experimental Vs Non-Experimental Research: 15 Key Differences

Differences between experimental and non experimental research on definitions, types, examples, data collection tools, uses, advantages etc.

experimental design 3 variables

Response vs Explanatory Variables: Definition & Examples

In this article, we’ll be comparing the two types of variables, what they both mean and see some of their real-life applications in research

Simpson’s Paradox & How to Avoid it in Experimental Research

In this article, we are going to look at Simpson’s Paradox from its historical point and later, we’ll consider its effect in...

What is Experimenter Bias? Definition, Types & Mitigation

In this article, we will look into the concept of experimental bias and how it can be identified in your research

Formplus - For Seamless Data Collection

Collect data the right way with a versatile data collection tool. try formplus and transform your work productivity today..

If you're seeing this message, it means we're having trouble loading external resources on our website.

If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.

To log in and use all the features of Khan Academy, please enable JavaScript in your browser.

Statistics and probability

Course: statistics and probability   >   unit 6, introduction to experiment design.

  • Matched pairs experiment design
  • The language of experiments
  • Principles of experiment design
  • Experiment designs
  • Random sampling vs. random assignment (scope of inference)

experimental design 3 variables

Want to join the conversation?

  • Upvote Button navigates to signup page
  • Downvote Button navigates to signup page
  • Flag Button navigates to signup page

Video transcript

helpful professor logo

The 3 Types Of Experimental Design

The 3 Types Of Experimental Design

Dave Cornell (PhD)

Dr. Cornell has worked in education for more than 20 years. His work has involved designing teacher certification for Trinity College in London and in-service training for state governments in the United States. He has trained kindergarten teachers in 8 countries and helped businessmen and women open baby centers and kindergartens in 3 countries.

Learn about our Editorial Process

The 3 Types Of Experimental Design

Chris Drew (PhD)

This article was peer-reviewed and edited by Chris Drew (PhD). The review process on Helpful Professor involves having a PhD level expert fact check, edit, and contribute to articles. Reviewers ensure all content reflects expert academic consensus and is backed up with reference to academic studies. Dr. Drew has published over 20 academic articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education and holds a PhD in Education from ACU.

experimental design 3 variables

Experimental design refers to a research methodology that allows researchers to test a hypothesis regarding the effects of an independent variable on a dependent variable.

There are three types of experimental design: pre-experimental design, quasi-experimental design, and true experimental design.

Experimental Design in a Nutshell

A typical and simple experiment will look like the following:

  • The experiment consists of two groups: treatment and control.
  • Participants are randomly assigned to be in one of the groups (‘conditions’).
  • The treatment group participants are administered the independent variable (e.g. given a medication).
  • The control group is not given the treatment.
  • The researchers then measure a dependent variable (e.g improvement in health between the groups).

If the independent variable affects the dependent variable, then there should be noticeable differences on the dependent variable between the treatment and control conditions.

The experiment is a type of research methodology that involves the manipulation of at least one independent variable and the measurement of at least one dependent variable.

If the independent variable affects the dependent variable, then the researchers can use the term “causality.”

Types of Experimental Design

1. pre-experimental design.

A researcher may use pre-experimental design if they want to test the effects of the independent variable on a single participant or a small group of participants.

The purpose is exploratory in nature , to see if the independent variable has any effect at all.

The pre-experiment is the simplest form of an experiment that does not contain a control condition.

However, because there is no control condition for comparison, the researcher cannot conclude that the independent variable causes change in the dependent variable.

Examples include:

  • Action Research in the Classroom: Action research in education involves a teacher conducting small-scale research in their classroom designed to address problems they and their students currently face.
  • Case Study Research : Case studies are small-scale, often in-depth, studies that are notusually generalizable.
  • A Pilot Study: Pilot studies are small-scale studies that take place before the main experiment to test the feasibility of the project.
  • Ethnography: An ethnographic research study will involve thick research of a small cohort to generate descriptive rather than predictive results.

2. Quasi-Experimental Design

The quasi-experiment is a methodology to test the effects of an independent variable on a dependent variable. However, the participants are not randomly assigned to treatment or control conditions. Instead, the participants already exist in representative sample groups or categories, such as male/female or high/low SES class.

Because the participants cannot be randomly assigned to male/female or high/low SES, there are limitations on the use of the term “causality.”

Researchers must refrain from inferring that the independent variable caused changes in the dependent variable because the participants existed in already formed categories before the study began.

  • Homogenous Representative Sampling: When the research participant group is homogenous (i.e. not diverse) then the generalizability of the study is diminished.
  • Non-Probability Sampling: When researchers select participants through subjective means such as non-probability sampling, they are engaging in quasi-experimental design and cannot assign causality.
See more Examples of Quasi-Experimental Design

3. True Experimental Design

A true experiment involves a design in which participants are randomly assigned to conditions, there exists at least two conditions (treatment and control) and the researcher manipulates the level of the independent variable (independent variable).

When these three criteria are met, then the observed changes in the dependent variable (dependent variable) are most likely caused by the different levels of the independent variable.

The true experiment is the only research design that allows the inference of causality .

Of course, no study is perfect, so researchers must also take into account any threats to internal validity that may exist such as confounding variables or experimenter bias.

  • Heterogenous Sample Groups: True experiments often contain heterogenous groups that represent a wide population.
  • Clinical Trials: Clinical trials such as those required for approval of new medications are required to be true experiments that can assign causality.
See More Examples of Experimental Design

Experimental Design vs Observational Design

Experimental design is often contrasted to observational design. Defined succinctly, an experimental design is a method in which the researcher manipulates one or more variables to determine their effects on another variable, while observational design involves the observation and analysis of a subject without influencing their behavior or conditions.

Observational design primarily involves data collection without direct involvement from the researcher. Here, the variables aren’t manipulated as they would be in an experimental design.

An example of an observational study might be research examining the correlation between exercise frequency and academic performance using data from students’ gym and classroom records.

The key difference between these two designs is the degree of control exerted in the experiment . In experimental studies, the investigator controls conditions and their manipulation, while observational studies only allow the observation of conditions as independently determined (Althubaiti, 2016).

Observational designs cannot infer causality as well as experimental designs; but they are highly effective at generating descriptive statistics.

Observational DesignExperimental Design
A research approach where the investigator observes without intervening, often in natural settings.A research approach where the investigator manipulates one variable and observes the effect on another variable.
The researcher does not control or manipulate variables, but only observes them as they naturally occur.The researcher has complete control over the variables being studied, including the manipulation of the independent variable.
There is no intervention or manipulation by the researcher.The researcher intentionally introduces an intervention or treatment.
To identify patterns and relationships in naturally occurring data.To determine cause-and-effect relationships between variables.
Observing behaviors in their natural environments, conducting surveys, etc.Conducting a clinical trial to determine the efficacy of a new drug, using a control and treatment group, etc.
Useful when manipulation is unethical or impractical; Can provide rich, real-world data.Can establish causality; Can be controlled for confounding factors.
Cannot establish causality; Potential for confounding variables.May lack ecological validity (real-world application); Can be costly and time-consuming.
Typically collected , but can also be quantitative.Typically collected , but can also be qualitative.

For more, read: Observational vs Experimental Studies

Generally speaking, there are three broad categories of experiments. Each one serves a specific purpose and has associated limitations . The pre-experiment is an exploratory study to gather preliminary data on the effectiveness of a treatment and determine if a larger study is warranted.

The quasi-experiment is used when studying preexisting groups, such as people living in various cities or falling into various demographic categories. Although very informative, the results are limited by the presence of possible extraneous variables that cannot be controlled.

The true experiment is the most scientifically rigorous type of study. The researcher can manipulate the level of the independent variable and observe changes, if any, on the dependent variable. The key to the experiment is randomly assigning participants to conditions. Random assignment eliminates a lot of confounds and extraneous variables, and allows the researchers to use the term “causality.”

For More, See: Examples of Random Assignment

Baumrind, D. (1991). Parenting styles and adolescent development. In R. M. Lerner, A. C. Peterson, & J. Brooks-Gunn (Eds.), Encyclopedia of Adolescence (pp. 746–758). New York: Garland Publishing, Inc.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin.

Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13 (5), 585–589.

Matthew L. Maciejewski (2020) Quasi-experimental design. Biostatistics & Epidemiology, 4 (1), 38-47. https://doi.org/10.1080/24709360.2018.1477468

Thyer, Bruce. (2012). Quasi-Experimental Research Designs . Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195387384.001.0001

Dave

  • Dave Cornell (PhD) https://helpfulprofessor.com/author/dave-cornell-phd/ 23 Achieved Status Examples
  • Dave Cornell (PhD) https://helpfulprofessor.com/author/dave-cornell-phd/ 25 Defense Mechanisms Examples
  • Dave Cornell (PhD) https://helpfulprofessor.com/author/dave-cornell-phd/ 15 Theory of Planned Behavior Examples
  • Dave Cornell (PhD) https://helpfulprofessor.com/author/dave-cornell-phd/ 18 Adaptive Behavior Examples

Chris

  • Chris Drew (PhD) https://helpfulprofessor.com/author/chris-drew-phd/ 23 Achieved Status Examples
  • Chris Drew (PhD) https://helpfulprofessor.com/author/chris-drew-phd/ 15 Ableism Examples
  • Chris Drew (PhD) https://helpfulprofessor.com/author/chris-drew-phd/ 25 Defense Mechanisms Examples
  • Chris Drew (PhD) https://helpfulprofessor.com/author/chris-drew-phd/ 15 Theory of Planned Behavior Examples

Leave a Comment Cancel Reply

Your email address will not be published. Required fields are marked *

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • Independent vs. Dependent Variables | Definition & Examples

Independent vs. Dependent Variables | Definition & Examples

Published on February 3, 2022 by Pritha Bhandari . Revised on June 22, 2023.

In research, variables are any characteristics that can take on different values, such as height, age, temperature, or test scores.

Researchers often manipulate or measure independent and dependent variables in studies to test cause-and-effect relationships.

  • The independent variable is the cause. Its value is independent of other variables in your study.
  • The dependent variable is the effect. Its value depends on changes in the independent variable.

Your independent variable is the temperature of the room. You vary the room temperature by making it cooler for half the participants, and warmer for the other half.

Table of contents

What is an independent variable, types of independent variables, what is a dependent variable, identifying independent vs. dependent variables, independent and dependent variables in research, visualizing independent and dependent variables, other interesting articles, frequently asked questions about independent and dependent variables.

An independent variable is the variable you manipulate or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).

These terms are especially used in statistics , where you estimate the extent to which an independent variable change can explain or predict changes in the dependent variable.

Receive feedback on language, structure, and formatting

Professional editors proofread and edit your paper by focusing on:

  • Academic style
  • Vague sentences
  • Style consistency

See an example

experimental design 3 variables

There are two main types of independent variables.

  • Experimental independent variables can be directly manipulated by researchers.
  • Subject variables cannot be manipulated by researchers, but they can be used to group research subjects categorically.

Experimental variables

In experiments, you manipulate independent variables directly to see how they affect your dependent variable. The independent variable is usually applied at different levels to see how the outcomes differ.

You can apply just two levels in order to find out if an independent variable has an effect at all.

You can also apply multiple levels to find out how the independent variable affects the dependent variable.

You have three independent variable levels, and each group gets a different level of treatment.

You randomly assign your patients to one of the three groups:

  • A low-dose experimental group
  • A high-dose experimental group
  • A placebo group (to research a possible placebo effect )

Independent and dependent variables

A true experiment requires you to randomly assign different levels of an independent variable to your participants.

Random assignment helps you control participant characteristics, so that they don’t affect your experimental results. This helps you to have confidence that your dependent variable results come solely from the independent variable manipulation.

Subject variables

Subject variables are characteristics that vary across participants, and they can’t be manipulated by researchers. For example, gender identity, ethnicity, race, income, and education are all important subject variables that social researchers treat as independent variables.

It’s not possible to randomly assign these to participants, since these are characteristics of already existing groups. Instead, you can create a research design where you compare the outcomes of groups of participants with characteristics. This is a quasi-experimental design because there’s no random assignment. Note that any research methods that use non-random assignment are at risk for research biases like selection bias and sampling bias .

Your independent variable is a subject variable, namely the gender identity of the participants. You have three groups: men, women and other.

Your dependent variable is the brain activity response to hearing infant cries. You record brain activity with fMRI scans when participants hear infant cries without their awareness.

A dependent variable is the variable that changes as a result of the independent variable manipulation. It’s the outcome you’re interested in measuring, and it “depends” on your independent variable.

In statistics , dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

The dependent variable is what you record after you’ve manipulated the independent variable. You use this measurement data to check whether and to what extent your independent variable influences the dependent variable by conducting statistical analyses.

Based on your findings, you can estimate the degree to which your independent variable variation drives changes in your dependent variable. You can also predict how much your dependent variable will change as a result of variation in the independent variable.

Distinguishing between independent and dependent variables can be tricky when designing a complex study or reading an academic research paper .

A dependent variable from one study can be the independent variable in another study, so it’s important to pay attention to research design .

Here are some tips for identifying each variable type.

Recognizing independent variables

Use this list of questions to check whether you’re dealing with an independent variable:

  • Is the variable manipulated, controlled, or used as a subject grouping method by the researcher?
  • Does this variable come before the other variable in time?
  • Is the researcher trying to understand whether or how this variable affects another variable?

Recognizing dependent variables

Check whether you’re dealing with a dependent variable:

  • Is this variable measured as an outcome of the study?
  • Is this variable dependent on another variable in the study?
  • Does this variable get measured only after other variables are altered?

Here's why students love Scribbr's proofreading services

Discover proofreading & editing

Independent and dependent variables are generally used in experimental and quasi-experimental research.

Here are some examples of research questions and corresponding independent and dependent variables.

Research question Independent variable Dependent variable(s)
Do tomatoes grow fastest under fluorescent, incandescent, or natural light?
What is the effect of intermittent fasting on blood sugar levels?
Is medical marijuana effective for pain reduction in people with chronic pain?
To what extent does remote working increase job satisfaction?

For experimental data, you analyze your results by generating descriptive statistics and visualizing your findings. Then, you select an appropriate statistical test to test your hypothesis .

The type of test is determined by:

  • your variable types
  • level of measurement
  • number of independent variable levels.

You’ll often use t tests or ANOVAs to analyze your data and answer your research questions.

In quantitative research , it’s good practice to use charts or graphs to visualize the results of studies. Generally, the independent variable goes on the x -axis (horizontal) and the dependent variable on the y -axis (vertical).

The type of visualization you use depends on the variable types in your research questions:

  • A bar chart is ideal when you have a categorical independent variable.
  • A scatter plot or line graph is best when your independent and dependent variables are both quantitative.

To inspect your data, you place your independent variable of treatment level on the x -axis and the dependent variable of blood pressure on the y -axis.

You plot bars for each treatment group before and after the treatment to show the difference in blood pressure.

independent and dependent variables

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Bhandari, P. (2023, June 22). Independent vs. Dependent Variables | Definition & Examples. Scribbr. Retrieved August 16, 2024, from https://www.scribbr.com/methodology/independent-and-dependent-variables/

Is this article helpful?

Pritha Bhandari

Pritha Bhandari

Other students also liked, guide to experimental design | overview, steps, & examples, explanatory and response variables | definitions & examples, confounding variables | definition, examples & controls, "i thought ai proofreading was useless but..".

I've been using Scribbr for years now and I know it's a service that won't disappoint. It does a good job spotting mistakes”

Logo

Three Principles of Experimental Designs

by Kim Love   1 Comment

Stage 2

Yes, absolutely! Understanding experimental design can help you recognize the questions you can and can’t answer with the data. It will also help you identify possible sources of bias that can lead to undesirable results. Finally, it will help you provide recommendations to make future studies more efficient.

The Three Rs of Experimental Design

An experiment involves one or more treatments, each with two or more conditions. The defining characteristic of an experiment is that the researcher is able to assign subjects to treatment groups.

There are three principles that underlie any experiment. These are often called the three Rs of experimental design , and they are:

Randomization

Replication.

  • Reduction of variance

Let’s look at each principle in the context of a specific experiment.

experimental design 3 variables

Randomization is the assignment of the subjects in the study to treatment groups in a random way. This is one of the most important aspects of an experiment.

It ensures that the only systematic difference in groups is the treatment condition. In the training experiment, this would mean that any difference in the outcomes between the two groups is due to the training. In other words, random assignment allows you to demonstrate causation.

Suppose the researcher did not randomize, and assigned men to one group and women to the other group. It should be clear that we won’t know if differences between the treatment groups come from gender or training.

Although in this example our confounding variable , gender, is obvious, that’s not always true. Randomization is the only sure way to avoid accidental confounding and its resulting bias.

Replication refers to having multiple subjects in each group. The more subjects in each group, the easier to determine whether any differences between the groups are due to the treatment and not the characteristics of individuals in the groups.

Suppose the training study had limited resources. Would it be enough to recruit only two people, and compare their times after training? Again, it’s probably obvious you can’t do this. The difference in outcomes would depend as much on those two people as it would on the training method.

There are many considerations that go into determining sample size . Generally, though, more subjects per group means more statistical confidence in the outcomes. Too few subjects in a group makes it very hard to find differences in the outcomes between treatment groups.

Reduction of Variance

Reduction of variance refers to removing or accounting for systematic difference among subjects. This allows you to measure the differences due to the treatment more precisely. There are multiple ways to approach this.

One way is to limit the population of the study so the subjects are more similar. Another way is to incorporate covariates into the analysis. These are variables outside of the experimental design that you can measure.

A third way is blocking. This refers to identifying related subjects and randomly assigning them to different treatments.

In the training experiment, not accounting for gender could make it more difficult to estimate the effects of training. There are at least three ways to account for it in the design and data collection.

  • Only include one gender in the study, and limit the results of the study to that one gender.
  • Measure the participants’ gender and include it in the study as a covariate.
  • Include gender in the experimental design as a block. Randomly assign men and women in equal number to the two groups, and include gender in the analysis.

Application to Analysis

Although there are many types of experimental designs, the three Rs are at the heart of each of them. Advanced experimental designs simply achieve these under complicated circumstances. Understanding these principles, even without advanced knowledge, will make you a better analyst.

Always answer the following questions any time you are analyzing experimental data:

1. How was randomization applied in the experiment? This will help you understand whether you can draw causal conclusions. It will also help you recognize if the general conclusions of the study could be biased.

2. How much replication was there in the experiment? If the number of subjects was small (overall or in certain groups), this could result in a lack of findings.

3. Was variability outside of the scope of the experiment appropriately reduced? If the researcher can account for outside factors in the future, this will make the experiment more efficient.

experimental design 3 variables

Reader Interactions

' src=

July 19, 2022 at 3:02 am

Hi Kim, Thanks for this nice article,

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Privacy Overview

Information

  • Author Services

Initiatives

You are accessing a machine-readable page. In order to be human-readable, please install an RSS reader.

All articles published by MDPI are made immediately available worldwide under an open access license. No special permission is required to reuse all or part of the article published by MDPI, including figures and tables. For articles published under an open access Creative Common CC BY license, any part of the article may be reused without permission provided that the original article is clearly cited. For more information, please refer to https://www.mdpi.com/openaccess .

Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications.

Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the reviewers.

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Original Submission Date Received: .

  • Active Journals
  • Find a Journal
  • Proceedings Series
  • For Authors
  • For Reviewers
  • For Editors
  • For Librarians
  • For Publishers
  • For Societies
  • For Conference Organizers
  • Open Access Policy
  • Institutional Open Access Program
  • Special Issues Guidelines
  • Editorial Process
  • Research and Publication Ethics
  • Article Processing Charges
  • Testimonials
  • Preprints.org
  • SciProfiles
  • Encyclopedia

axioms-logo

Article Menu

experimental design 3 variables

  • Subscribe SciFeed
  • Recommended Articles
  • Google Scholar
  • on Google Scholar
  • Table of Contents

Find support for a specific problem in the support section of our website.

Please let us know what you think of our products and services.

Visit our dedicated information section to learn more about MDPI.

JSmol Viewer

Neutrosophic analysis of experimental data using neutrosophic graeco-latin square design, 1. introduction, 2. methods neutrosophic graeco-latin square design, 2.1. neutrosophic graeco-latin square design model, 2.2. calculation of sum of squares, 2.3. hypothesis tests for the treatments, row, and column effects, 2.4. confidence intervals for the treatment mean differences, 3. illustration: description of the experiment, 4.1. summary statistics, 4.2. hypotheses tests, 5. conclusions, author contributions, data availability statement, acknowledgments, conflicts of interest.

  • Euler, L. Recherches sur une nouvelle espèce de carrés magiques. Verh. Zeeuw. Gen. Weten. Vlissengen 1782 , 9 , 85–239. [ Google Scholar ]
  • Klyve, D.; Lee Stemkoski, L. Graeco-Latin Squares and a Mistaken Conjecture of Euler. Coll. Math. J. 2006 , 37 , 2–15. [ Google Scholar ] [ CrossRef ]
  • Dénes, J.; Keedwell, A.D. Latin Squares and Their Applications ; Academic: New York, NY, USA, 1974. [ Google Scholar ]
  • Bose, R.C.; Shrikhande, S.S.; Parker, E.T. Further results on the construction of mutually orthogonal Latin squares and the falsity of Euler’s conjecture. Can. J. Math. 1960 , 12 , 189–203. [ Google Scholar ] [ CrossRef ]
  • Fisher, R.A.; Yates, F. Statistical Tables for Biological, Agricultural and Medical Research , 6th ed.; Hafner (Macmillan): New York, NY, USA, 1963. [ Google Scholar ]
  • Dodge, Y.; Shah, K.R. Estimation of parameters in latin squares and graeco-latin squares with missing observations. Commun. Stat. Theory Methods 1977 , 6 , 1465–1472. [ Google Scholar ] [ CrossRef ]
  • Bose, R.C.; Shrikhande, S.S.; Bhattacharya, K. On the Construction of Pairwise Orthogonal Latin Squares and the Falsity of a Conjecture of EULER ; Mimeo. Series No. 222; University of North Carolina, Institute of Statistics: Chapel Hill, NC, USA, 1953. [ Google Scholar ]
  • Bailey, R.A. Quasi-complete Latin squares: Construction and randomization. J. R. Stat. Soc. Ser. B 1984 , 46 , 323–334. [ Google Scholar ] [ CrossRef ]
  • Hedayat, A. Self orthogonal Latin square designs and their importance. Biometrics 1973 , 29 , 393–396. [ Google Scholar ] [ CrossRef ]
  • Box, G.E.P.; Hunter, W.G.; Hunter, J.S. Statistics for Experimenters , 2nd ed.; Wiley: New York, NY, USA, 2005. [ Google Scholar ]
  • Cochran, W.G.; Cox, G.M. Experimental Designs , 2nd ed.; Wiley: New York, NY, USA, 1957. [ Google Scholar ]
  • Martin, R.J.; Nadarajah, S. Graeco–Latin Square Designs. In Encyclopedia of Biostatistics ; John Wiley & Sons: Hoboken, NJ, USA, 2005. [ Google Scholar ]
  • Montgomery, D.C. Design and Analysis of Experiments , 8th ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2013. [ Google Scholar ]
  • Smarandache, F. Neutrosophic set, a generalization of the intuitionistic fuzzy sets. Int. J. Pure Appl. Math. 2005 , 24 , 287–297. [ Google Scholar ]
  • Smarandache, F. A Unifying Field in Logics: Neutrosophic Logic. Neutrosophy, Neutrosophic Set, Neutrosophic Probability and Statistics , 6th ed.; Info Learn Quest: Ann Arbor, MI, USA, 2007. [ Google Scholar ]
  • Smarandache, F. Introduction to Neutrosophic Statistics ; Sitech & Education Publishing: Columbus, OH, USA, 2014. [ Google Scholar ]
  • Smarandache, F. Neutrosophic Statistics vs. Classical Statistics. In Nidus Idearum/Superluminal Physics , 3rd ed.; 2019; Volume 7, Available online: http://fs.unm.edu/NidusIdearum7-ed3.pdf (accessed on 19 July 2024).
  • Little, R.J.; Rubin, D.B. Statistical Analysis with Missing Data ; John Wiley & Sons: Hoboken, NJ, USA, 2019; Volume 793. [ Google Scholar ] [ CrossRef ]
  • Yang, Y.; Yuan, Y.; Zhang, G.; Wang, H.; Chen, Y.C.; Liu, Y.; Tarolli, C.G.; Crepeau, D.; Bukartyk, J.; Junna, M.R.; et al. Artificial intelligence-enabled detection and assessment of Parkinson’s disease using nocturnal breathing signals. Nat. Med. 2022 , 28 , 2207–2215. [ Google Scholar ] [ CrossRef ] [ PubMed ]

Click here to enlarge figure

Operators→
Raw Material↓
12345
( )
1ABCDE
2BCDEA
3CDEAB
4DEABC
5EABCD
( )
1αγεβδ
2βδαγε
3γεβδα
4δαγεβ
5εβδαγ
( )
1
2
3
4
5
Operators→
Raw Material↓
12345
1Aα = [−0.99, −1.01]Bγ = [−4.95, −5.05]Cε = [−5.94, −6.06]Dβ = [−0.99, −1.01]Eδ = [−0.99, −1.01]
2Bβ = [−7.92, −8.08]Cδ = [−0.99, −1.01] Dα = [4.95, 5.05]Eγ = [1.98, 2.02]Aε = [10.89, 11.11]
3Cγ = [−6.93, −7.07]Dε = [12.87, 13.13]Eβ = [0.99, 1.01]Aδ = [1.98, 2.02] Bα = [−3.96, −4.04]
4Dδ = [0.99, 1.01] Eα = [5.94, 6.06]Aγ = [0.99, 1.01]Bε = [−1.98, −2.02]Cβ = [−2.97, −3.03]
5Eε = [−2.97, −3.03]Aβ = [4.95, 5.05]Bδ = [−4.95, −5.05] Cα = [3.96, 4.04]Dγ = [5.94, 6.06]
[−17.82, −18.18][17.82, 18.18][−3.96, −4.04][4.95, 5.05][8.91, 9.09]
FormulationsABCDE
Mean[−2.828, −2.772][−4.848, −4.752][−2.626, −2.574][4.752, 4.848][0.99, 1.01]
Effect[−4.808, −4.792][−6.828, −6.772][−4.606, −4.594][2.732, 2.868][−1.03, −0.97]
Assemblies δΕ
Mean[1.98, 2.02][−1.212, −1.188][−0.606, −0.594][−0.808, −0.792][2.574, 2.626]
Effect[0, 0.04][−3.192, −3.168][−2.586, −2.574][−2.788, −2.772][0.594, 0.646]
OperatorsO O O O O
Mean[−3.636, −3.564][3.564, 3.636][−0.808, −0.782][0.99, 1.01][1.782, 1.818]
Effect[−4.032, 3.968][3.16, 3.24][−1.204, −1.196][0.586, 0.614][1.378, 1.422]
Raw MaterialRM RM RM RM RM
Mean[−2.828, −2.772][1.782, 1.818][0.99, 1.01][0.594, 0.606][1.386, 1.414]
Effect[−3.224, −3.176][1.378, 1.422][0.586, 0.614][0.19, 0.21][0.982, 1.018]
SourceDFSSF(4, 8)p-Value
Formulation4[323.273, 336.793][6.988, 18.939][0.0004, 0.0101]
Assemblies4[60.606, 63.406][1.310, 3.566][0.0594, 0.3443]
Raw Material4[66.487, 69.527][1.437, 3.910][0.0478, 0.3064]
Operator4[146.855, 157.095][3.174, 8.834][0.0049, 0.0771]
Error8[35.566, 92.527]
Total24[662.388, 689.748]
SourceDFSSF(4, 8)p-Value
Formulation4330.03310.306043270.00303
Assemblies462.0061.9362806710.19779
Raw Material468.0072.1236757670.16933
Operator4151.9754.7457706510.02947
Error864.0465
Total24676.068
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Kumar, P.; Moazzamigodarzi, M.; Rahimi, M. Neutrosophic Analysis of Experimental Data Using Neutrosophic Graeco-Latin Square Design. Axioms 2024 , 13 , 559. https://doi.org/10.3390/axioms13080559

Kumar P, Moazzamigodarzi M, Rahimi M. Neutrosophic Analysis of Experimental Data Using Neutrosophic Graeco-Latin Square Design. Axioms . 2024; 13(8):559. https://doi.org/10.3390/axioms13080559

Kumar, Pranesh, Mahdieh Moazzamigodarzi, and Mohamadtaghi Rahimi. 2024. "Neutrosophic Analysis of Experimental Data Using Neutrosophic Graeco-Latin Square Design" Axioms 13, no. 8: 559. https://doi.org/10.3390/axioms13080559

Article Metrics

Article access statistics, further information, mdpi initiatives, follow mdpi.

MDPI

Subscribe to receive issue release notifications and newsletters from MDPI journals

In silico and experimental characterization of a new polyextremophilic subtilisin-like protease from Microbacterium metallidurans and its application as a laundry detergent additive

  • Original Article
  • Published: 12 August 2024
  • Volume 14 , article number  200 , ( 2024 )

Cite this article

experimental design 3 variables

  • Afwa Gorrab 1 ,
  • Rania Ouertani 1 ,
  • Khouloud Hammami 1 ,
  • Amal Souii 1 ,
  • Fatma Kallel 2 ,
  • Ahmed Slaheddine Masmoudi 1 ,
  • Ameur Cherif 1 &
  • Mohamed Neifar   ORCID: orcid.org/0000-0001-6279-2769 2 , 3  

26 Accesses

Explore all metrics

Considering the current growing interest in new and improved enzymes for use in a variety of applications, the present study aimed to characterize a novel detergent-stable serine alkaline protease from the extremophilic actinobacterium Microbacterium metallidurans TL13 (MmSP) using a combined in silico and experimental approach. The MmSP showed a close phylogenetic relationship with high molecular weight S8 peptidases of Microbacterium species. Moreover, its physical and chemical parameters computed using Expasy’s ProtParam tool revealed that MmSP is hydrophilic, halophilic and thermo-alkali stable. 3D structure modelling and functional prediction of TL13 serine protease resulted in the detection of five characteristic domains: [catalytic subtilase domain, fibronectin (Fn) type-III domain, peptidase inhibitor I9, protease-associated (PA) domain and bacterial Ig-like domain (group 3)], as well as the three amino acid residues [aspartate (D182), histidine (H272) and serine (S604)] in the catalytic subtilase domain. The extremophilic strain TL13 was tested for protease production using agricultural wastes/by-products as carbon substrates. Maximum enzyme activity (390 U/gds) was obtained at 8th day fermentation on potato peel medium. Extracellular extract was concentrated and partially purified using ammonium sulfate precipitation methodology (1.58 folds purification fold). The optimal pH, temperature and salinity of MmSP were 9, 60 °C and 1 M NaCl, respectively. The MmSP protease showed broad pH stability, thermal stability, salt tolerance and detergent compatibility. In order to achieve the maximum stain removal efficacy by the TL 13 serine protease, the operation conditions were optimized using a Box–Behnken Design (BBD) with four variables, namely, time (15–75 min), temperature (30–60 °C), MmSP enzyme concentration (5–10 U/mL) and pH (7–11). The maximum stain removal yield (95 ± 4%) obtained under the optimal enzymatic operation conditions (treatment with 7.5 U/mL of MmSP during 30 min at 32 °C and pH9) was in good agreement with the value predicted by the regression model (98 ± %), which prove the validity of the fitted model. In conclusion, MmSP appears to be a good candidate for industrial applications, particularly in laundry detergent formulations, due to its high hydrophilicity, alkali-halo-stability, detergent compatibility and stain removal efficiency.

This is a preview of subscription content, log in via an institution to check access.

Access this article

Subscribe and save.

  • Get 10 units per month
  • Download Article/Chapter or eBook
  • 1 Unit = 1 Article or 1 Chapter
  • Cancel anytime

Price includes VAT (Russian Federation)

Instant access to the full article PDF.

Rent this article via DeepDyve

Institutional subscriptions

experimental design 3 variables

Similar content being viewed by others

Catalytic role of thermostable metalloproteases from bacillus subtilis kt004404 as dehairing and destaining agent.

experimental design 3 variables

Heterologous Expression and Structural Elucidation of a Highly Thermostable Alkaline Serine Protease from Haloalkaliphilic Actinobacterium, Nocardiopsis sp. Mit-7

experimental design 3 variables

Purification and biochemical characterization of a novel thermostable and halotolerant subtilisin SAPN, a serine protease from Melghiribacillus thermohalophilus Nari2AT for chitin extraction from crab and shrimp shell by-products

Data availability.

All available data are already reflected in the manuscript, other relevant data may be available upon on request from the corresponding author.

Abidi F, Liman F, Nejib M (2008) Production of alkaline proteases by Botrytis cinerea using economic raw materials: assay as biodetergent. Process Biochem 43:1202–1208. https://doi.org/10.1016/j.procbio.2008.06.018

Article   CAS   Google Scholar  

Al-Dhuayan I, Kotb E, Alqosaibi A, Mahmoud A (2021) Histological studies on a newly isolated Bacillus subtilis D10 protease in the debridement of burn wound eschars using mouse model. Pharmaceutics 13:923. https://doi.org/10.3390/pharmaceutics13070923

Article   CAS   PubMed   PubMed Central   Google Scholar  

Arabacı N, Karaytuğ T (2023) Alkaline thermo- and oxidant-stable protease from Bacillus pumilus strain TNP93: laundry detergent formulations. Indian J Microbiol 63(4):575–587. https://doi.org/10.1007/s12088-023-01115-3

Article   CAS   PubMed   Google Scholar  

Asitok A, Ekpenyong M, Takon I, Antai S, Ogarekpe N, Antigha R, Edet P, Ben U, Akpan A, Antai A, Essien J (2022) Overproduction of a thermo-stable halo-alkaline protease on agro-waste-based optimized medium through alternate combinatorial random mutagenesis of Stenotrophomonas acidaminiphila . Biotechnol Rep (amst) 35:e00746. https://doi.org/10.1016/j.btre.2022.e00746

Bhatt HB, Singh SP (2020) Cloning, expression, and structural elucidation of a biotechnologically potential alkaline serine protease from a newly isolated haloalkaliphilic Bacillus lehensis JO-26. Front Microbiol 11:941. https://doi.org/10.3389/fmicb.2020.00941

Article   PubMed   PubMed Central   Google Scholar  

Blanco AS, Durive OP, Pérez SB, Montes ZD, Guerra NP (2016) Simultaneous production of amylases and proteases by Bacillus subtilis in brewery wastes. Braz J Microbiol 47:665–674. https://doi.org/10.1016/j.bjm.2016.04.019

Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem 72(1–2):248–254. https://doi.org/10.1016/0003-2697(76)90527-3

Cavello A, Hours R, Cavalitto S (2012) Bioprocessing of “hair waste” by Paecilomyces lilacinus as a source of a bleach-stable, alkaline, and thermostable keratinase with potential application as a laundry detergent. Additive: characterization and wash performance analysis. Biotechnol Res Inter 2:1–12. https://doi.org/10.1155/2012/369308

Clarridge JE (2004) Impact of 16S rRNA gene sequence analysis for identification of bacteria on clinical microbiology and infectious diseases. Clin Microbiol Rev 17:840–862. https://doi.org/10.1128/CMR.17.4.840-862.2004

Contesini FJ, Melo RR, Sato HH (2017) An overview of Bacillus proteases : from production to application. Crit Rev Biotechnol 38:321–334. https://doi.org/10.1080/07388551.2017.1354354

Falkenberg F, Bott M, Bongaerts J, Siegert P (2022a) Phylogenetic survey of the subtilase family and a data-mining-based search for new subtilisins from Bacillaceae . Manuscript submitted for publication. Front Microbiol 13:1017978. https://doi.org/10.3389/fmicb.2022.1017978

Falkenberg F, Rahba J, Fischer D, Bott M, Bongaerts J, Siegert P (2022b) Biochemical characterization of a novel oxidatively stable, halotolerant, and high-alkaline subtilisin from Alkalihalobacillus okhensis Kh10-101T. FEBS Open Bio 12:1729–1746. https://doi.org/10.1002/2211-5463.13457

Falkenberg F, Voß L, Bott M, Bongaerts J, Siegert P (2023) New robust subtilisins from halotolerant and halophilic Bacillaceae. Appl Microbiol Biotechnol 107(12):3939–3954. https://doi.org/10.1007/s00253-023-12553-w

Fath M, Fazaelipoor MH (2015) Production of proteases in a novel trickling tray bioreactor. Waste Biomass Valor 6:475–480. https://doi.org/10.1007/s12649-015-9371-6

Ferreira L, Ramos M, Dordick J, Gil M (2003) Influence of different silica derivatives in the immobilization and stabilization of a Bacillus licheniformis protease (subtilisin Carlsberg). J Mol Catal B Enzym 21:189–199. https://doi.org/10.1016/S1381-1177(02)00223-0

Geng C, Nie X, Tang ZZ, Lin Y, Sun J, Peng M, D, (2016) A novel serine protease, Sep1, from Bacillus firmus DS-1 has nematicidal activity and degrades multiple intestinal-associated nematode proteins. Sci Rep 6:25012. https://doi.org/10.1038/srep25012

Gurunathan R, Huang B, Ponnusamy VK, Hwang J-S, Dahms H-U (2021) Novel recombinant keratin degrading subtilisin like serine alkaline protease from Bacillus cereus isolated from marine hydrothermal vent crabs. Sci Rep 11:12007. https://doi.org/10.1038/s41598-021-90375-4

Hashmi S, Iqbal S, Ahmed I, Janjua HA (2022) Production, optimization, and partial purification of alkali-thermotolerant proteases from newly isolated Bacillus subtilis S1 and Bacillus amyloliquefaciens KSM12. Processes 10:1050. https://doi.org/10.3390/pr10061050

Hmidet N, El-Hadj Ali N, Haddar A, Kanoun S, Alya S-K, Nasri M (2009) Alkaline proteases and thermostable α-amylase co-produced by Bacillus licheniformis NH1: characterization and potential application as detergent additive. Biochem Eng J 47(1–3):71–79. https://doi.org/10.1016/j.bej.2009.07.005

Ibrahim ASS, Al-Salamah AA, El-Badawi YB, El-Tayeb MA, Antranikian G (2015) Detergent-, solvent- and salt-compatible thermoactive alkaline serine protease from halotolerant alkaliphilic Bacillus sp. NPST-AK15: purification and characterization. Extremophiles 19(5):961–971. https://doi.org/10.1007/s00792-015-0771-0

Iqbal I, Aftab MN, Afzal MS, Zafar A (2020) Kaleem characterization of Geobacillus stearothermophilus protease for detergent industry. Revista Mexicana De Ingeniería Química 19(1):267–279

Jain D, Pancha I, Mishra SK, Shrivastav A, Mishra S (2012) Purification and characterization of haloalkaline thermoactive, solvent stable and SDS-induced protease from Bacillus sp.: a potential additive for laundry detergents. Bioresource Technol 115:228–236. https://doi.org/10.1016/j.biortech.2011.10.081

Jellouli K, Ghorbel-Bellaaj O, Ayed HB, Manni L, Agrebi R, Nasri M (2011) Alkalineprotease from Bacillus licheniformis MP1: purification, characterization and potential application as a detergent additive and for shrimp waste deproteinization. Process Biochem 46:1248–1256. https://doi.org/10.1016/j.procbio.2011.02.012

Joshi S, Satyanarayana T (2013) Characteristics and applications of a recombinant alkaline serine protease from a novel bacterium Bacillus lehensis . Bioresour Technol 131:76–85. https://doi.org/10.1016/j.biortech.2012.12.124

Joshi N, Kocher GS, Kalia A, Banga HS (2020) Development of nano-silver alkaline protease bio-conjugate depilating eco-benign formulation by utilizing potato peel based medium. Int J Biol Macromol 152:261–271. https://doi.org/10.1016/j.ijbiomac.2020.02.251

Kazan D, Denizci AA, Oner MNK, Erarslan A (2005) Purification and characterization of a serine alkaline protease from Bacillus clausii GMBAE 42. J Ind Microbiol Biotechnol 32:335–344. https://doi.org/10.1007/s10295-005-0260-z

Kembhavi AA, Kulkarni A, Pant A (1993) Salt-tolerant and thermostable alkaline protease from Bacillus subtilis NCIM no. 64. Appl Biochem Biotechnol 38(1–2):83–92. https://doi.org/10.1007/BF02916414

Laemmli UK (1970) Cleavage of structural proteins during the assembly of the head of bacteriophage T4. Nature 227(5259):680–685. https://doi.org/10.1038/227680a0

Madala PK, Tyndall JD, Nall T, Fairlie DP (2010) Update 1 of: proteases universally recognize beta strands in their active sites. Chem Rev 110:PR1–PR31. https://doi.org/10.1021/cr900368a

Mahakhan P, Apiso P, Srisunthorn K, Vichitphan K, Vichitphan S, Punyauppa-Path S, Sawaengkaew J (2023) Alkaline protease production from Bacillus gibsonii 6BS15–4 using dairy effluent and its characterization as a laundry detergent additive. J Microbiol Biotechnol 33(2):195–202. https://doi.org/10.4014/jmb.2210.10007

Mahmoud A, Kotb E, Alqosaibi AI, Al-Karmalawy AA, Al-Dhuayan IS, Alabkari H (2021) In vitro and in silico characterization of alkaline serine protease from Bacillus subtilis D9 recovered from Saudi Arabia. Heliyon 7(10):e08148. https://doi.org/10.1016/j.heliyon.2021.e08148

Manavalan T, Manavalan A, Ramachandran S, Heese K (2020) Identification of a novel thermostable alkaline protease from Bacillus megaterium -TK1 for the detergent and leather industry. Biology (basel) 9(12):472. https://doi.org/10.3390/biology9120472

Mathieu D, Nony J, Phan-Than-Luu R (2000). NEMROD-W software. LPRAI: Marseille

Matkawala F, Nighojkar S, Kumar A, Nighojkar A (2019) Enhanced production of alkaline protease by Neocosmospora sp. N1 using custard apple seed powder as inducer and its application for stain removal and dehairing. Biocat Agric Biotech 21:101310. https://doi.org/10.1016/j.bcab.2019.101310

Article   Google Scholar  

Matrawy AA, Khalil AI, Marey HS, Embaby AM (2021a) Use of wheat straw for value-added product xylanase by Penicillium chrysogenum strain A3 DSM105774. J Fungi 7(9):696. https://doi.org/10.3390/jof7090696

Matrawy AA, Khalil AI, Marey HS, Embaby AM (2021b) Bio-valorization of the raw agro-industrial waste rice husk through directed production of xylanase by Thermomyces lanuginosus strain A3–1 DSM 105773: a statistical sequential model. Biomass Convers Biorefin 11:2177–2189. https://doi.org/10.1007/s13399-020-00824-9

Mechri S, Bouacem K, Zaraî Jaouadi N, Rekik H, Ben Elhoul M, Omrane Benmrad M, Hacene H, Bejar S, Bouanane-Darenfed A, Jaouadi B (2019) Identification of a novel protease from the thermophilic Anoxybacillus kamchatkensis M1V and its application as laundry detergent additive. Extremophiles 23(6):687–706. https://doi.org/10.1007/s00792-019-01123-6

Mokashe N, Chaudhari B, Patil U (2018) Operative utility of salt-stable proteases of halophilic and halotolerant bacteria in the biotechnology sector. Int J Biol Macromol 117:493–522. https://doi.org/10.1016/j.ijbiomac.2018.05.217

Neifar M, Jaouani A, Kamoun A, Ghorbel RE, Chaabouni SE (2011) Decolourization of solophenyl red 3BL polyazo dye by laccasemediator system: optimization through response surface methodology. Enzyme Res 2011:179050. https://doi.org/10.4061/2011/179050

Ouertani R, Ouertani A, Mahjoubi M, Bousselmi Y, Najjari A, Cherif H, Chamkhi A, Mosbah A, Khdhira H, Sghaier H, Chouchane H, Cherif A, Neifar M (2020) New plant growth-promoting, chromium-detoxifying Microbacterium species isolated from a tannery wastewater: performance and genomic insights. Front Bioeng Biotechnol 8:521. https://doi.org/10.3389/fbioe.2020.00521

Pant G (2015) Production, optimization and partial purification of protease from Bacillus subtilis . J Taibah Univ Sci 9:50–55. https://doi.org/10.1016/j.jtusci.2014.04.010

Phrommao E, Yongsawatdigul J, Rodtong S, Yamabhai M (2011) A novel subtilase with NaCl-activated and oxidant-stable activity from Virgibacillus sp SK37. BMC Biotechnol. https://doi.org/10.1186/1472-6750-11-65

Pradeep NV, Anupama, Vidyashree KG, Lakshmi P (2012) In silico characterization of industrial important cellulases using computational tools. Adv Life Sci Technol. 4:2012 https://www.researchgate.net/publication/260912401

Rai SK, Mukherjee AK (2010) Statistical optimization of production, purification and industrial application of a laundry detergent and organic solvent-stable subtilisin-like serine protease (Alzwiprase) from Bacillus subtilis DM-04. Biochem Eng J 48(2):173–180. https://doi.org/10.1016/j.bej.2009.09.007

Raveendran S, Parameswaran B, Beevi US (2018) Applications of microbial enzymes in food industry. Food Technol Biotechnol 56:16–30

Rawlings ND (2013) Identification and prioritization of novel uncharacterized peptidases for biochemical characterization. J Biol Databases 2013:022. https://doi.org/10.1093/database/bat022

Rawlings ND, Barrett AJ, Thomas PD, Huang X, Bateman A, Finn RD (2018) The MEROPS database of proteolytic enzymes, their substrates and inhibitors in 2017 and a comparison with peptidases in the PANTHER database. Nucleic Acids Res 46:D624–D632. https://doi.org/10.1093/nar/gkx1134

Razzaq A, Shamsi S, Ali A, Ali Q, Sajjad M (2019) Microbial proteases applications. Front Bioeng Biotechnol 7:1–10. https://doi.org/10.3389/fbioe.2019.00110

Saeki K, Hitomi J, Okuda M, Hatada Y, Kageyama Y, Takaiwa M, Kubota H, Hagihara H, Kobayashi T, Kawai S, Ito S (2002) A novel species of alkaliphilic Bacillus that produces an oxidatively stable alkaline serine protease. Extremophiles 6(1):65–72. https://doi.org/10.1007/s007920100224

Saeki K, Magallones MV, Takimura Y, Hatada Y, Kobayashi T, Kawai S, Ito S (2003) Nucleotide and deduced amino acid sequences of a new subtilisin from an alkaliphilic Bacillus isolate. Curr Microbiol 47:337–340. https://doi.org/10.1007/s00284-002-4018-9

Saggu SK, Bala R, Hora R, Mishra PC (2023) Purification and characterization of a high molecular weight serine protease from Microbacterium paraoxydans sp. SKS10. Biotechnol Appl Biochem 70(5):1741–1753. https://doi.org/10.1002/bab.2472

Salwan R, Sharma V (2019) Trends in extracellular serine proteases of bacteria as detergent bioadditive: alternate and environmental friendly tool for detergent industry. Arch Microbiol 201:863–877. https://doi.org/10.1007/s00203-019-01662-8

Savitha S, Sadhasivam S, Swaminathan K, Lin FH (2011) Fungal protease: production, purification and compatibility with laundry detergents and their wash performance. J Taiwan Inst Chem Eng 42:298–304. https://doi.org/10.1016/j.jtice.2010.05.012

Shinde U, Thomas G (2011) Insights from bacterial subtilases into the mechanisms of intramolecular chaperone-mediated activation of furin. Methods Mol Biol 768:59–106. https://doi.org/10.1007/978-1-61779-204-5_4

Sinha R, Khare SK (2013) Characterization of detergent compatible protease of a halophilic Bacillus sp. EMB9: differential role of metal ions in stability and activity. Bioresour Technol 145:357–361. https://doi.org/10.1016/j.biortech.2012.11.024

Soares VF, Castilho LR, Bon EP (2005) High-yield Bacillus subtilis protease production by solid-state fermentation. In: Freire DM (ed) Twenty-sixth symposium on biotechnology for fuels and chemicals. Springer, Cham, pp 311–319

Chapter   Google Scholar  

Sobolev OV, Afonine PV, Moriarty NW, Hekkelman ML, Joosten RP, Perrakis A, Adams PD (2020) A global Ramachandran score identifies protein structures with unlikely stereochemistry. Structure 28(11):1249-1258.e2. https://doi.org/10.1016/j.str.2020.08.005

Song P, Zhang X, Wang S, Xu W, Wang F, Fu R, Wei F (2023) Microbial proteases and their applications. Front Microbiol 14:1236368. https://doi.org/10.3389/fmicb.2023.1236368

Souii A, Gorrab A, Ouertani R, Ouertani A, Hammami K, Saidi N, Souissi Y, Chouchane H, Masmoudi AS, Sghaier H, Cherif A, Neifar M (2023) Sustainable bioethanol production from enzymatically hydrolyzed second-generation Posidonia oceanica waste using stable Microbacterium metallidurans carbohydrate-active enzymes as biocatalysts. Biomass Conv Bioref 13:14585–14604. https://doi.org/10.1007/s13399-022-02915-1

Takenaka S, Takada A, Kimura Y, Watanabe M, Kuntiya A (2022) Improvement of the halotolerance of a Bacillus serine protease by protein surface engineering. J Basic Microbiol 62:174–184. https://doi.org/10.1002/jobm.202100335

Tarek H, Nam KB, Kim YK, Suchi SA, Yoo JC (2023) Biochemical characterization and application of a detergent stable, antimicrobial and antibiofilm potential protease from Bacillus siamensis . Int J Mol Sci 24(6):5774. https://doi.org/10.3390/ijms24065774

Tekin A, Uzuner U, Sezen K (2020) Homology modeling and heterologous expression of highly alkaline subtilisin-like serine protease from Bacillus halodurans C-125. Biotechnol Lett 43:479–494. https://doi.org/10.1007/s10529-020-03025-6

Thebti W, Riahi Y, Belhadj O (2016) Purification and characterization of a new thermostable, Haloalkaline, solvent stable, and detergent compatible serine protease from Geobacillus toebii strain LBT 77. Biomed Res Int 2016:9178962. https://doi.org/10.1155/2016/9178962

Tian LX, Chuan LT, Selimin MA (2022) Effect of protease in commercialized detergent powder on blood removal efficiency. Res Manag Technol Bus. 3(1): 266–274. http://publisher.uthm.edu.my/periodicals/index.php/rmtb

Toyokawa Y, Takahara H, Reungsang A, Fukuta M, Hachimine Y, Tachibana S, Yasuda M (2010) Purification and characterization of a halotolerant serine proteinase from thermotolerant Bacillus licheniformis RKK-04 isolated from Thai fish sauce. Appl Microbiol Biotechnol 86(6):1867–1875. https://doi.org/10.1007/s00253-009-2434-5

Tuysuz E, Gonul-Baltaci N, Omeroglu MA, Adiguzel A, Taskin M, Ozkan H (2020) Co-production of Amylase and protease by locally isolated thermophilic bacterium Anoxybacillus rupiensis T2 in sterile and non-sterile media using waste potato peels as substrate. Waste Biomass Valor 11:6793–6802. https://doi.org/10.1007/s12649-020-00936-3

Uçurum M, Özdemir A, Teke Ç, Serencam H, İpek M (2018) Optimization of adsorption parameters for ultra-fine calcite using a box-behnken experimental design. Open Chem 16:992–1000. https://doi.org/10.1515/chem-2018-0114

Walker JM (2005) The proteomics protocols handbook. Humana Press, Totowa

Book   Google Scholar  

Yamagata Y, Maeda H, Nakajima T, Ichishima E (2002) The molecular surface of proteolytic enzymes has an important role in stability of the enzymatic activity in extraordinary environments. Eur J Biochem 269:4577–4585. https://doi.org/10.1046/j.1432-1033.2002.03153.x

Zorgani MA, Patron K, Ml D (2014) New insight in the structural features of haloadaptation in a-amylases from halophilic archaea following homology modeling strategy: folded and stable conformation maintained through low hydrophobicity and highly negative charged surface. J Comput Aided Mol Des 28:721–734. https://doi.org/10.1007/s10822-014-9754-y

Download references

This research was funded by the Tunisian Ministry of Higher Education and Scientific Research in the ambit of the laboratory projects LR11ES31 and LR16ES20.

Author information

Authors and affiliations.

Laboratory BVBGR-LR11ES31, Institute of Biotechnology of Sidi Thabet, Biotechpole Sidi Thabet, 2020, Ariana, Tunisia

Afwa Gorrab, Rania Ouertani, Khouloud Hammami, Amal Souii, Ahmed Slaheddine Masmoudi & Ameur Cherif

Laboratory of Plant Improvement and Valorization of Agro-resources (APVA-LR16ES20), ENIS, University of Sfax, 3030, Sfax, Tunisia

Fatma Kallel & Mohamed Neifar

Common Services Unit “Bioreactor Coupled with an Ultrafilter”, ENIS, University of Sfax, 3030, Sfax, Tunisia

Mohamed Neifar

You can also search for this author in PubMed   Google Scholar

Contributions

M.N., A.S.M. and A.C. conceived and designed the study; A.G., R.O., A.S., F.K. and K.H. conducted experiments. M.N. and A.G. did bioinformatics and statistical analyses; A.G. and F.K. retrieved references. The first draft of the manuscript was written by A.G. and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mohamed Neifar .

Ethics declarations

Conflict of interest.

The authors declare that they have no conflict of interest in the publication.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Gorrab, A., Ouertani, R., Hammami, K. et al. In silico and experimental characterization of a new polyextremophilic subtilisin-like protease from Microbacterium metallidurans and its application as a laundry detergent additive. 3 Biotech 14 , 200 (2024). https://doi.org/10.1007/s13205-024-04043-1

Download citation

Received : 27 June 2024

Accepted : 02 August 2024

Published : 12 August 2024

DOI : https://doi.org/10.1007/s13205-024-04043-1

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Subtilisin-like protease
  • Zero-cost substrate
  • Response surface methodology
  • Stain removal
  • Find a journal
  • Publish with us
  • Track your research

IMAGES

  1. PPT

    experimental design 3 variables

  2. PPT

    experimental design 3 variables

  3. PPT

    experimental design 3 variables

  4. PPT

    experimental design 3 variables

  5. Experimental Design for Three Variables with Three Levels

    experimental design 3 variables

  6. An Intuitive Study of Experimental Design

    experimental design 3 variables

COMMENTS

  1. Guide to Experimental Design

    Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023. Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables. Experimental design create a set of procedures to systematically test a hypothesis. A good experimental ...

  2. Experimental Design: Types, Examples & Methods

    Three types of experimental designs are commonly used: 1. Independent Measures. Independent measures design, also known as between-groups, is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.

  3. Experimental Design

    The " variables " are any factor, trait, or condition that can be changed in the experiment and that can have an effect on the outcome of the experiment. An experiment can have three kinds of variables: i ndependent, dependent, and controlled. The independent variable is one single factor that is changed by the scientist followed by ...

  4. Variables in Research: Breaking Down the Essentials of Experimental Design

    The Role of Variables in Research. In scientific research, variables serve several key functions: Define Relationships: Variables allow researchers to investigate the relationships between different factors and characteristics, providing insights into the underlying mechanisms that drive phenomena and outcomes. Establish Comparisons: By manipulating and comparing variables, scientists can ...

  5. Experimental Design

    Experimental Design. Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results. Experimental design typically includes ...

  6. Experimental Design: Definition and Types

    An experiment is a data collection procedure that occurs in controlled conditions to identify and understand causal relationships between variables. Researchers can use many potential designs. The ultimate choice depends on their research question, resources, goals, and constraints. In some fields of study, researchers refer to experimental ...

  7. A Quick Guide to Experimental Design

    A good experimental design requires a strong understanding of the system you are studying. There are five key steps in designing an experiment: Consider your variables and how they are related. Write a specific, testable hypothesis. Design experimental treatments to manipulate your independent variable.

  8. Experimental Design for ANOVA

    The term experimental design refers to a plan for conducting an experiment in such a way that research results will be valid and easy to interpret. This plan includes three interrelated activities: ... With respect to the relationship under investigation, an experimental design needs to account for three types of variables: Dependent variable ...

  9. Experimental Design

    A factorial experimental design is used to investigate the effect of two or more independent variables on one dependent variable. For example, let's say a researcher wanted to investigate components for increasing SAT Scores. The three components are: SAT intensive class (yes or no). SAT Prep book (yes or no).

  10. Experimental Research Design

    Experimental research design is centrally concerned with constructing research that is high in causal (internal) validity. Randomized experimental designs provide the highest levels of causal validity. Quasi-experimental designs have a number of potential threats to their causal validity. Yet, new quasi-experimental designs adopted from fields ...

  11. 13. Experimental design

    Key Takeaways. Experimental designs are useful for establishing causality, but some types of experimental design do this better than others. Experiments help researchers isolate the effect of the independent variable on the dependent variable by controlling for the effect of extraneous variables.; Experiments use a control/comparison group and an experimental group to test the effects of ...

  12. Design of experiments

    The design of experiments ( DOE or DOX ), also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions ...

  13. Guide to experimental research design

    Pre-experimental research design. A pre-experimental research study is a basic observational study that monitors independent variables' effects. During research, you observe one or more groups after applying a treatment to test whether the treatment causes any change. The three subtypes of pre-experimental research design are:

  14. Chapter 1 Principles of Experimental Design

    1.3 The Language of Experimental Design. By an experiment we understand an investigation where the researcher has full control over selecting and altering the experimental conditions of interest, and we only consider investigations of this type. The selected experimental conditions are called treatments.An experiment is comparative if the responses to several treatments are to be compared or ...

  15. Experimental Design

    Description. Experimental design was pioneered by R. A. Fisher in the fields of agriculture and education (Fisher 1935 ). In studies that use experimental design, the independent variables are manipulated or controlled by researchers, which enables the testing of the cause-and-effect relationship between the independent and dependent variables.

  16. Experimental Research Designs: Types, Examples & Methods

    The pre-experimental research design is further divided into three types. One-shot Case Study Research Design. In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment which was presumed to cause change, making it a posttest study.

  17. 3.6: Research Design I- Experimental Designs

    3.6: Research Design I- Experimental Designs Expand/collapse global location 3.6: Research Design I- Experimental Designs ... the researcher can determine whether the intervention caused changes in the dependent variable. Overall, this experimental design enables the researcher to make causal inferences about the effectiveness of CBT and ...

  18. Introduction to experiment design (video)

    Replication is the strict repetition of an experimental condition so that the variability associated with the phenomenon can be estimated. It assumes that we can repeat this experiment in every detail. In formal definition "the repetition of the set of all the treatment combinations to be compared in an experiment.

  19. Experimental Design

    Experimental design is one aspect of a scientific method. A well-designed, properly conducted experiment aims to control variables in order to isolate and manipulate causal effects and thereby maximize internal validity, support causal inferences, and guarantee reliable results. Traditionally employed in the natural sciences, experimental ...

  20. A Complete Guide: The 2x3 Factorial Design

    A 2×3 factorial design is a type of experimental design that allows researchers to understand the effects of two independent variables on a single dependent variable.. In this type of design, one independent variable has two levels and the other independent variable has three levels.. For example, suppose a botanist wants to understand the effects of sunlight (low vs. medium vs. high) and ...

  21. The 3 Types Of Experimental Design (2024)

    Experimental design refers to a research methodology that allows researchers to test a hypothesis regarding the effects of an independent variable on a dependent variable. There are three types of experimental design: pre-experimental design, quasi-experimental design, and true experimental design. Contents show.

  22. Experimental Design

    Experimental Design | Types, Definition & Examples. Published on June 9, 2024 by Julia Merkus, MA.Revised on July 22, 2024. An experimental design is a systematic plan for conducting an experiment that aims to test a hypothesis or answer a research question.. It involves manipulating one or more independent variables (IVs) and measuring their effect on one or more dependent variables (DVs ...

  23. Independent vs. Dependent Variables

    Experimental independent variables can be directly manipulated by researchers. ... Example: Quasi-experimental design. You study whether gender identity affects neural responses to infant cries. Your independent variable is a subject variable, namely the gender identity of the participants. You have three groups: men, women and other.

  24. Three Principles of Experimental Design

    These are variables outside of the experimental design that you can measure. A third way is blocking. This refers to identifying related subjects and randomly assigning them to different treatments. In the training experiment, not accounting for gender could make it more difficult to estimate the effects of training. There are at least three ...

  25. 9 Types of Experiment Variables

    An experiment designed to determinate the effect of a fertilizer on plant growth has the following variables:Independent VariablesFertilizerDependent VariablesPlant height, plant weight, number of leavesExtraneous VariablesPlant type, sunlight, water, temperature, air quality, windSituational VariablesSunlight, water, temperature, air quality ...

  26. 3.7: Research Design II- Non-Experimental Designs

    Figure 3.7.1 How a potential confounding variable can affect the relationship between two variables. ... A quasi-experimental design is essentially a hybrid of experimental and non-experimental designs. It aims to establish a cause-and-effect relationship between an independent and dependent variable. However, unlike a true experimental design ...

  27. Axioms

    Experimental designs are commonly used to produce valid, defensible, and supportable conclusions. Among commonly used block designs, the class of Latin square designs is used to study factors or treatment levels expressed as Latin letters and applying two blocking factors in rows and columns to simultaneously control two sources of nuisance variability.

  28. In silico and experimental characterization of a new ...

    Table 3 Experimental conditions of BBD in natural variables and the corresponding experimental and theoretical responses. ... The statistically significant model equation terms for the four design variables were the linear coefficient β2, the quadratic coefficients β22, β33, and β44, and the regression interaction coefficients β13, β23 ...