
Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results; doing so minimizes several types of research bias, particularly sampling bias, survivorship bias, and attrition bias. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

Step 1: Define your variables

You should begin with a specific research question. We will work with two example research questions: one from health sciences on phone use and sleep, and one from ecology on temperature and soil respiration.

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

Research question | Independent variable | Dependent variable
Phone use and sleep | Minutes of phone use before sleep | Hours of sleep per night
Temperature and soil respiration | Air temperature just above the soil surface | CO2 respired from soil

Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.

Research question | Extraneous variable | How to control
Phone use and sleep | Natural variation in sleep patterns among individuals | Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group
Temperature and soil respiration | Soil moisture also affects respiration, and moisture can decrease with increasing temperature | Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Step 2: Write your hypothesis

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Research question | Null hypothesis (H₀) | Alternate hypothesis (Hₐ)
Phone use and sleep | Phone use before sleep does not correlate with the amount of sleep a person gets. | Increasing phone use before sleep leads to a decrease in sleep.
Temperature and soil respiration | Air temperature does not correlate with soil respiration. | Increased air temperature leads to increased soil respiration.
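As an illustration of what makes these hypotheses testable, here is a minimal sketch (not from the original article, with invented numbers) of how the phone-use null hypothesis could be checked once data are collected, using a simple correlation test in Python:

```python
# Hypothetical data: minutes of phone use before bed and hours of sleep that night.
from scipy import stats

minutes_phone_use = [0, 10, 20, 35, 45, 60, 75, 90, 110, 120]
hours_of_sleep = [8.1, 7.9, 7.8, 7.4, 7.3, 7.0, 6.8, 6.5, 6.4, 6.1]

# Test H0: phone use before sleep does not correlate with hours of sleep.
r, p_value = stats.pearsonr(minutes_phone_use, hours_of_sleep)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # a small p-value would lead us to reject H0
```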

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

Step 3: Design your experimental treatments

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. In the temperature and soil respiration example, you could vary air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. In the phone use example, you could treat phone use as:

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

Step 4: Assign your subjects to treatment groups

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.
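As a rough sketch of how study size and statistical power are linked (the effect size, significance level, and power target below are illustrative assumptions, not values from the text), a power analysis can estimate how many subjects you would need per group:

```python
# Estimate the per-group sample size for a two-group comparison of means.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Assumptions: medium effect size (Cohen's d = 0.5), 5% significance level,
# and a conventional 80% power target.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Subjects needed per group: {n_per_group:.0f}")  # roughly 64 per group
```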

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design .
  • A between-subjects design vs a within-subjects design .

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design , every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
Research question | Completely randomized design | Randomized block design
Phone use and sleep | Subjects are all randomly assigned a level of phone use using a random number generator. | Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
Temperature and soil respiration | Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. | Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
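To make the two approaches concrete, here is a minimal Python sketch (subject IDs and age groups are hypothetical, not from the article) contrasting completely randomized assignment with a randomized block design:

```python
import random
from collections import defaultdict
from itertools import cycle

subjects = [("s1", "18-29"), ("s2", "18-29"), ("s3", "18-29"),
            ("s4", "30-49"), ("s5", "30-49"), ("s6", "30-49")]
treatments = ["no phone use", "low phone use", "high phone use"]

# Completely randomized design: shuffle all subjects, then deal out treatment
# levels so group sizes stay balanced.
ids = [sid for sid, _ in subjects]
random.shuffle(ids)
completely_randomized = dict(zip(ids, cycle(treatments)))

# Randomized block design: group subjects by age first, then randomize
# treatments within each age block.
blocks = defaultdict(list)
for sid, age_group in subjects:
    blocks[age_group].append(sid)

randomized_block = {}
for members in blocks.values():
    random.shuffle(members)
    randomized_block.update(zip(members, treatments))

print(completely_randomized)
print(randomized_block)
```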

Sometimes randomization isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Research question | Between-subjects (independent measures) design | Within-subjects (repeated measures) design
Phone use and sleep | Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. | Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomized.
Temperature and soil respiration | Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. | Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperature) consecutively over the course of the experiment, and the order in which they receive these treatments is randomized.
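Counterbalancing in a within-subjects design can be as simple as giving every subject all treatments in an independently randomized order. The sketch below is illustrative only (subject IDs are hypothetical); a Latin square is a more systematic alternative when every order position needs to be covered evenly.

```python
import random

treatments = ["no phone use", "low phone use", "high phone use"]
subjects = ["s1", "s2", "s3", "s4"]

orders = {}
for sid in subjects:
    order = treatments[:]   # every subject receives all treatments...
    random.shuffle(order)   # ...in an independently randomized order
    orders[sid] = order

for sid, order in orders.items():
    print(sid, "->", " then ".join(order))
```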


Step 5: Measure your dependent variable

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations. To measure hours of sleep, for example, you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.
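As a sketch of that point (with simulated data and an arbitrary 7-hour cutoff, neither taken from the article): recording hours of sleep as a continuous variable supports a comparison of group means, while collapsing the same outcome into a binary category calls for a test of proportions instead.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sleep_no_phone = rng.normal(7.5, 1.0, size=30)    # hypothetical hours of sleep
sleep_high_phone = rng.normal(6.8, 1.0, size=30)

# Continuous dependent variable: compare group means with a t-test.
t_stat, p_continuous = stats.ttest_ind(sleep_no_phone, sleep_high_phone)

# Coarser, categorical dependent variable: compare proportions with chi-square.
table = [[np.sum(sleep_no_phone >= 7), np.sum(sleep_no_phone < 7)],
         [np.sum(sleep_high_phone >= 7), np.sum(sleep_high_phone < 7)]]
chi2, p_categorical, _, _ = stats.chi2_contingency(table)

print(f"t-test p = {p_continuous:.4f}, chi-square p = {p_categorical:.4f}")
```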

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Frequently asked questions about experiments

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.



Experimental Design – Types, Methods, Guide


Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
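A minimal sketch of a one-way ANOVA, using made-up data for three treatment groups rather than results from any real study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10.0, 2.0, size=20)   # e.g. control
group_b = rng.normal(11.0, 2.0, size=20)   # e.g. low dose
group_c = rng.normal(12.5, 2.0, size=20)   # e.g. high dose

# Test whether the three group means differ more than chance would suggest.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```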

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
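A minimal sketch of simple linear regression on simulated data (the variable names echo the soil-respiration example earlier; the numbers are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
temperature = rng.uniform(10, 30, size=50)                    # predictor
respiration = 0.8 * temperature + rng.normal(0, 2, size=50)   # response plus noise

# Estimate the strength and direction of the relationship.
result = stats.linregress(temperature, respiration)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")
```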

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.
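A minimal sketch of cluster analysis with k-means on made-up two-dimensional data (the features are hypothetical scores, not drawn from any study described here):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Two loose clusters of observations in a two-dimensional feature space.
cluster_one = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(25, 2))
cluster_two = rng.normal(loc=[6.0, 5.0], scale=0.5, size=(25, 2))
observations = np.vstack([cluster_one, cluster_two])

# Group similar observations together based on their feature values.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(observations)
print(labels)
```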

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture : Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology : Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering : Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education : Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing : Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research : A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question : Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment : Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment : Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, then it is accepted. If the results do not support the hypothesis, then it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn (see the sketch after this list).
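A minimal end-to-end sketch of those steps with simulated data (group sizes, outcome scores, and the 0.05 threshold are illustrative assumptions, not prescriptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
participants = np.arange(40)

# Randomly assign participants to treatment and control groups.
shuffled = rng.permutation(participants)
treatment_ids, control_ids = shuffled[:20], shuffled[20:]

# Conduct the experiment: in a real study these would be measurements of the
# dependent variable; here the outcomes are simulated.
treatment_scores = rng.normal(75, 10, size=treatment_ids.size)
control_scores = rng.normal(70, 10, size=control_ids.size)

# Analyze the data with an appropriate statistical test.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)

# Draw conclusions from the analysis.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Reject H0" if p_value < 0.05 else "Fail to reject H0")
```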

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication : Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision : Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability : If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality : Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias : Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time : Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias : Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility : Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.


Experimental Research: What it is + Types of designs


Research conducted under scientifically controlled conditions uses experimental methods. The success of an experimental study hinges on confirming that changes in the dependent variable result solely from the manipulation of the independent variable. The research should establish a clear cause-and-effect relationship.

What is Experimental Research?

Experimental research is a study conducted with a scientific approach using two sets of variables. The first set acts as a constant against which you measure the changes in the second set. Experimental research typically relies on quantitative research methods.

If you don’t have enough data to support your decisions, you must first determine the facts. This research gathers the data necessary to help you make better decisions.

You can conduct experimental research in the following situations:

  • Time is a vital factor in establishing a relationship between cause and effect.
  • The behavior linking cause and effect is invariable.
  • You wish to understand the importance of the cause-and-effect relationship.

Experimental Research Design Types

The classic experimental design definition is: “The methods used to collect data in experimental studies.”

There are three primary types of experimental design:

  • Pre-experimental research design
  • True experimental research design
  • Quasi-experimental research design

The way you classify research subjects based on conditions or groups determines the type of research design  you should use.

1. Pre-Experimental Design

A group, or various groups, are kept under observation after the factors presumed to cause an effect have been applied. You’ll conduct this research to understand whether further investigation is necessary for these particular groups.

You can break down pre-experimental research further into three types:

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Design

True experimental design relies on statistical analysis to prove or disprove a hypothesis, making it the most accurate form of experimental research. Of the types of experimental design, only true design can establish a cause-and-effect relationship within a group. In a true experiment, three factors need to be satisfied:

  • There is a Control Group, which won’t be subject to changes, and an Experimental Group, which will experience the changed variables.
  • A variable that can be manipulated by the researcher
  • Random distribution

This experimental research method commonly occurs in the physical sciences.

3. Quasi-Experimental Design

The word “Quasi” indicates similarity. A quasi-experimental design is similar to an experimental one, but it is not the same. The difference between the two is the assignment of a control group. In this research, an independent variable is manipulated, but the participants of a group are not randomly assigned. Quasi-research is used in field settings where random assignment is either irrelevant or not required.

Importance of Experimental Design

Experimental research is a powerful tool for understanding cause-and-effect relationships. It allows us to manipulate variables and observe the effects, which is crucial for understanding how different factors influence the outcome of a study.

But the importance of experimental research goes beyond that. It’s a critical method for many scientific and academic studies. It allows us to test theories, develop new products, and make groundbreaking discoveries.

For example, this research is essential for developing new drugs and medical treatments. Researchers can understand how a new drug works by manipulating dosage and administration variables and identifying potential side effects.

Similarly, experimental research is used in the field of psychology to test theories and understand human behavior. By manipulating variables such as stimuli, researchers can gain insights into how the brain works and identify new treatment options for mental health disorders.

It is also widely used in the field of education. It allows educators to test new teaching methods and identify what works best. By manipulating variables such as class size, teaching style, and curriculum, researchers can understand how students learn and identify new ways to improve educational outcomes.

In addition, experimental research is a powerful tool for businesses and organizations. By manipulating variables such as marketing strategies, product design, and customer service, companies can understand what works best and identify new opportunities for growth.

Advantages of Experimental Research

When talking about this research, we can think of human life. Babies do their own rudimentary experiments (such as putting objects in their mouths) to learn about the world around them, while older children and teens do experiments at school to learn more about science.

Throughout history, scientists have used experiments to test whether their hypotheses were correct. For example, Galileo Galilei and Antoine Lavoisier conducted various experiments to discover key concepts in physics and chemistry. The same is true of modern experts, who use this scientific method to see whether new drugs are effective, discover treatments for diseases, and create new electronic devices (among others).

It’s vital to test new ideas or theories. Why put time, effort, and funding into something that may not work?

This research allows you to test your idea in a controlled environment before marketing. It also provides the best method to test your theory thanks to the following advantages:

Advantages of experimental research

  • Researchers have a stronger hold over variables to obtain desired results.
  • The subject or industry does not impact the effectiveness of experimental research. Any industry can implement it for research purposes.
  • The results are specific.
  • After analyzing the results, you can apply your findings to similar ideas or situations.
  • You can identify the cause and effect of a hypothesis. Researchers can further analyze this relationship to determine more in-depth ideas.
  • Experimental research makes an ideal starting point. The data you collect is a foundation for building more ideas and conducting more action research .

Whether you want to know how the public will react to a new product or whether a certain food increases the chance of disease, experimental research is the best place to start.


8.1 Experimental design: What is it and when should it be used?

Learning Objectives

  • Define experiment
  • Identify the core features of true experimental designs
  • Describe the difference between an experimental group and a control group
  • Identify and describe the various types of true experimental designs

Experiments are an excellent data collection strategy for social workers wishing to observe the effects of a clinical intervention or social welfare program. Understanding what experiments are and how they are conducted is useful for all social scientists, whether they actually plan to use this methodology or simply aim to understand findings from experimental studies. An experiment is a method of data collection designed to test hypotheses under controlled conditions. In social scientific research, the term experiment has a precise meaning and should not be used to describe all research methodologies.


Experiments have a long and important history in social science. Behaviorists such as John Watson, B. F. Skinner, Ivan Pavlov, and Albert Bandura used experimental design to demonstrate the various types of conditioning. Using strictly controlled environments, behaviorists were able to isolate a single stimulus as the cause of measurable differences in behavior or physiological responses. The foundations of social learning theory and behavior modification are found in experimental research projects. Moreover, behaviorist experiments brought psychology and social science away from the abstract world of Freudian analysis and towards empirical inquiry, grounded in real-world observations and objectively-defined variables. Experiments are used at all levels of social work inquiry, including agency-based experiments that test therapeutic interventions and policy experiments that test new programs.

Several kinds of experimental designs exist. In general, designs considered to be true experiments contain three basic key features:

  • random assignment of participants into experimental and control groups
  • a “treatment” (or intervention) provided to the experimental group
  • measurement of the effects of the treatment in a post-test administered to both groups

Some true experiments are more complex.  Their designs can also include a pre-test and can have more than two groups, but these are the minimum requirements for a design to be a true experiment.

Experimental and control groups

In a true experiment, the effect of an intervention is tested by comparing two groups: one that is exposed to the intervention (the experimental group , also known as the treatment group) and another that does not receive the intervention (the control group ). Importantly, participants in a true experiment need to be randomly assigned to either the control or experimental groups. Random assignment uses a random number generator or some other random process to assign people into experimental and control groups. Random assignment is important in experimental research because it helps to ensure that the experimental group and control group are comparable and that any differences between the experimental and control groups are due to random chance. We will address more of the logic behind random assignment in the next section.

Treatment or intervention

In an experiment, the independent variable is receipt of the intervention being tested—for example, a therapeutic technique, prevention program, or access to some service or support. It is less common in social work research, but social science research may also use a stimulus, rather than an intervention, as the independent variable. For example, an electric shock or a reading about death might be used as a stimulus to provoke a response.

In some cases, it may be immoral to withhold treatment completely from a control group within an experiment. If you recruited two groups of people with severe addiction and only provided treatment to one group, the other group would likely suffer. For these cases, researchers use a control group that receives “treatment as usual.” Experimenters must clearly define what treatment as usual means. For example, a standard treatment in substance abuse recovery is attending Alcoholics Anonymous or Narcotics Anonymous meetings. A substance abuse researcher conducting an experiment may use twelve-step programs in their control group and use their experimental intervention in the experimental group. The results would show whether the experimental intervention worked better than normal treatment, which is useful information.

The dependent variable is usually the intended effect the researcher wants the intervention to have. If the researcher is testing a new therapy for individuals with binge eating disorder, their dependent variable may be the number of binge eating episodes a participant reports. The researcher likely expects her intervention to decrease the number of binge eating episodes reported by participants. Thus, she must, at a minimum, measure the number of episodes that occur after the intervention, which is the post-test .  In a classic experimental design, participants are also given a pretest to measure the dependent variable before the experimental treatment begins.

Types of experimental design

Let’s put these concepts in chronological order so we can better understand how an experiment runs from start to finish. Once you’ve collected your sample, you’ll need to randomly assign your participants to the experimental group and control group. In a common type of experimental design, you will then give both groups your pretest, which measures your dependent variable, to see what your participants are like before you start your intervention. Next, you will provide your intervention, or independent variable, to your experimental group, but not to your control group. Many interventions last a few weeks or months to complete, particularly therapeutic treatments. Finally, you will administer your post-test to both groups to observe any changes in your dependent variable. What we’ve just described is known as the classical experimental design and is the simplest type of true experimental design. All of the designs we review in this section are variations on this approach. Figure 8.1 visually represents these steps.

Figure 8.1 Steps in classic experimental design: sampling, then assignment, then pretest, then intervention, then posttest

An interesting example of experimental research can be found in Shannon K. McCoy and Brenda Major’s (2003) study of people’s perceptions of prejudice. In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Participants in the experimental group were then asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Clearly, these were not meant to be interventions or treatments to help depression, but were stimuli designed to elicit changes in people’s depression levels. Upon measuring depression scores during the post-test period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their same racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.

In addition to classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, 2010; Campbell & Stanley, 1963).  The posttest-only control group design is almost the same as classic experimental design, except it does not use a pretest. Researchers who use posttest-only designs want to eliminate testing effects , in which participants’ scores on a measure change because they have already been exposed to it. If you took multiple SAT or ACT practice exams before you took the real one you sent to colleges, you’ve taken advantage of testing effects to get a better score. Considering the previous example on racism and depression, participants who are given a pretest about depression before being exposed to the stimulus would likely assume that the intervention is designed to address depression. That knowledge could cause them to answer differently on the post-test than they otherwise would. In theory, as long as the control and experimental groups have been determined randomly and are therefore comparable, no pretest is needed. However, most researchers prefer to use pretests in case randomization did not result in equivalent groups and to help assess change over time within both the experimental and control groups.

Researchers wishing to account for testing effects but also gather pretest data can use a Solomon four-group design. In the Solomon four-group design , the researcher uses four groups. Two groups are treated as they would be in a classic experiment—pretest, experimental group intervention, and post-test. The other two groups do not receive the pretest, though one receives the intervention. All groups are given the post-test. Table 8.1 illustrates the features of each of the four groups in the Solomon four-group design. By having one set of experimental and control groups that complete the pretest (Groups 1 and 2) and another set that does not complete the pretest (Groups 3 and 4), researchers using the Solomon four-group design can account for testing effects in their analysis.

Table 8.1 Solomon four-group design
Group | Pretest | Intervention | Posttest
Group 1 | X | X | X
Group 2 | X | - | X
Group 3 | - | X | X
Group 4 | - | - | X

Solomon four-group designs are challenging to implement in the real world because they are time- and resource-intensive. Researchers must recruit enough participants to create four groups and implement interventions in two of them.

Overall, true experimental designs are sometimes difficult to implement in a real-world practice environment. It may be impossible to withhold treatment from a control group or randomly assign participants in a study. In these cases, pre-experimental and quasi-experimental designs, which we will discuss in the next section, can be used. However, the differences in rigor from true experimental designs leave their conclusions more open to critique.

Experimental design in macro-level research

You can imagine that social work researchers may be limited in their ability to use random assignment when examining the effects of governmental policy on individuals. For example, it is unlikely that a researcher could randomly assign some states to decriminalize recreational marijuana and other states not to in order to assess the effects of the policy change. There are, however, important examples of policy experiments that use random assignment, including the Oregon Medicaid experiment. Because the wait list for Medicaid in Oregon was so long, state officials conducted a lottery to determine who from the wait list would receive Medicaid (Baicker et al., 2013). Researchers used the lottery as a natural experiment that included random assignment: people selected to receive Medicaid were the experimental group, and those remaining on the wait list were the control group. There are some practical complications with macro-level experiments, just as with other experiments. For example, the ethical concern with using people on a wait list as a control group exists in macro-level research just as it does in micro-level research.

Key Takeaways

  • True experimental designs require random assignment.
  • Control groups do not receive an intervention, and experimental groups receive an intervention.
  • The basic components of a true experiment include a pretest, posttest, control group, and experimental group.
  • Testing effects may cause researchers to use variations on the classic experimental design.
Glossary

  • Classic experimental design- uses random assignment, an experimental and control group, as well as pre- and posttesting
  • Control group- the group in an experiment that does not receive the intervention
  • Experiment- a method of data collection designed to test hypotheses under controlled conditions
  • Experimental group- the group in an experiment that receives the intervention
  • Posttest- a measurement taken after the intervention
  • Posttest-only control group design- a type of experimental design that uses random assignment, and an experimental and control group, but does not use a pretest
  • Pretest- a measurement taken prior to the intervention
  • Random assignment-using a random process to assign people into experimental and control groups
  • Solomon four-group design- uses random assignment, two experimental and two control groups, pretests for half of the groups, and posttests for all
  • Testing effects- when a participant’s scores on a measure change because they have already been exposed to it
  • True experiments- a group of experimental designs that contain independent and dependent variables, pretesting and post testing, and experimental and control groups


Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Experimental Research Designs: Types, Examples & Methods


Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research is a classical scientific experiment, similar to those performed in high school science classes.

Imagine taking 2 samples of the same plant and exposing one of them to sunlight, while the other is kept away from sunlight. Let the plant exposed to sunlight be called sample A, while the latter is called sample B.

If, at the end of the study, we find that sample A has grown while sample B has died, even though both were watered regularly and otherwise given the same treatment, we can conclude that sunlight aids the growth of similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research, where one or more independent variables are manipulated and applied to one or more dependent variables to measure their effect on the latter. The effect of the independent variables on the dependent variables is usually observed and recorded over some time, to aid researchers in drawing a reasonable conclusion regarding the relationship between these 2 variable types.

The experimental research method is widely used in physical and social sciences, psychology, and education. It is based on the comparison between two or more groups with a straightforward logic, which may, however, be difficult to execute.

Mostly associated with laboratory test procedures, experimental research designs involve collecting quantitative data and performing statistical analysis on it during the research, which makes experimental research an example of a quantitative research method .

What are The Types of Experimental Research Design?

The types of experimental research design are determined by the way the researcher assigns subjects to different conditions and groups. There are 3 types: pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In pre-experimental research design, either a single group or various dependent groups are observed for the effect of applying an independent variable that is presumed to cause change. It is the simplest form of experimental research design and is conducted with no control group.

Although very practical, pre-experimental research falls short of several true-experimental criteria. The pre-experimental research design is further divided into three types:

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment which was presumed to cause change, making it a posttest study.

  • One-group Pretest-posttest Research Design: 

This research design combines pretest and posttest studies by testing a single group both before and after the treatment is administered: the pretest at the beginning of treatment and the posttest at the end.

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

The word “quasi” means partial, half, or pseudo. Quasi-experimental research therefore resembles true experimental research but is not the same. In quasi-experiments, the participants are not randomly assigned, so these designs are used in settings where randomization is difficult or impossible.

 This is very common in educational research, where administrators are unwilling to allow the random selection of students for experimental samples.

Some examples of quasi-experimental research designs include time series, the nonequivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to support or reject a hypothesis. It is the most accurate type of experimental design and may be carried out with or without a pretest on at least 2 groups of randomly assigned subjects.

The true experimental research design must contain a control group, a variable that can be manipulated by the researcher, and random assignment of subjects. The classification of true experimental designs includes:

  • The posttest-only Control Group Design: In this design, subjects are randomly selected and assigned to the 2 groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between these groups.
  • The pretest-posttest Control Group Design: For this control group design, subjects are randomly assigned to the 2 groups, both are pretested, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • Solomon four-group Design: This is the combination of the posttest-only and the pretest-posttest control group designs. In this case, the randomly selected subjects are placed into 4 groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.

Examples of Experimental Research

Experimental research examples are different, depending on the type of experimental research design that is being considered. The most basic example of experimental research is laboratory experiments, which may differ in nature depending on the subject of research.

Administering Exams After The End of Semester

During the semester, students in a class are lectured on particular courses and an exam is administered at the end of the semester. In this case, the students are the research subjects, their exam performance is the dependent variable, and the lectures are the independent variable (the treatment) applied to the subjects.

Only one group of carefully selected subjects is considered in this research, making it an example of a pre-experimental research design.

Because the test is carried out only at the end of the semester, and not at the beginning, it is specifically a one-shot case study.

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subjects are the employees, while the treatment is the training conducted. Because each employee is tested before and after the training without a separate untrained comparison group, this is an example of a one-group pretest-posttest research design.

Evaluation of Teaching Method

Let us consider an academic institution that wants to evaluate the teaching methods of 2 teachers to determine which is best. Imagine a case where the students assigned to each teacher are deliberately selected, perhaps because of personal requests by parents or because of the students' behaviour and ability.

This is a nonequivalent group design example because the groups are not equivalent. We can draw a conclusion about the effectiveness of each teacher's teaching method only after a posttest has been carried out.

However, the result may be influenced by factors such as a student's natural ability. For example, a very smart student will grasp the material more easily than his or her peers irrespective of the teaching method.

What are the Characteristics of Experimental Research?  

Experimental research involves dependent, independent, and extraneous variables. The dependent variables are the outcomes that are measured on the research subjects.

The independent variables are the experimental treatments applied to the subjects in order to produce a change in those outcomes. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where control can be exerted on the extraneous variables, thereby eliminating them.

Other experiments are carried out in a less controllable setting. The choice of setting used in research depends on the nature of the experiment being carried out.

  • Multivariable

Experimental research may include multiple independent variables, e.g. time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design is mainly used in the physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions on a subject matter.

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to develop the proper treatment for diseases. In most cases, rather than directly using patients as the research subjects, researchers take a sample of the bacteria from the patient’s body and treat it with the developed antibacterial agent.

The changes observed during this period are recorded and evaluated to determine the treatment’s effectiveness. This process can be carried out using different experimental research methods.

  • Education: Aside from science subjects like Chemistry and Physics, which involve teaching students how to perform experimental research, it can also be used to improve the standard of an academic institution. This includes testing students’ knowledge on different topics, developing better teaching methods, and implementing other programs that aid student learning.
  • Human Behavior: Social scientists are the ones who mostly use experimental research to test human behaviour. For example, consider 2 people randomly chosen as the subjects of social interaction research, where one person is placed in a room without human interaction for 1 year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behaviour at the end of the experiment.

  • UI/UX: During the product development phase, one of the major aims of the product team is to create a great user experience with the product. Therefore, before launching the final product design, potential users are brought in to interact with the product.

For example, when it is difficult to decide how to position a button or feature on the app interface, a random sample of product testers is allowed to test the 2 versions, and the effect of the button positioning on user interaction is recorded.

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error due to its dependence on variable control, which may not be properly implemented. Such errors could undermine the validity of the experiment and the research being conducted.
  • Exerting control over extraneous variables may create unrealistic situations. Eliminating real-life variables can result in inaccurate conclusions. It may also tempt researchers to control the variables to suit their personal preferences.
  • It is a time-consuming process. A great deal of time is spent measuring dependent variables and waiting for the effects of manipulating the independent variables to manifest.
  • It is expensive.
  • It can be risky and may have ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient’s death or a deteriorating health condition.
  • Experimental research results are not descriptive.
  • Response bias can also be introduced by the research subjects.
  • Human responses in experimental research can be difficult to measure.

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, the subjects placed in 2 different environments are observed throughout the research. No matter what unusual behavior the subjects exhibit during this period, their conditions are not changed.

This may be a very risky thing to do in medical cases because it may lead to death or worse medical conditions.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operations research for learning purposes and sometimes as a tool to estimate the possible outcomes of real research. Some common simulation software packages are Simulink, MATLAB, and Simul8.

Not all kinds of experimental research can be carried out using simulation as a data collection tool . It is very impractical for a lot of laboratory-based research that involves chemical processes.
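The article mentions dedicated packages such as Simulink, MATLAB, and Simul8, but the basic idea of simulation as a data-gathering tool can also be sketched with general-purpose code. The following hypothetical Monte Carlo sketch estimates how often a two-group experiment with an assumed effect size would detect that effect (its statistical power); every number in it is an assumption chosen purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulated_trial(n_per_group=30, effect=0.5):
    """Simulate one two-group experiment with an assumed effect size."""
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treated = rng.normal(loc=effect, scale=1.0, size=n_per_group)
    return stats.ttest_ind(treated, control).pvalue

# Replicate the simulated experiment many times and count how often the
# assumed effect is detected at the 5% significance level (statistical power).
p_values = np.array([simulated_trial() for _ in range(2000)])
print("estimated power:", (p_values < 0.05).mean())
```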

3. Surveys

A survey is a tool used to gather relevant data about the characteristics of a population and is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subjects.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using surveys. It has relevant features that will aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the environment of the research, including the predictor variable which can be changed. On the other hand, non-experimental research cannot be controlled or manipulated by the researcher at will.

This is because it takes place in a real-life setting, where extraneous variables cannot be eliminated. Therefore, it is more difficult to draw firm conclusions from non-experimental studies, even though they are much more flexible and allow for a greater range of study fields.

2. The relationship between cause and effect cannot be established in non-experimental research, while it can be established in experimental research. This is because many extraneous variables also influence the changes in the research subject, making it difficult to point to a particular variable as the cause of a particular change.

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, but the same may not be said about experimental research.

Experimental Research vs. Alternatives and When to Use Them

1. Experimental Research vs Causal-Comparative Research

Experimental research enables you to control variables and identify how the independent variable affects the dependent variable. Causal-comparative research determines the cause-and-effect relationship between variables by comparing already existing groups that are affected differently by the independent variable.

For example, consider a study of how K-12 education affects child and teenage development. An experimental approach would split the children into groups, some of which would receive formal K-12 education while others would not. This is not ethically acceptable because every child has the right to education, so instead we would compare already existing groups of children who are receiving formal education with those who, due to their circumstances, cannot.

Pros and Cons of Experimental vs Causal-Comparative Research

  • Causal-Comparative:   Strengths:  More realistic than experiments, can be conducted in real-world settings.  Weaknesses:  Establishing causality can be weaker due to the lack of manipulation.

2. Experimental Research vs Correlational Research

When experimenting, you are trying to establish a cause-and-effect relationship between different variables. For example, to establish the effect of heat on water, you keep changing the temperature (independent variable) and observe how it affects the water (dependent variable).

For correlational research, you are not necessarily interested in the why or the cause-and-effect relationship between the variables; you are focusing on the relationship itself. Using the same water and temperature example, you are only interested in the fact that they change together, not in which variable (or other variables) causes the change.

Pros and Cons of Experimental vs Correlational Research

3. Experimental Research vs Descriptive Research

With experimental research, you alter the independent variable to see how it affects the dependent variable, but with descriptive research you are simply studying the characteristics of the variable you are studying.

So, in an experiment to see how blown glass reacts to temperature, experimental research would keep altering the temperature to varying levels of high and low to see how it affects the dependent variable (glass). But descriptive research would investigate the glass properties.

Pros and Cons of Experimental vs Descriptive Research

4. Experimental Research vs Action Research

Experimental research tests for causal relationships by focusing on one independent variable vs the dependent variable and keeps other variables constant. So, you are testing hypotheses and using the information from the research to contribute to knowledge.

However, with action research, you are using a real-world setting which means you are not controlling variables. You are also performing the research to solve actual problems and improve already established practices.

For example, suppose you are testing how long commutes affect workers’ productivity. With experimental research, you would vary the length of the commute to see how the time affects work. But with action research, you would account for other factors such as weather, commute route, nutrition, etc. Also, experimental research helps you understand the relationship between commute time and productivity, while action research helps you look for ways to improve productivity.

Pros and Cons of Experimental vs Action Research

Conclusion

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments—a component of experimental research design.

In this research design, subjects are randomly assigned to different treatments (i.e. independent variables manipulated by the researcher) and the resulting changes in the dependent variables are observed in order to draw conclusions. One unique strength of experimental research is its ability to control the effect of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 


10 Experimental research

Experimental research—often considered to be the ‘gold standard’ in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality) due to its ability to link cause and effect through treatment manipulation, while controlling for the spurious effect of extraneous variables.

Experimental research is best suited for explanatory research—rather than for descriptive or exploratory research—where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments , conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments are conducted in field settings such as in a real organisation, and are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimulus called a treatment (the treatment group ) while other subjects are not given such a stimulus (the control group ). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case, there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a certain medical condition like dementia, if a sample of dementia patients is randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (control group), then the first two groups are experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine if the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures , while those conducted after the treatment are posttest measures .

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.
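As a concrete illustration of the distinction, the minimal sketch below performs random assignment of an already-recruited sample into treatment and control groups; random selection would instead be the earlier step of drawing that sample from the population. Subject identifiers and group sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# An already-recruited sample of 20 hypothetical subjects
subjects = [f"subject_{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle the sample and split it in half, so the
# treatment and control groups are equivalent in expectation.
shuffled = rng.permutation(subjects)
treatment_group = list(shuffled[:10])
control_group = list(shuffled[10:])

print("treatment:", treatment_group)
print("control:  ", control_group)
```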

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam.

Not conducting a pretest can help avoid this threat.

Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat —also called a regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.
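Regression to the mean is easy to demonstrate with simulated data: when two measurements are imperfectly correlated, subjects selected for extreme scores on the first measurement tend to score closer to the mean on the second, even when no treatment is applied at all. A hypothetical sketch, with all distributions and sample sizes invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# True ability plus independent measurement noise on each occasion;
# no treatment is applied between the two measurements.
true_ability = rng.normal(50, 10, size=5000)
pretest = true_ability + rng.normal(0, 8, size=5000)
posttest = true_ability + rng.normal(0, 8, size=5000)

# Select the subjects with extreme (top 10%) pretest scores.
high_scorers = pretest > np.percentile(pretest, 90)
print("mean pretest of high scorers: ", pretest[high_scorers].mean())
print("mean posttest of high scorers:", posttest[high_scorers].mean())
# The posttest mean drifts back toward 50 even though nothing changed.
```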

Two-group experimental designs


Pretest-posttest control group design . In this design, subjects are randomly assigned to treatment and control groups, subjected to an initial (pretest) measurement of the dependent variables of interest, the treatment group is administered a treatment (representing the independent variable of interest), and the dependent variables measured again (posttest). The notation of this design is shown in Figure 10.1.

[Figure 10.1: Pretest-posttest control group design]

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.
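A minimal sketch of that analysis, using invented scores: one straightforward variant compares pretest-to-posttest gain scores across the two randomly assigned groups with a one-way ANOVA (with only two groups, this is equivalent to an independent-samples t-test).

```python
import numpy as np
from scipy import stats

# Hypothetical pretest and posttest scores for randomly assigned groups
treat_pre  = np.array([52, 48, 50, 55, 47, 53])
treat_post = np.array([61, 58, 60, 66, 57, 64])
ctrl_pre   = np.array([51, 49, 50, 54, 48, 52])
ctrl_post  = np.array([53, 50, 52, 56, 49, 54])

# Compare pretest-to-posttest gains between the two groups.
gain_treat = treat_post - treat_pre
gain_ctrl = ctrl_post - ctrl_pre
print(stats.f_oneway(gain_treat, gain_ctrl))
```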

Posttest-only control group design. This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.

[Figure 10.2: Posttest-only control group design]

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.

Covariance design. Sometimes, measures of the dependent variable may be influenced by extraneous variables called covariates, which are not of central interest in the study but should nevertheless be controlled. In this design, a covariate is measured before the treatment is administered, and both groups are post-tested.

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups, as in the posttest-only design: \(E = (O_{1} - O_{2})\).

Due to the presence of covariates, the right statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to the controlling of covariates. Covariance designs can also be extended to the pretest-posttest control group design.
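A minimal ANCOVA sketch under these assumptions, with hypothetical data: the covariate measured before treatment enters the model as a continuous predictor alongside the group factor, so the group effect is estimated after adjusting for it.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical covariate (measured before treatment) and posttest scores
df = pd.DataFrame({
    "group":     ["treatment"] * 6 + ["control"] * 6,
    "covariate": [52, 48, 50, 55, 47, 53, 51, 49, 50, 54, 48, 52],
    "posttest":  [61, 58, 60, 66, 57, 64, 53, 50, 52, 56, 49, 54],
})

# ANCOVA: the C(group) coefficient estimates the treatment effect after
# statistically adjusting for the covariate.
model = smf.ols("posttest ~ C(group) + covariate", data=df).fit()
print(model.summary())
```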

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need four or higher-group designs. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor , and each subdivision of a factor is called a level . Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

For example, a 2 \(\times\) 2 factorial design might cross two types of instruction (say, traditional versus online delivery) with two levels of instructional time (one and a half hours/week versus three hours/week), producing four treatment groups.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for three hours/week of instructional time than for one and a half hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that the presence of interaction effects dominates and makes main effects irrelevant, and it is not meaningful to interpret main effects if interaction effects are significant.
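Continuing the instructional-type and instructional-time example, the sketch below shows how main and interaction effects are commonly tested with a two-way ANOVA. The data, factor levels, and column names are all hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 2 x 2 factorial data: instructional type x weekly hours
df = pd.DataFrame({
    "itype": ["traditional"] * 8 + ["online"] * 8,
    "hours": ([1.5] * 4 + [3.0] * 4) * 2,
    "score": [70, 72, 68, 71,  75, 78, 76, 74,
              69, 71, 70, 68,  83, 85, 84, 86],
})

# Two-way ANOVA: the C(itype) and C(hours) rows are the main effects,
# and C(itype):C(hours) is the interaction effect.
model = smf.ols("score ~ C(itype) * C(hours)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```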

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised block design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks ) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

[Figure 10.5: Randomised blocks design]
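A minimal sketch of the within-block random assignment this design calls for, using hypothetical block names and subject identifiers: each relatively homogeneous block is randomised into its own treatment and control groups, so differences between blocks do not add noise to the comparison.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two hypothetical homogeneous blocks of eight subjects each
blocks = {
    "students":      [f"student_{i}" for i in range(1, 9)],
    "professionals": [f"professional_{i}" for i in range(1, 9)],
}

# Randomise within each block, so every block contributes its own
# treatment and control groups to the replicated experiment.
assignment = {}
for block_name, members in blocks.items():
    shuffled = rng.permutation(members)
    half = len(shuffled) // 2
    assignment[block_name] = {
        "treatment": list(shuffled[:half]),
        "control": list(shuffled[half:]),
    }

print(assignment)
```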

Solomon four-group design . In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs, but not in posttest-only designs. The design notation is shown in Figure 10.6.

[Figure 10.6: Solomon four-group design]

Switched replication design . This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

[Figure 10.7: Switched replication design]

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lacking one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having a better teacher in a previous semester, which introduces the possibility of selection bias. Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing (the treatment and control groups responding differently to the pretest), and selection-mortality (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

The most common quasi-experimental design is the nonequivalent groups design (NEGD), which looks like a pretest-posttest control group design except that intact, non-randomly assigned groups serve as the treatment and control groups.

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design. This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol and those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.

[Figure: RD design]

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
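A hedged sketch of the basic RD logic on simulated data: assignment is determined entirely by a cut-off on the preprogram measure, and the treatment effect appears as a jump (discontinuity) in the outcome at that cut-off. The cut-off value, effect size, and noise level below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

pre = rng.uniform(0, 100, size=500)   # preprogram measure
cutoff = 40
treated = pre < cutoff                # e.g. low scorers receive the remedial program
true_effect = 8.0                     # assumed treatment effect
post = 0.6 * pre + true_effect * treated + rng.normal(0, 5, size=500)

# Fit a separate line on each side of the cut-off; the gap between their
# predictions at the cut-off estimates the treatment effect (discontinuity).
left = np.polyfit(pre[treated], post[treated], 1)
right = np.polyfit(pre[~treated], post[~treated], 1)
jump = np.polyval(left, cutoff) - np.polyval(right, cutoff)
print("estimated discontinuity at the cut-off:", round(jump, 2))
```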

Proxy pretest design . This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

[Figure 10.11: Proxy pretest design]

Separate pretest-posttest samples design . This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the changes in any specific customer’s satisfaction score before and after the implementation, but you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data is not available from the same subjects.

[Figure 10.12: Separate pretest-posttest samples design]

An interesting variation of the nonequivalent dependent variable (NEDV) design is a pattern-matching NEDV design, which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique—based on the degree of correspondence between theoretical and observed patterns—is a powerful way of alleviating internal validity concerns in the original NEDV design.

[Figure: NEDV design]

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimulus across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and asked to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’etre of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to check for the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Enago Academy

Experimental Research Design — 6 mistakes you should never make!


Since their school days, students have performed scientific experiments whose results illustrate and test the laws and theorems of science. These experiments are laid on a strong foundation of experimental research designs.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables acts as a constant, used to measure the differences in the second set. Experimental research is a prime example of a quantitative research method .

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

Choosing a quality research design forms the foundation on which to build a research study that yields publishable, significant results. Moreover, an effective research design helps establish quality decision-making procedures, structures the research to lead to easier data analysis, and addresses the main research question. Therefore, it is essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A pre-experimental research design is used when a group, or multiple groups, are observed after a factor presumed to cause change has been applied. The pre-experimental design helps researchers understand whether further investigation of the groups under observation is necessary.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution of the variables

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word “Quasi” means similarity. A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not required.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject does not impact the effectiveness of experimental research. Anyone can implement it for research purposes.
  • The results are specific.
  • Post results analysis, research findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis can be logically tested. If your research design is not grounded in basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are one of the most trusted scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis could affect the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear and to do that, you must set the framework for the development of research questions that address the core problems.

5. Research Limitations

Every study has some type of limitations . You should anticipate and incorporate those limitations into your conclusion, as well as the basic research design. Include a statement in your manuscript about any perceived limitations, and how you considered them while designing your experiment and drawing the conclusion.

6. Ethical Implications

The most important yet less talked about topic is the ethical issue. Your research design must include ways to minimize any risk for your participants and also address the research problem or question at hand. If you cannot manage the ethical norms along with your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
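A hypothetical sketch of that design in code: plant samples are randomly assigned to the sunlight and dark conditions, and a measured outcome is then compared between the two groups. All identifiers and growth values are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Randomly assign 20 hypothetical plant samples to the two conditions.
plant_ids = rng.permutation(np.arange(20))
sunlight_group, dark_group = plant_ids[:10], plant_ids[10:]
print("sunlight group:", sunlight_group)
print("dark group:    ", dark_group)

# Hypothetical growth measurements (cm) recorded after the experiment,
# with water, soil, and nutrients held constant for every sample.
growth_sunlight = rng.normal(12.0, 1.5, size=10)
growth_dark = rng.normal(4.0, 1.5, size=10)

# Compare the two groups' outcomes.
print(stats.ttest_ind(growth_sunlight, growth_dark))
```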

Experimental research is often the final stage of the research process and is considered to provide conclusive and specific results. But it is not suited to every research question. It requires substantial resources, time, and money and is not easy to conduct unless a solid research foundation has been built. Yet it is widely used in research institutes and commercial industries because of the conclusive results its scientific approach can provide.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also supports measuring the cause-effect relationship in the particular group of interest.

Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

There are 3 types of experimental research designs. These are pre-experimental research design, true experimental research design, and quasi-experimental research design.

The differences between an experimental and a quasi-experimental design are: 1. The assignment of the control group in quasi-experimental research is non-random, unlike in a true experimental design, where it is randomly assigned. 2. An experimental research design always has a control group; on the other hand, a control group may not always be present in quasi-experimental research.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.





A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans . Revised on 5 December 2022.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Null hypothesis (H0) and alternate hypothesis (Ha)
Phone use and sleep. H0: Phone use before sleep does not correlate with the amount of sleep a person gets. Ha: Increasing phone use before sleep leads to a decrease in sleep.
Temperature and soil respiration. H0: Air temperature does not correlate with soil respiration. Ha: Increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

For example, in the temperature and soil respiration experiment, you could increase the air temperature:
  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

For example, in the phone use and sleep experiment, you could treat phone use before sleep as:
  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.
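To make the link between study size and statistical power concrete, here is a short sketch of a sample-size calculation using the statsmodels package; the effect size, alpha, and power values are illustrative assumptions, not recommendations from this guide.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size needed to detect a medium effect
# (Cohen's d = 0.5) with 80% power at alpha = 0.05 in a two-group comparison.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   ratio=1.0, alternative='two-sided')
print(f"About {n_per_group:.0f} subjects per group are needed")  # roughly 64
```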

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design .
  • A between-subjects design vs a within-subjects design .

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design , every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
Completely randomised design vs randomised block design
Phone use and sleep. Completely randomised: subjects are all randomly assigned a level of phone use using a random number generator. Randomised block: subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
Temperature and soil respiration. Completely randomised: warming treatments are assigned to soil plots at random, using a number generator to generate map coordinates within the study area. Randomised block: soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
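The sketch below illustrates the two assignment strategies just described, using only Python's standard library; the subject IDs, the age blocking variable, and the group sizes are hypothetical.

```python
import random

subjects = [f"subject_{i:02d}" for i in range(1, 13)]
treatments = ["no_phone", "low_phone", "high_phone"]
random.seed(7)

# Completely randomised design: shuffle all subjects, then deal them
# round-robin into the treatment groups.
shuffled = subjects[:]
random.shuffle(shuffled)
completely_randomised = {t: shuffled[i::len(treatments)]
                         for i, t in enumerate(treatments)}

# Randomised block design: group (block) subjects by a shared characteristic
# first, then randomise to treatments within each block.
age_block = {s: ("younger" if i < 6 else "older") for i, s in enumerate(subjects)}
randomised_block = {}
for block in ("younger", "older"):
    members = [s for s in subjects if age_block[s] == block]
    random.shuffle(members)
    randomised_block[block] = {t: members[i::len(treatments)]
                               for i, t in enumerate(treatments)}

print(completely_randomised)
print(randomised_block)
```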

Sometimes randomisation isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Between-subjects (independent measures) design vs within-subjects (repeated measures) design
Phone use and sleep. Between-subjects: subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. Within-subjects: subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised.
Temperature and soil respiration. Between-subjects: warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. Within-subjects: every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised.
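As a rough sketch of the counterbalancing idea described above, the following snippet assigns each subject one of the possible treatment orders; the subject IDs and treatment labels are hypothetical.

```python
import itertools
import random

treatments = ["none", "low", "high"]                  # levels of phone use
subjects = [f"subject_{i:02d}" for i in range(1, 7)]

# All possible treatment orders (3! = 6); spreading these orders across
# subjects counterbalances any effect of treatment order.
orders = list(itertools.permutations(treatments))
random.seed(3)
random.shuffle(orders)

schedule = dict(zip(subjects, orders))
for subject, order in schedule.items():
    print(subject, "->", " then ".join(order))
```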

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations.

For example, to operationalise the amount of sleep participants get, you could:
  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.
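As one hypothetical way of operationalising the sleep measurement above, the helper below converts self-reported bed and wake times into hours of sleep; the function name and the example times are assumptions for illustration only.

```python
from datetime import datetime

def hours_of_sleep(bed_time: str, wake_time: str) -> float:
    """Convert self-reported 'HH:MM' bed and wake times into hours slept."""
    fmt = "%H:%M"
    bed = datetime.strptime(bed_time, fmt)
    wake = datetime.strptime(wake_time, fmt)
    minutes = (wake - bed).seconds / 60   # .seconds wraps correctly past midnight
    return round(minutes / 60, 2)

print(hours_of_sleep("23:30", "07:15"))  # 7.75 hours
```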

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.


Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 5 August 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/


Experimental design: Guide, steps, examples

Last updated: 27 April 2023. Reviewed by Miroslav Damyanov.


Experimental research design is a scientific framework that allows you to manipulate one or more variables while controlling the test environment. 

When testing a theory or new product, it can be helpful to have a certain level of control and manipulate variables to discover different outcomes. You can use these experiments to determine cause and effect or study variable associations. 

This guide explores the types of experimental design, the steps in designing an experiment, and the advantages and limitations of experimental design. 


  • What is experimental research design?

You can determine the relationship between each of the variables by: 

Manipulating one or more independent variables (i.e., stimuli or treatments)

Measuring the resulting changes in one or more dependent variables (i.e., the outcomes observed in the test groups)

Because you can analyze the relationship between variables using measurable data, you can increase the accuracy of the results. 

What is a good experimental design?

A good experimental design requires: 

Significant planning to ensure control over the testing environment

Sound experimental treatments

Properly assigning subjects to treatment groups

Without proper planning, unexpected external variables can alter an experiment's outcome. 

To meet your research goals, your experimental design should include these characteristics:

Provide unbiased estimates of inputs and associated uncertainties

Enable the researcher to detect differences caused by independent variables

Include a plan for analysis and reporting of the results

Provide easily interpretable results with specific conclusions

What's the difference between experimental and quasi-experimental design?

The major difference between experimental and quasi-experimental design is the random assignment of subjects to groups. 

A true experiment relies on certain controls. Typically, the researcher designs the treatment and randomly assigns subjects to control and treatment groups. 

However, these conditions are unethical or impossible to achieve in some situations.

When it's unethical or impractical to assign participants randomly, that’s when a quasi-experimental design comes in. 

This design allows researchers to conduct a similar experiment by assigning subjects to groups based on non-random criteria. 

Another type of quasi-experimental design might occur when the researcher doesn't have control over the treatment but studies pre-existing groups after they receive different treatments.

When can a researcher conduct experimental research?

Various settings and professions can use experimental research to gather information and observe behavior in controlled settings. 

Basically, a researcher can conduct experimental research any time they want to test a theory and can manipulate an independent variable while controlling the other variables. 

Experimental research is an option when the project includes an independent variable and a desire to understand the relationship between cause and effect. 

  • The importance of experimental research design

Experimental research enables researchers to conduct studies that provide specific, definitive answers to questions and hypotheses. 

Researchers can test independent variables in controlled settings to:

Test the effectiveness of a new medication

Design better products for consumers

Answer questions about human health and behavior

Developing a quality research plan means a researcher can accurately answer vital research questions with minimal error. As a result, definitive conclusions can influence the future of the independent variable. 

Types of experimental research designs

There are three main types of experimental research design. The research type you use will depend on the criteria of your experiment, your research budget, and environmental limitations. 

Pre-experimental research design

A pre-experimental research study is a basic observational study that monitors independent variables’ effects. 

During research, you observe one or more groups after applying a treatment to test whether the treatment causes any change. 

The three subtypes of pre-experimental research design are:

One-shot case study research design

This research method introduces a single test group to a single stimulus to study the results at the end of the application. 

After researchers presume the stimulus or treatment has caused changes, they gather results to determine how it affects the test subjects. 

One-group pretest-posttest design

This method uses a single test group but includes a pretest study as a benchmark. The researcher applies a test before and after the group’s exposure to a specific stimulus. 

Static group comparison design

This method includes two or more groups, enabling the researcher to use one group as a control. They apply a stimulus to one group and leave the other group static. 

A posttest study compares the results among groups. 

True experimental research design

A true experiment is the most common research method. It involves statistical analysis to prove or disprove a specific hypothesis . 

Under completely experimental conditions, researchers expose participants in two or more randomized groups to different stimuli. 

Random assignment reduces the potential for bias, providing more reliable results. 

These are the three main sub-groups of true experimental research design:

Posttest-only control group design

This structure requires the researcher to divide participants into two random groups. One group receives no stimuli and acts as a control while the other group experiences stimuli.

Researchers perform a test at the end of the experiment to observe the stimuli exposure results.

Pretest-posttest control group design

This test also requires two groups. It includes a pretest as a benchmark before introducing the stimulus. 

The pretest introduces multiple ways to test subjects. For instance, if the control group also experiences a change, it reveals that taking the test twice changes the results.

Solomon four-group design

This structure divides subjects into four groups, two of which are control groups. Researchers assign the first control group a posttest only and the second control group a pretest and a posttest. 

The two variable groups mirror the control groups, but researchers expose them to stimuli. The ability to differentiate between groups in multiple ways provides researchers with more testing approaches for data-based conclusions. 
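A minimal sketch of the four-group layout described above follows; the group numbering and field names are illustrative, not a fixed convention.

```python
# Hypothetical layout of a Solomon four-group design: which groups receive
# the treatment (stimulus) and which receive a pretest.
solomon_groups = [
    {"group": 1, "pretest": True,  "treatment": True,  "posttest": True},
    {"group": 2, "pretest": True,  "treatment": False, "posttest": True},   # control
    {"group": 3, "pretest": False, "treatment": True,  "posttest": True},
    {"group": 4, "pretest": False, "treatment": False, "posttest": True},   # control
]

# Comparing pretested with non-pretested groups isolates any effect of the pretest itself.
for g in solomon_groups:
    print(g)
```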

Quasi-experimental research design

Although closely related to a true experiment, quasi-experimental research design differs in approach and scope. 

Quasi-experimental research design doesn’t have randomly selected participants. Researchers typically divide the groups in this research by pre-existing differences. 

Quasi-experimental research is more common in educational studies, nursing, or other research projects where it's not ethical or practical to use randomized subject groups.

  • 5 steps for designing an experiment

Experimental research requires a clearly defined plan to outline the research parameters and expected goals. 

Here are five key steps in designing a successful experiment:

Step 1: Define variables and their relationship

Your experiment should begin with a question: What are you hoping to learn through your experiment? 

The relationship between variables in your study will determine your answer.

Define the independent variable (the intended stimuli) and the dependent variable (the expected effect of the stimuli). After identifying these variables, consider how you might control them in your experiment. 

Could natural variations affect your research? If so, your experiment should include a pretest and posttest. 

Step 2: Develop a specific, testable hypothesis

With a firm understanding of the system you intend to study, you can write a specific, testable hypothesis. 

What is the expected outcome of your study? 

Develop a prediction about how the independent variable will affect the dependent variable. 

How will the stimuli in your experiment affect your test subjects? 

Your hypothesis should provide a prediction of the answer to your research question . 

Step 3: Design experimental treatments to manipulate your independent variable

Depending on your experiment, your variable may be a fixed stimulus (like a medical treatment) or a variable stimulus (like a period during which an activity occurs). 

Determine which type of stimulus meets your experiment’s needs and how widely or finely to vary your stimuli. 

Step 4: Assign subjects to groups

When you have a clear idea of how to carry out your experiment, you can determine how to assemble test groups for an accurate study. 

When choosing your study groups, consider: 

The size of your experiment

Whether you can select groups randomly

Your target audience for the outcome of the study

You should be able to create groups with an equal number of subjects and include subjects that match your target audience. Remember, you should assign one group as a control and use one or more groups to study the effects of variables. 

Step 5: Plan how to measure your dependent variable

This step determines how you'll collect data to determine the study's outcome. You should seek reliable and valid measurements that minimize research bias or error. 

You can measure some data with scientific tools, while you’ll need to operationalize other forms to turn them into measurable observations.

  • Advantages of experimental research

Experimental research is an integral part of our world. It allows researchers to conduct experiments that answer specific questions. 

While researchers use many methods to conduct different experiments, experimental research offers these distinct benefits:

Researchers can determine cause and effect by manipulating variables.

It gives researchers a high level of control.

Researchers can test multiple variables within a single experiment.

All industries and fields of knowledge can use it. 

Researchers can duplicate results to promote the validity of the study .

Researchers can replicate natural settings rapidly, allowing research to begin immediately.

Researchers can combine it with other research methods.

It provides specific conclusions about the validity of a product, theory, or idea.

  • Disadvantages (or limitations) of experimental research

Unfortunately, no research type yields ideal conditions or perfect results. 

While experimental research might be the right choice for some studies, certain conditions could render experiments useless or even dangerous. 

Before conducting experimental research, consider these disadvantages and limitations:

Required professional qualification

Only competent professionals with an academic degree and specific training are qualified to conduct rigorous experimental research. This ensures results are unbiased and valid. 

Limited scope

Experimental research may not capture the complexity of some phenomena, such as social interactions or cultural norms. These are difficult to control in a laboratory setting.

Resource-intensive

Experimental research can be expensive, time-consuming, and require significant resources, such as specialized equipment or trained personnel.

Limited generalizability

The controlled nature means the research findings may not fully apply to real-world situations or people outside the experimental setting.

Practical or ethical concerns

Some experiments may involve manipulating variables that could harm participants or violate ethical guidelines . 

Researchers must ensure their experiments do not cause harm or discomfort to participants. 

Sometimes, recruiting a sample of people to randomly assign may be difficult. 

  • Experimental research design example

Experiments across all industries and research realms provide scientists, developers, and other researchers with definitive answers. These experiments can solve problems, create inventions, and heal illnesses. 

Product design testing is an excellent example of experimental research. 

A company in the product development phase creates multiple prototypes for testing. With a randomized selection, researchers introduce each test group to a different prototype. 

When groups experience different product designs , the company can assess which option most appeals to potential customers. 
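A rough sketch of how such a prototype test could randomise participants to designs follows, assuming hypothetical participant IDs and three prototypes.

```python
import random

participants = [f"p{i:03d}" for i in range(1, 31)]          # hypothetical recruits
prototypes = ["prototype_A", "prototype_B", "prototype_C"]  # hypothetical designs

random.seed(2024)
random.shuffle(participants)

# Deal the shuffled participants round-robin so every prototype gets an equal group.
groups = {proto: participants[i::len(prototypes)] for i, proto in enumerate(prototypes)}
for proto, members in groups.items():
    print(proto, len(members), "participants")
```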

Experimental research design provides researchers with a controlled environment to conduct experiments that evaluate cause and effect. 

Using the five steps to develop a research plan ensures you anticipate and control extraneous variables while answering your crucial research questions.



Experimental Research


Experimental research is commonly used in sciences such as sociology, psychology, physics, chemistry, biology, and medicine.


It is a collection of research designs which use manipulation and controlled testing to understand causal processes. Generally, one or more variables are manipulated to determine their effect on a dependent variable.

The experimental method is a systematic and scientific approach to research in which the researcher manipulates one or more variables, and controls and measures any change in other variables.

Experimental Research is often used where:

  • There is time priority in a causal relationship ( cause precedes effect )
  • There is consistency in a causal relationship (a cause will always lead to the same effect)
  • The magnitude of the correlation is great.

(Reference: en.wikipedia.org)

The term 'experimental research' has a range of definitions. In the strict sense, experimental research is what we call a true experiment .

This is an experiment where the researcher manipulates one variable, and controls or randomizes the rest of the variables. It has a control group , the subjects have been randomly assigned between the groups, and the researcher only tests one effect at a time. It is also important to know what variable(s) you want to test and measure.

A very wide definition of experimental research, or a quasi-experiment , is research where the scientist actively influences something to observe the consequences. Most experiments tend to fall in between the strict and the wide definition.

A rule of thumb is that physical sciences, such as physics, chemistry and geology tend to define experiments more narrowly than social sciences, such as sociology and psychology, which conduct experiments closer to the wider definition.


Aims of Experimental Research

Experiments are conducted to be able to predict phenomena. Typically, an experiment is constructed to be able to explain some kind of causation . Experimental research is important to society - it helps us to improve our everyday lives.


Identifying the Research Problem

After deciding the topic of interest, the researcher tries to define the research problem . This helps the researcher focus on a narrower research area that can be studied appropriately. Defining the research problem helps you to formulate a research hypothesis , which is tested against the null hypothesis .

The research problem is often operationalized , to define how to measure the research problem. The results will depend on the exact measurements that the researcher chooses, and the problem may be operationalized differently in another study to test the main conclusions of the study.

An ad hoc analysis is a hypothesis invented after testing is done to try to explain contrary evidence. A poor ad hoc analysis may be seen as the researcher's inability to accept that his/her hypothesis is wrong, while a great ad hoc analysis may lead to more testing and possibly a significant discovery.

Constructing the Experiment

There are various aspects to remember when constructing an experiment. Planning ahead ensures that the experiment is carried out properly and that the results reflect the real world, in the best possible way.

Sampling Groups to Study

Sampling groups correctly is especially important when we have more than one condition in the experiment. One sample group often serves as a control group , whilst others are tested under the experimental conditions.

Deciding the sample groups can be done using many different sampling techniques. Population sampling may be chosen by a number of methods, such as randomization , "quasi-randomization" and pairing.

Reducing sampling errors is vital for getting valid results from experiments. Researchers often adjust the sample size to minimize chances of random errors .

Here are some common sampling techniques :

  • probability sampling
  • non-probability sampling
  • simple random sampling
  • convenience sampling
  • stratified sampling
  • systematic sampling
  • cluster sampling
  • sequential sampling
  • disproportional sampling
  • judgmental sampling
  • snowball sampling
  • quota sampling

Creating the Design

The research design is chosen based on a range of factors. Important factors when choosing the design are feasibility, time, cost, ethics, measurement problems and what you would like to test. The design of the experiment is critical for the validity of the results.

Typical Designs and Features in Experimental Design

  • Pretest-Posttest Design Checks whether the groups differ before the manipulation starts, as well as the effect of the manipulation. Note that pretests can themselves influence the effect.
  • Control Group Control groups are designed to measure research bias and measurement effects, such as the Hawthorne Effect or the Placebo Effect . A control group is a group not receiving the same manipulation as the experimental group. Experiments frequently have 2 conditions, but rarely more than 3 conditions at the same time.
  • Randomized Controlled Trials Random sampling, comparison between an experimental group and a control group, and strict control/randomization of all other variables.
  • Solomon Four-Group Design Uses two control groups and two experimental groups. Half the groups have a pretest and half do not. This tests both the effect itself and the effect of the pretest.
  • Between Subjects Design Grouping Participants to Different Conditions
  • Within Subject Design Participants Take Part in the Different Conditions - See also: Repeated Measures Design
  • Counterbalanced Measures Design Testing the effect of the order of treatments when no control group is available/ethical
  • Matched Subjects Design Matching Participants to Create Similar Experimental- and Control-Groups
  • Double-Blind Experiment Neither the researcher, nor the participants, know which is the control group. The results can be affected if the researcher or participants know this.
  • Bayesian Probability Using Bayesian probability to "interact" with participants is a more "advanced" experimental design. It can be used in settings where there are many variables which are hard to isolate. The researcher starts with a set of initial beliefs and adjusts them according to how participants respond (see the sketch below).
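To make that last point less abstract, here is a minimal, hypothetical sketch of Bayesian updating with a Beta-Binomial model; the prior and the response data are invented for illustration and do not come from the article.

```python
# Start with a prior belief about a success rate, then update it as
# participant responses arrive (Beta-Binomial updating).
prior_alpha, prior_beta = 1, 1           # uniform prior: no initial preference

responses = [1, 1, 0, 1, 0, 1, 1, 1]     # hypothetical outcomes (1 = success)

alpha = prior_alpha + sum(responses)
beta = prior_beta + len(responses) - sum(responses)

posterior_mean = alpha / (alpha + beta)
print(f"Posterior mean success rate: {posterior_mean:.2f}")  # updated belief
```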

Pilot Study

It may be wise to first conduct a pilot-study or two before you do the real experiment. This ensures that the experiment measures what it should, and that everything is set up right.

Minor errors, which could potentially destroy the experiment, are often found during this process. With a pilot study, you can get information about errors and problems, and improve the design, before putting a lot of effort into the real experiment.

If the experiments involve humans, a common strategy is to first have a pilot study with someone involved in the research, but not too closely, and then arrange a pilot with a person who resembles the subject(s) . Those two different pilots are likely to give the researcher good information about any problems in the experiment.

Conducting the Experiment

An experiment is typically carried out by manipulating a variable, called the independent variable , affecting the experimental group. The effect that the researcher is interested in, the dependent variable(s) , is measured.

Identifying and controlling non-experimental factors that the researcher does not want to influence the effects is crucial to drawing a valid conclusion. This is often done by controlling variables , if possible, or randomizing variables to minimize effects that can be traced back to third variables . Researchers only want to measure the effect of the independent variable(s) when conducting an experiment , allowing them to conclude that this was the reason for the effect.

Analysis and Conclusions

In quantitative research , the amount of data measured can be enormous. Data not prepared to be analyzed is called "raw data". The raw data is often summarized as something called "output data", which typically consists of one line per subject (or item). A cell of the output data is, for example, an average of an effect in many trials for a subject. The output data is used for statistical analysis, e.g. significance tests, to see if there really is an effect.
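The step from raw data to output data can be illustrated with a short pandas sketch; the subjects, conditions, and effect values below are hypothetical.

```python
import pandas as pd

# Hypothetical "raw data": one row per trial for each subject and condition.
raw = pd.DataFrame({
    "subject":   ["s01"] * 4 + ["s02"] * 4,
    "condition": ["heat", "heat", "cold", "cold"] * 2,
    "effect":    [0.82, 0.78, 0.65, 0.61, 0.90, 0.88, 0.70, 0.74],
})

# "Output data": one line per subject and condition, averaging across trials,
# ready for significance testing.
output = raw.groupby(["subject", "condition"], as_index=False)["effect"].mean()
print(output)
```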

The aim of an analysis is to draw a conclusion , together with other observations. The researcher might generalize the results to a wider phenomenon, if there is no indication of confounding variables "polluting" the results.

If the researcher suspects that the effect stems from a different variable than the independent variable, further investigation is needed to gauge the validity of the results. An experiment is often conducted because the scientist wants to know if the independent variable is having any effect upon the dependent variable. Variables correlating are not proof that there is causation .

Experiments are more often quantitative than qualitative in nature, although qualitative experiments do occur.

Examples of Experiments

This website contains many examples of experiments. Some are not true experiments , but involve some kind of manipulation to investigate a phenomenon. Others fulfill most or all criteria of true experiments.

Here are some examples of scientific experiments:

Social Psychology

  • Stanley Milgram Experiment - Will people obey orders, even if clearly dangerous?
  • Asch Experiment - Will people conform to group behavior?
  • Stanford Prison Experiment - How do people react to roles? Will you behave differently?
  • Good Samaritan Experiment - Would You Help a Stranger? - Explaining Helping Behavior
  • Law Of Segregation - The Mendel Pea Plant Experiment
  • Transforming Principle - Griffith's Experiment about Genetics
  • Ben Franklin Kite Experiment - Struck by Lightning
  • J J Thomson Cathode Ray Experiment

Oskar Blakstad (Jul 10, 2008). Experimental Research. Retrieved Aug 11, 2024 from Explorable.com: https://explorable.com/experimental-research

You Are Allowed To Copy The Text

The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0) .

This means you're free to copy, share and adapt any parts (or all) of the text in the article, as long as you give appropriate credit and provide a link/reference to this page.

That is it. You don't need our permission to copy the article; just include a link/reference back to this page. You can use it freely (with some kind of link), and we're also okay with people reprinting in publications like books, blogs, newsletters, course-material, papers, wikipedia and presentations (with clear attribution).


Experimental Research

  • First Online: 25 February 2021


  • C. George Thomas


Experiments are part of the scientific method that helps to decide the fate of two or more competing hypotheses or explanations of a phenomenon. The term ‘experiment’ arises from the Latin experiri , which means ‘to try’. The knowledge that accrues from experiments differs from other types of knowledge in that it is always shaped by observation or experience. In other words, experiments generate empirical knowledge. In fact, the emphasis on experimentation in the sixteenth and seventeenth centuries for establishing causal relationships for various phenomena happening in nature heralded the resurgence of modern science from its roots in ancient philosophy spearheaded by great Greek philosophers such as Aristotle.

The strongest arguments prove nothing so long as the conclusions are not verified by experience. Experimental science is the queen of sciences and the goal of all speculation . Roger Bacon (1214–1294)



Author information

Authors and affiliations.

Kerala Agricultural University, Thrissur, Kerala, India

C. George Thomas



Copyright information

© 2021 The Author(s)

About this chapter

Thomas, C.G. (2021). Experimental Research. In: Research Methodology and Scientific Writing . Springer, Cham. https://doi.org/10.1007/978-3-030-64865-7_5



  • Journal List
J Athl Train, 45(1), Jan-Feb 2010

Study/Experimental/Research Design: Much More Than Statistics

Kenneth L. Knight

Brigham Young University, Provo, UT

The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand.

Objective:

To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs.

Description:

The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style . At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary.

Advantages:

Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

Study, experimental, or research design is the backbone of good research. It directs the experiment by orchestrating data collection, defines the statistical analysis of the resultant data, and guides the interpretation of the results. When properly described in the written report of the experiment, it serves as a road map to readers, 1 helping them negotiate the “Methods” section, and, thus, it improves the clarity of communication between authors and readers.

A growing trend is to equate study design with only the statistical analysis of the data. The design statement typically is placed at the end of the “Methods” section as a subsection called “Experimental Design” or as part of a subsection called “Data Analysis.” This placement, however, equates experimental design and statistical analysis, minimizing the effect of experimental design on the planning and reporting of an experiment. This linkage is inappropriate, because some of the elements of the study design that should be described at the beginning of the “Methods” section are instead placed in the “Statistical Analysis” section or, worse, are absent from the manuscript entirely.

Have you ever interrupted your reading of the “Methods” to sketch out the variables in the margins of the paper as you attempt to understand how they all fit together? Or have you jumped back and forth from the early paragraphs of the “Methods” section to the “Statistics” section to try to understand which variables were collected and when? These efforts would be unnecessary if a road map at the beginning of the “Methods” section outlined how the independent variables were related, which dependent variables were measured, and when they were measured. When they were measured is especially important if the variables used in the statistical analysis were a subset of the measured variables or were computed from measured variables (such as change scores).

The purpose of this Communications article is to clarify the purpose and placement of study design elements in an experimental manuscript. Adopting these ideas may improve your science and surely will enhance the communication of that science. These ideas will make experimental manuscripts easier to read and understand and, therefore, will allow them to become part of readers' clinical decision making.

WHAT IS A STUDY (OR EXPERIMENTAL OR RESEARCH) DESIGN?

The terms study design, experimental design, and research design are often thought to be synonymous and are sometimes used interchangeably in a single paper. Avoid doing so. Use the term that is preferred by the style manual of the journal for which you are writing. Study design is the preferred term in the AMA Manual of Style , 2 so I will use it here.

A study design is the architecture of an experimental study 3 and a description of how the study was conducted, 4 including all elements of how the data were obtained. 5 The study design should be the first subsection of the “Methods” section in an experimental manuscript (see the Table ). “Statistical Design” or, preferably, “Statistical Analysis” or “Data Analysis” should be the last subsection of the “Methods” section.

Table. Elements of a “Methods” Section


The “Study Design” subsection describes how the variables and participants interacted. It begins with a general statement of how the study was conducted (eg, crossover trials, parallel, or observational study). 2 The second element, which usually begins with the second sentence, details the number of independent variables or factors, the levels of each variable, and their names. A shorthand way of doing so is with a statement such as “A 2 × 4 × 8 factorial guided data collection.” This tells us that there were 3 independent variables (factors), with 2 levels of the first factor, 4 levels of the second factor, and 8 levels of the third factor. Following is a sentence that names the levels of each factor: for example, “The independent variables were sex (male or female), training program (eg, walking, running, weight lifting, or plyometrics), and time (2, 4, 6, 8, 10, 15, 20, or 30 weeks).” Such an approach clearly outlines for readers how the various procedures fit into the overall structure and, therefore, enhances their understanding of how the data were collected. Thus, the design statement is a road map of the methods.
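A short sketch of what a "2 × 4 × 8 factorial" design statement implies for data collection, using the factor levels from the example above; the code itself is only an illustration, not part of the article.

```python
from itertools import product

# Factors and levels from the example design statement above.
sex = ["male", "female"]
training = ["walking", "running", "weight lifting", "plyometrics"]
weeks = [2, 4, 6, 8, 10, 15, 20, 30]

cells = list(product(sex, training, weeks))
print(len(cells))   # 2 * 4 * 8 = 64 cells in which data are collected
print(cells[:3])    # first few combinations
```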

The dependent (or measurement or outcome) variables are then named. Details of how they were measured are not given at this point in the manuscript but are explained later in the “Instruments” and “Procedures” subsections.

Next is a paragraph detailing who the participants were and how they were selected, placed into groups, and assigned to a particular treatment order, if the experiment was a repeated-measures design. And although not a part of the design per se, a statement about obtaining written informed consent from participants and institutional review board approval is usually included in this subsection.

The nuts and bolts of the “Methods” section follow, including such things as equipment, materials, protocols, etc. These are beyond the scope of this commentary, however, and so will not be discussed.

The last part of the “Methods” section and last part of the “Study Design” section is the “Data Analysis” subsection. It begins with an explanation of any data manipulation, such as how data were combined or how new variables (eg, ratios or differences between collected variables) were calculated. Next, readers are told of the statistical measures used to analyze the data, such as a mixed 2 × 4 × 8 analysis of variance (ANOVA) with 2 between-groups factors (sex and training program) and 1 within-groups factor (time of measurement). Researchers should state and reference the statistical package and procedure(s) within the package used to compute the statistics. (Various statistical packages perform analyses slightly differently, so it is important to know the package and specific procedure used.) This detail allows readers to judge the appropriateness of the statistical measures and the conclusions drawn from the data.

STATISTICAL DESIGN VERSUS STATISTICAL ANALYSIS

Avoid using the term statistical design . Statistical methods are only part of the overall design. The term gives too much emphasis to the statistics, which are important, but only one of many tools used in interpreting data and only part of the study design:

The most important issues in biostatistics are not expressed with statistical procedures. The issues are inherently scientific, rather than purely statistical, and relate to the architectural design of the research, not the numbers with which the data are cited and interpreted. 6

Stated another way, “The justification for the analysis lies not in the data collected but in the manner in which the data were collected.” 3 “Without the solid foundation of a good design, the edifice of statistical analysis is unsafe.” 7 (pp4–5)

The intertwining of study design and statistical analysis may have been caused (unintentionally) by R.A. Fisher, “… a genius who almost single-handedly created the foundations for modern statistical science.” 8 Most research did not involve statistics until Fisher invented the concepts and procedures of ANOVA (in 1921) 9 , 10 and experimental design (in 1935). 11 His books became standard references for scientists in many disciplines. As a result, many ANOVA books were titled Experimental Design (see, for example, Edwards 12 ), and ANOVA courses taught in psychology and education departments included the words experimental design in their course titles.

Before the widespread use of computers to analyze data, designs were much simpler, and often there was little difference between study design and statistical analysis. So combining the 2 elements did not cause serious problems. This is no longer true, however, for 3 reasons: (1) Research studies are becoming more complex, with multiple independent and dependent variables. The procedures sections of these complex studies can be difficult to understand if your only reference point is the statistical analysis and design. (2) Dependent variables are frequently measured at different times. (3) How the data were collected is often not directly correlated with the statistical design.

For example, assume the goal is to determine the strength gain in novice and experienced athletes as a result of 3 strength training programs. Rate of change in strength is not a measurable variable; rather, it is calculated from strength measurements taken at various time intervals during the training. So the study design would be a 2 × 2 × 3 factorial with independent variables of time (pretest or posttest), experience (novice or advanced), and training (isokinetic, isotonic, or isometric) and a dependent variable of strength. The statistical design , however, would be a 2 × 3 factorial with independent variables of experience (novice or advanced) and training (isokinetic, isotonic, or isometric) and a dependent variable of strength gain. Note that data were collected according to a 3-factor design but were analyzed according to a 2-factor design and that the dependent variables were different. So a single design statement, usually a statistical design statement, would not communicate which data were collected or how. Readers would be left to figure out on their own how the data were collected.
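A small sketch can make this distinction concrete. Assuming Python with pandas and statsmodels (an arbitrary choice, not the tooling of the original example), the code below records pre- and post-training strength (the 2-level time factor in the study design), derives the single gain score, and analyzes it with the 2 × 3 between-subjects statistical design. All data are synthetic.

```python
# Sketch of the strength-gain example: data are collected pre and post
# (part of the study design), but the statistical analysis uses a single
# derived variable, gain = post - pre, in a 2 x 3 between-subjects ANOVA.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
rows = []
for experience in ["novice", "advanced"]:
    for training in ["isokinetic", "isotonic", "isometric"]:
        pre = rng.normal(100, 10, 10)        # pretest strength
        post = pre + rng.normal(15, 5, 10)   # posttest strength
        for p, q in zip(pre, post):
            rows.append({"experience": experience, "training": training,
                         "pre": p, "post": q})
df = pd.DataFrame(rows)

# Derived dependent variable used in the statistical design
df["gain"] = df["post"] - df["pre"]

model = ols("gain ~ C(experience) * C(training)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```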

MULTIVARIATE RESEARCH AND THE NEED FOR STUDY DESIGNS

With the advent of electronic data gathering and computerized data handling and analysis, research projects have increased in complexity. Many projects involve multiple dependent variables measured at different times, and, therefore, multiple design statements may be needed for both data collection and statistical analysis. Consider, for example, a study of the effects of heat and cold on neural inhibition. The variables Hmax and Mmax are measured 3 times each: before, immediately after, and 30 minutes after a 20-minute treatment with heat or cold. Muscle temperature might be measured each minute before, during, and after the treatment. Although the minute-by-minute data are important for graphing temperature fluctuations during the procedure, only 3 temperatures (time 0, time 20, and time 50) are used for statistical analysis. A single dependent variable, the Hmax:Mmax ratio, is computed to illustrate neural inhibition. Again, a single statistical design statement would tell little about how the data were obtained. And in this example, separate design statements would be needed for the temperature measurements and the Hmax:Mmax measurements.
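The same point can be sketched in code. The fragment below (synthetic values, hypothetical column names, pandas assumed) computes the single Hmax:Mmax ratio from the collected measurements and keeps only the three temperature time points that enter the statistical analysis, even though temperature was recorded every minute.

```python
# Sketch: reducing collected data to the variables actually analyzed.
# All names and numbers are illustrative, not from the original study.
import pandas as pd

neural = pd.DataFrame({
    "subject":   ["s1", "s1", "s1", "s2", "s2", "s2"],
    "treatment": ["heat"] * 3 + ["cold"] * 3,
    "time_min":  [0, 20, 50, 0, 20, 50],
    "h_max":     [4.2, 3.1, 3.8, 4.0, 3.5, 3.9],
    "m_max":     [8.0, 7.9, 8.1, 7.8, 7.7, 7.9],
})
# Single dependent variable used for the statistical analysis
neural["h_to_m_ratio"] = neural["h_max"] / neural["m_max"]

# Temperature is recorded every minute, but only minutes 0, 20, and 50
# enter the statistical analysis
temperature = pd.DataFrame({
    "minute": range(51),
    "muscle_temp": [36.5 + 0.05 * m for m in range(51)],
})
temp_for_analysis = temperature[temperature["minute"].isin([0, 20, 50])]
print(neural[["subject", "treatment", "time_min", "h_to_m_ratio"]])
print(temp_for_analysis)
```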

As stated earlier, drawing conclusions from the data depends more on how the data were measured than on how they were analyzed. 3 , 6 , 7 , 13 So a single study design statement (or multiple such statements) at the beginning of the “Methods” section acts as a road map to the study and, thus, increases scientists' and readers' comprehension of how the experiment was conducted (ie, how the data were collected). Appropriate study design statements also increase the accuracy of conclusions drawn from the study.

CONCLUSIONS

The goal of scientific writing, or any writing, for that matter, is to communicate information. Including 2 design statements or subsections in scientific papers—one to explain how the data were collected and another to explain how they were statistically analyzed—will improve the clarity of communication and bring praise from readers. To summarize:

  • Purge from your thoughts and vocabulary the idea that experimental design and statistical design are synonymous.
  • Study or experimental design plays a much broader role than simply defining and directing the statistical analysis of an experiment.
  • A properly written study design serves as a road map to the “Methods” section of an experiment and, therefore, improves communication with the reader.
  • Study design should include a description of the type of design used, each factor (and each level) involved in the experiment, and the time at which each measurement was made.
  • Clarify when the variables involved in data collection and data analysis are different, such as when data analysis involves only a subset of a collected variable or a resultant variable from the mathematical manipulation of 2 or more collected variables.

Acknowledgments

Thanks to Thomas A. Cappaert, PhD, ATC, CSCS, CSE, for suggesting the link between R.A. Fisher and the melding of the concepts of research design and statistics.


How the Experimental Method Works in Psychology


The experimental method is a type of research procedure that involves manipulating variables to determine if there is a cause-and-effect relationship. The results obtained through the experimental method are useful but do not prove with 100% certainty that a singular cause always creates a specific effect. Instead, they show the probability that a cause will or will not lead to a particular effect.

At a Glance

While there are many different research techniques available, the experimental method allows researchers to look at cause-and-effect relationships. Using the experimental method, researchers randomly assign participants to a control or experimental group and manipulate levels of an independent variable. If changes in the independent variable lead to changes in the dependent variable, it indicates there is likely a causal relationship between them.

What Is the Experimental Method in Psychology?

The experimental method involves manipulating one variable to determine if this causes changes in another variable. This method relies on controlled research methods and random assignment of study subjects to test a hypothesis.

For example, researchers may want to learn how different visual patterns may impact our perception, or whether certain actions can improve memory. Experiments are conducted on a wide range of behavioral topics.

The scientific method forms the basis of the experimental method. This is a process used to determine the relationship between two variables—in this case, to explain human behavior .

Positivism is also important in the experimental method. It refers to factual knowledge that is obtained through observation, which is considered to be trustworthy.

When using the experimental method, researchers first identify and define key variables. Then they formulate a hypothesis, manipulate the variables, and collect data on the results. Unrelated or irrelevant variables are carefully controlled to minimize the potential impact on the experiment outcome.

History of the Experimental Method

The idea of using experiments to better understand human psychology began toward the end of the nineteenth century. Wilhelm Wundt established the first formal psychology laboratory in 1879.

Wundt is often called the father of experimental psychology. He believed that experiments could help explain how psychology works, and used this approach to study consciousness .

Wundt coined the term "physiological psychology." This is a hybrid of physiology and psychology, or how the body affects the brain.

Other early contributors to the development and evolution of experimental psychology as we know it today include:

  • Gustav Fechner (1801-1887), who helped develop procedures for measuring sensations according to the size of the stimulus
  • Hermann von Helmholtz (1821-1894), who analyzed philosophical assumptions through research in an attempt to arrive at scientific conclusions
  • Franz Brentano (1838-1917), who called for a combination of first-person and third-person research methods when studying psychology
  • Georg Elias Müller (1850-1934), who performed an early experiment on attitude which involved the sensory discrimination of weights and revealed how anticipation can affect this discrimination

Key Terms to Know

To understand how the experimental method works, it is important to know some key terms.

Dependent Variable

The dependent variable is the effect that the experimenter is measuring. If a researcher was investigating how sleep influences test scores, for example, the test scores would be the dependent variable.

Independent Variable

The independent variable is the variable that the experimenter manipulates. In the previous example, the amount of sleep an individual gets would be the independent variable.

A hypothesis is a tentative statement or a guess about the possible relationship between two or more variables. In looking at how sleep influences test scores, the researcher might hypothesize that people who get more sleep will perform better on a math test the following day. The purpose of the experiment, then, is to either support or reject this hypothesis.

Operational definitions are necessary when performing an experiment. When we say that something is an independent or dependent variable, we must have a very clear and specific definition of the meaning and scope of that variable.
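To show how operational definitions and the two kinds of variables come together, here is a minimal sketch using Python and scipy (assumed purely for illustration). Sleep is operationalized as assigned hours the night before a test, test performance as percent correct, and the hypothesis is checked with an independent-samples t-test on simulated data.

```python
# Minimal sketch of operationalizing the sleep example and testing the
# hypothesis. Variable definitions and data are illustrative assumptions,
# not results from a real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Operational definitions assumed for this sketch:
#   independent variable: assigned sleep (8 hours vs 4 hours the night before)
#   dependent variable:   math test score (percent correct)
scores_8h = rng.normal(loc=78, scale=8, size=30)
scores_4h = rng.normal(loc=72, scale=8, size=30)

t_stat, p_value = stats.ttest_ind(scores_8h, scores_4h)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would count as evidence that the amount of sleep
# (independent variable) affects test scores (dependent variable).
```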

Extraneous Variables

Extraneous variables are other variables that may also affect the outcome of an experiment. Types of extraneous variables include participant variables, situational variables, demand characteristics, and experimenter effects. In some cases, researchers can take steps to control for extraneous variables.

Demand Characteristics

Demand characteristics are subtle hints that indicate what an experimenter is hoping to find in a psychology experiment. This can sometimes cause participants to alter their behavior, which can affect the results of the experiment.

Intervening Variables

Intervening variables are factors that can affect the relationship between two other variables. 

Confounding Variables

Confounding variables are variables that can affect the dependent variable, but that experimenters cannot control for. Confounding variables can make it difficult to determine if the effect was due to changes in the independent variable or if the confounding variable may have played a role.

Psychologists, like other scientists, use the scientific method when conducting an experiment. The scientific method is a set of procedures and principles that guide how scientists develop research questions, collect data, and come to conclusions.

The five basic steps of the experimental process are:

  • Identifying a problem to study
  • Devising the research protocol
  • Conducting the experiment
  • Analyzing the data collected
  • Sharing the findings (usually in writing or via presentation)

Most psychology students are expected to use the experimental method at some point in their academic careers. Learning how to conduct an experiment is important to understanding how psychologists prove and disprove theories in this field.

There are a few different types of experiments that researchers might use when studying psychology. Each has pros and cons depending on the participants being studied, the hypothesis, and the resources available to conduct the research.

Lab Experiments

Lab experiments are common in psychology because they allow experimenters more control over the variables. These experiments can also be easier for other researchers to replicate. The drawback of this research type is that what takes place in a lab is not always what takes place in the real world.

Field Experiments

Sometimes researchers opt to conduct their experiments in the field. For example, a social psychologist interested in researching prosocial behavior might have a person pretend to faint and observe how long it takes onlookers to respond.

This type of experiment can be a great way to see behavioral responses in realistic settings. But it is more difficult for researchers to control the many variables existing in these settings that could potentially influence the experiment's results.

Quasi-Experiments

While lab experiments are known as true experiments, researchers can also utilize a quasi-experiment. Quasi-experiments are often referred to as natural experiments because the researchers do not have true control over the independent variable.

A researcher looking at personality differences and birth order, for example, is not able to manipulate the independent variable in the situation (birth order). Participants also cannot be randomly assigned because they naturally fall into pre-existing groups based on their birth order.

So why would a researcher use a quasi-experiment? This is a good choice in situations where scientists are interested in studying phenomena in natural, real-world settings. It's also beneficial if there are limits on research funds or time.

Field experiments can be either quasi-experiments or true experiments.

Examples of the Experimental Method in Use

The experimental method can provide insight into human thoughts and behaviors. Researchers use experiments to study many aspects of psychology.

A 2019 study investigated whether splitting attention between electronic devices and classroom lectures had an effect on college students' learning abilities. It found that dividing attention between these two mediums did not affect lecture comprehension. However, it did impact long-term retention of the lecture information, which affected students' exam performance.

An experiment used participants' eye movements and electroencephalogram (EEG) data to better understand cognitive processing differences between experts and novices. It found that experts had higher power in their theta brain waves than novices, suggesting that they also had a higher cognitive load.

A study looked at whether chatting online with a computer via a chatbot changed the positive effects of emotional disclosure often received when talking with an actual human. It found that the effects were the same in both cases.

One experimental study evaluated whether exercise timing impacts information recall. It found that engaging in exercise prior to performing a memory task helped improve participants' short-term memory abilities.

Sometimes researchers use the experimental method to get a bigger-picture view of psychological behaviors and impacts. For example, one 2018 study examined several lab experiments to learn more about the impact of various environmental factors on building occupant perceptions.

A 2020 study set out to determine the role that sensation-seeking plays in political violence. This research found that sensation-seeking individuals have a higher propensity for engaging in political violence. It also found that providing access to a more peaceful, yet still exciting political group helps reduce this effect.

While the experimental method can be a valuable tool for learning more about psychology and its impacts, it also comes with a few pitfalls.

Experiments may produce artificial results, which are difficult to apply to real-world situations. Similarly, researcher bias can impact the data collected. Results may not be reproducible, meaning they have low reliability.

Since humans are unpredictable and their behavior can be subjective, it can be hard to measure responses in an experiment. In addition, political pressure may alter the results. The subjects may not be a good representation of the population, or groups used may not be comparable.

And finally, since researchers are human too, results may be degraded due to human error.

What This Means For You

Every psychological research method has its pros and cons. The experimental method can help establish cause and effect, and it's also beneficial when research funds are limited or time is of the essence.

At the same time, it's essential to be aware of this method's pitfalls, such as how biases can affect the results or the potential for low reliability. Keeping these in mind can help you review and assess research studies more accurately, giving you a better idea of whether the results can be trusted or have limitations.

Colorado State University. Experimental and quasi-experimental research .

American Psychological Association. Experimental psychology studies human and animals .

Mayrhofer R, Kuhbandner C, Lindner C. The practice of experimental psychology: An inevitably postmodern endeavor . Front Psychol . 2021;11:612805. doi:10.3389/fpsyg.2020.612805

Mandler G. A History of Modern Experimental Psychology .

Stanford University. Wilhelm Maximilian Wundt . Stanford Encyclopedia of Philosophy.

Britannica. Gustav Fechner .

Britannica. Hermann von Helmholtz .

Meyer A, Hackert B, Weger U. Franz Brentano and the beginning of experimental psychology: implications for the study of psychological phenomena today . Psychol Res . 2018;82:245-254. doi:10.1007/s00426-016-0825-7

Britannica. Georg Elias Müller .

McCambridge J, de Bruin M, Witton J.  The effects of demand characteristics on research participant behaviours in non-laboratory settings: A systematic review .  PLoS ONE . 2012;7(6):e39116. doi:10.1371/journal.pone.0039116

Laboratory experiments . In: The Sage Encyclopedia of Communication Research Methods. Allen M, ed. SAGE Publications, Inc. doi:10.4135/9781483381411.n287

Schweizer M, Braun B, Milstone A. Research methods in healthcare epidemiology and antimicrobial stewardship — quasi-experimental designs . Infect Control Hosp Epidemiol . 2016;37(10):1135-1140. doi:10.1017/ice.2016.117

Glass A, Kang M. Dividing attention in the classroom reduces exam performance . Educ Psychol . 2019;39(3):395-408. doi:10.1080/01443410.2018.1489046

Keskin M, Ooms K, Dogru AO, De Maeyer P. Exploring the cognitive load of expert and novice map users using EEG and eye tracking . ISPRS Int J Geo-Inf . 2020;9(7):429. doi:10.3390/ijgi9070429

Ho A, Hancock J, Miner A. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot . J Commun . 2018;68(4):712-733. doi:10.1093/joc/jqy026

Haynes IV J, Frith E, Sng E, Loprinzi P. Experimental effects of acute exercise on episodic memory function: Considerations for the timing of exercise . Psychol Rep . 2018;122(5):1744-1754. doi:10.1177/0033294118786688

Torresin S, Pernigotto G, Cappelletti F, Gasparella A. Combined effects of environmental factors on human perception and objective performance: A review of experimental laboratory works . Indoor Air . 2018;28(4):525-538. doi:10.1111/ina.12457

Schumpe BM, Belanger JJ, Moyano M, Nisa CF. The role of sensation seeking in political violence: An extension of the significance quest theory . J Personal Social Psychol . 2020;118(4):743-761. doi:10.1037/pspp0000223

By Kendra Cherry, MSEd
Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Experimental Research: Meaning And Examples Of Experimental Research


Ever wondered why scientists across the world are being lauded for discovering the Covid-19 vaccine so early? It’s because every government knows that vaccines are a result of experimental research design and it takes years of collected data to make one. It takes a lot of time to compare formulas and combinations with an array of possibilities across different age groups, genders and physical conditions. With their efficiency and meticulousness, scientists redefined the meaning of experimental research when they discovered a vaccine in less than a year.

What Is Experimental Research?


Experimental research is a scientific method of conducting research using two types of variables: independent and dependent. The independent variable is manipulated, and its effect on the dependent variable is measured. This measurement usually happens over a significant period of time to establish conditions and conclusions about the relationship between these two variables.

Experimental research is widely implemented in education, psychology, social sciences and physical sciences. Experimental research is based on observation, calculation, comparison and logic. Researchers collect quantitative data and perform statistical analyses of two sets of variables. This method collects the data needed to focus on facts and support sound decisions. It's a helpful approach for establishing cause-and-effect relationships, especially when time is a factor or when the two variables appear to behave consistently together.

Now that we know the meaning of experimental research, let’s look at its characteristics, types and advantages.

The hypothesis is at the core of an experimental research design. Researchers propose a tentative answer after defining the problem and then test the hypothesis to either confirm or disregard it. Here are a few characteristics of experimental research:

  • Independent variables are manipulated and applied to the dependent variables as an experimental treatment, and the resulting effect on the dependent variables is measured. Extraneous variables are variables generated from other factors that can affect the experiment and contribute to change. Researchers have to exercise control to reduce the influence of these variables by randomization, making homogeneous groups and applying statistical analysis techniques.
  • Researchers deliberately apply the independent variables to the subject of the experiment. This is known as manipulation.
  • Once a variable is manipulated, researchers observe the effect an independent variable has on a dependent variable. This is key for interpreting results.
  • A researcher may want multiple comparisons between different groups with equivalent subjects. They may replicate the process by conducting sub-experiments within the framework of the experimental design.

Experimental research is as effective in non-laboratory settings as it is in labs. It helps in predicting events in an experimental setting. It generalizes variable relationships so that they can be implemented outside the experiment and applied to a wider interest group.

The way a researcher assigns subjects to different groups determines the types of experimental research design .

Pre-experimental Research Design

In a pre-experimental research design, researchers observe a group or various groups to see the effect an independent variable has on the dependent variable. There is no control group, as it is a simple form of experimental research. It's further divided into three categories:

  • A one-shot case study research design is a study where one dependent variable is considered. It’s a posttest study as it’s carried out after treating what presumably caused the change.
  • One-group pretest-posttest design is a study that combines both pretest and posttest studies by testing a single group before and after administering the treatment.
  • Static-group comparison involves studying two groups by subjecting one to treatment while the other remains static. After post-testing all groups the differences are observed.

This design is practical but lacks in certain areas of true experimental criteria.
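As a rough illustration of the one-group pretest-posttest design described above, the sketch below compares a single group's scores before and after treatment with a paired t-test (Python and scipy assumed; data are synthetic).

```python
# Sketch: analyzing a one-group pretest-posttest design with a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pretest = rng.normal(60, 10, size=25)            # scores before the treatment
posttest = pretest + rng.normal(5, 6, size=25)   # scores after the treatment

t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"mean change = {np.mean(posttest - pretest):.1f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Without a control group, even a significant change cannot rule out
# alternative explanations such as maturation, history, or testing effects.
```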

True Experimental Research Design

This design depends on statistical analysis to support or reject a hypothesis. It's an accurate design that can be conducted with or without a pretest on a minimum of two randomly assigned groups of subjects. It is further classified into three types:

  • The posttest-only control group design involves randomly selecting and assigning subjects to two groups: experimental and control. Only the experimental group is treated, while both groups are observed and post-tested to draw a conclusion from the difference between the groups.
  • In a pretest-posttest control group design, subjects are randomly assigned to two groups. Both groups are pretested, the experimental group is treated, and both groups are post-tested to measure how much change happened in each group.
  • Solomon four-group design is a combination of the previous two methods. Subjects are randomly selected and assigned to four groups. Two groups are tested using each of the previous methods.

True experimental research design should have a variable to manipulate, a control group and random distribution.
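One common way to analyze the pretest-posttest control group design is to compare gain scores between the randomly assigned groups (an ANCOVA on posttest scores with the pretest as a covariate is another standard option). The sketch below assumes Python with scipy and uses made-up data.

```python
# Sketch: comparing gain scores (post - pre) between a treatment group and
# a control group in a pretest-posttest control group design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 30
pre_treat = rng.normal(50, 8, n)
post_treat = pre_treat + rng.normal(10, 5, n)    # treated group improves
pre_ctrl = rng.normal(50, 8, n)
post_ctrl = pre_ctrl + rng.normal(2, 5, n)       # control group changes little

gain_treat = post_treat - pre_treat
gain_ctrl = post_ctrl - pre_ctrl
t_stat, p_value = stats.ttest_ind(gain_treat, gain_ctrl)
print(f"treatment gain = {gain_treat.mean():.1f}, "
      f"control gain = {gain_ctrl.mean():.1f}, p = {p_value:.4f}")
```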

With experimental research, we can test ideas in a controlled environment before marketing. It acts as the best method to test a theory as it can help in making predictions about a subject and drawing conclusions. Let’s look at some of the advantages that make experimental research useful:

  • It gives researchers strong control over variables and helps them collect the desired results.
  • Results are usually specific.
  • The effectiveness of the research isn’t affected by the subject.
  • Findings from the results usually apply to similar situations and ideas.
  • Cause and effect of a hypothesis can be identified, which can be further analyzed for in-depth ideas.
  • It’s the ideal starting point to collect data and lay a foundation for conducting further research and building more ideas.
  • Medical researchers can develop medicines and vaccines to treat diseases by collecting samples from patients and testing them under multiple conditions.
  • It can be used to improve the standard of academics across institutions by testing student knowledge and teaching methods before analyzing the result to implement programs.
  • Social scientists often use experimental research design to study and test behavior in humans and animals.
  • Software development and testing heavily depend on experimental research to test programs by letting subjects use a beta version and analyzing their feedback.

Even though it’s a scientific method, it has a few drawbacks. Here are a few disadvantages of this research method:

  • Human error is a concern because the method depends on controlling variables. Improper implementation nullifies the validity of the research and conclusion.
  • Eliminating real-life extraneous variables can make the conclusions less applicable to real-world scenarios.
  • The process is time-consuming and expensive.
  • In medical research, it can have ethical implications by affecting patients’ well-being.
  • Results are not descriptive and subjects can contribute to response bias.

Experimental research design is a sophisticated method that investigates relationships or occurrences among people or phenomena under a controlled environment and identifies the conditions responsible for such relationships or occurrences.

Experimental research can be used in any industry to anticipate responses, changes, causes and effects. Here are some examples of experimental research :

  • This research method can be used to evaluate employees’ skills. Organizations ask candidates to take tests before filling a post. It is used to screen qualified candidates from a pool of applicants. This allows organizations to identify skills at the time of employment. After training employees on the job, organizations further evaluate them to test impact and improvement. This is a pretest-posttest control group research example where employees are ‘subjects’ and the training is ‘treatment’.
  • Educational institutions follow the pre-experimental research design to administer exams and evaluate students at the end of a semester. Student performance is the dependent variable and the lectures are the independent variable. Since exams are conducted at the end and not the beginning of a semester, it's easy to conclude that this is a one-shot case study.
  • To evaluate the teaching methods of two teachers, they can be assigned two student groups. After teaching their respective groups on the same topic, a posttest can determine which group scored better and who is better at teaching. This method can have its drawbacks, as certain human factors, such as students' attitudes and ability to grasp a subject, may influence the results.

Experimental research is considered a standard method that uses observations, simulations and surveys to collect data. One of its unique features is the ability to control extraneous variables and their effects. It’s a suitable method for those looking to examine the relationship between cause and effect in a field setting or in a laboratory. Although experimental research design is a scientific approach, research is not entirely a scientific process. As much as managers need to know what is experimental research , they have to apply the correct research method, depending on the aim of the study.

Harappa’s Thinking Critically program makes you more decisive and lets you think like a leader. It’s a growth-driven course for managers who want to devise and implement sound strategies, freshers looking to build a career and entrepreneurs who want to grow their business. Identify and avoid arguments, communicate decisions and rely on effective decision-making processes in uncertain times. This course teaches critical and clear thinking. It’s packed with problem-solving tools, highly impactful concepts and relatable content. Build an analytical mindset, develop your skills and reap the benefits of critical thinking with Harappa!

Explore Harappa Diaries to learn more about topics such as Main Objective Of Research , Definition Of Qualitative Research , Examples Of Experiential Learning and Collaborative Learning Strategies to upgrade your knowledge and skills.



Neag School of Education

Educational Research Basics by Del Siegle

Experimental research.

The major feature that distinguishes experimental research from other types of research is that the researcher manipulates the independent variable.  There are a number of experimental group designs in experimental research. Some of these qualify as experimental research, others do not.

  • In true experimental research , the researcher not only manipulates the independent variable, he or she also randomly assigned individuals to the various treatment categories (i.e., control and treatment).
  • In quasi experimental research, the researcher does not randomly assign subjects to treatment and control groups. In other words, the treatment is not distributed among participants randomly. In some cases, a researcher may randomly assign one whole group to treatment and one whole group to control. In this case, quasi-experimental research involves using intact groups in an experiment, rather than assigning individuals at random to research conditions. (Some researchers define this latter situation differently. For our course, we will allow this definition.)
  • In causal comparative (ex post facto) research, the groups are already formed. It does not meet the standards of an experiment because the independent variable is not manipulated.

The statistics by themselves have no meaning. They only take on meaning within the design of your study. If we just examine stats, bread can be deadly . The term validity is used three ways in research…

  • In the sampling unit, we learn about external validity (generalizability).
  • In the survey unit, we learn about instrument validity.
  • In this unit, we learn about internal validity and external validity. Internal validity means that the differences that were found between groups on the dependent variable in an experiment were directly related to what the researcher did to the independent variable, and not due to some other unintended variable (confounding variable). Simply stated, the question addressed by internal validity is "Was the study done well?" Once the researcher is satisfied that the study was done well and the independent variable caused the dependent variable (internal validity), then the researcher examines external validity (under what conditions [ecological] and with whom [population] can these results be replicated [Will I get the same results with a different group of people or under different circumstances?]). If a study is not internally valid, then considering external validity is a moot point (if the independent variable did not cause the dependent variable, then there is no point in applying [generalizing] the results to other situations). Interestingly, as one tightens a study to control for threats to internal validity, one decreases the generalizability of the study (to whom and under what conditions one can generalize the results).

There are several common threats to internal validity in experimental research. They are described in our text. I have reviewed each below (this material is also included in the PowerPoint Presentation on Experimental Research for this unit):

  • Subject Characteristics (Selection Bias/Differential Selection) — The groups may have been different from the start. If you were testing instructional strategies to improve reading and one group enjoyed reading more than the other group, they may improve more in their reading because they enjoy it, rather than because of the instructional strategy you used.
  • Loss of Subjects (Mortality) — All of the high or low scoring subjects may have dropped out or were missing from one of the groups. If we collected posttest data on a day when the honor society was on a field trip at the treatment school, the mean for the treatment group would probably be much lower than it really should have been.
  • Location — Perhaps one group was at a disadvantage because of their location. The city may have been demolishing a building next to one of the schools in our study, and the constant distractions interfere with our treatment.
  • Instrumentation (Instrument Decay) — The testing instruments may not be scored similarly. Perhaps the person grading the posttest is fatigued and pays less attention to the last set of papers reviewed. It may be that those papers are from one of our groups and will receive different scores than the earlier group's papers.
  • Data Collector Characteristics — The subjects of one group may react differently to the data collector than the other group. A male interviewing males and females about their attitudes toward a type of math instruction may not receive the same responses from females as a female interviewing females would.
  • Data Collector Bias — The person collecting data may favor one group, or some characteristic some subjects possess, over another. A principal who favors strict classroom management may rate students' attention under different teaching conditions with a bias toward one of the teaching conditions.
  • Testing — The act of taking a pretest or posttest may influence the results of the experiment. Suppose we were conducting a unit to increase student sensitivity to prejudice. As a pretest we have the control and treatment groups watch Schindler's List and write a reaction essay. The pretest may have actually increased both groups' sensitivity, and we find that our treatment group didn't score any higher on a posttest given later than the control group did. If we hadn't given the pretest, we might have seen differences in the groups at the end of the study.
  • History — Something may happen at one site during our study that influences the results. Perhaps a classmate dies in a car accident at the control site for a study teaching children bike safety. The control group may actually demonstrate more concern about bike safety than the treatment group.
  • Maturation — There may be natural changes in the subjects that can account for the changes found in a study. A critical thinking unit may appear more effective if it is taught during a time when children are developing abstract reasoning.
  • Hawthorne Effect — The subjects may respond differently just because they are being studied. The name comes from a classic study in which researchers were studying the effect of lighting on worker productivity. As the intensity of the factory lights increased, so did worker productivity. One researcher suggested that they reverse the treatment and lower the lights. The productivity of the workers continued to increase. It appears that being observed by the researchers was increasing productivity, not the intensity of the lights.
  • John Henry Effect — One group may view itself as being in competition with the other group and may work harder than it would under normal circumstances. This generally is applied to the control group "taking on" the treatment group. The term refers to the classic story of John Henry laying railroad track.
  • Resentful Demoralization of the Control Group — The control group may become discouraged because it is not receiving the special attention that is given to the treatment group. They may perform lower than usual because of this.
  • Regression (Statistical Regression) — A class that scores particularly low can be expected to score slightly higher just by chance. Likewise, a class that scores particularly high will have a tendency to score slightly lower by chance. The change in these scores may have nothing to do with the treatment.
  • Implementation — The treatment may not be implemented as intended. A study where teachers are asked to use student modeling techniques may not show positive results, not because modeling techniques don't work, but because the teacher didn't implement them or didn't implement them as they were designed.
  • Compensatory Equalization of Treatment — Someone may feel sorry for the control group because it is not receiving much attention and give it special treatment. For example, a researcher could be studying the effect of laptop computers on students' attitudes toward math. The teacher feels sorry for the class that doesn't have computers and sponsors a popcorn party during math class. The control group begins to develop a more positive attitude about mathematics.
  • Experimental Treatment Diffusion — Sometimes the control group actually implements the treatment. If two different techniques are being tested in two different third grades in the same building, the teachers may share what they are doing. Unconsciously, the control teacher may use some of the techniques she or he learned from the treatment teacher.

When planning a study, it is important to consider the threats to internal validity as we finalize the study design. After we complete our study, we should reconsider each of the threats to internal validity as we review our data and draw conclusions.

Del Siegle, Ph.D. Neag School of Education – University of Connecticut [email protected] www.delsiegle.com


What is experimental research: Definition, types & examples

Defne Çobanoğlu

Life and its secrets can only be proven right or wrong with experimentation. You can speculate and theorize all you wish, but as William Blake once said, “ The true method of knowledge is experiment. ”

It may be a long process and time-consuming, but it is rewarding like no other. And there are multiple ways and methods of experimentation that can help shed light on matters. In this article, we explained the definition, types of experimental research, and some experimental research examples . Let us get started with the definition!

  • What is experimental research?

Experimental research is the process of carrying out a study conducted with a scientific approach using two or more variables. In other words, it is when you gather two or more variables and compare and test them in controlled environments. 

With experimental research, researchers can also collect detailed information about the participants by doing pre-tests and post-tests to learn even more information about the process. With the result of this type of study, the researcher can make conscious decisions. 

The more control the researcher has over the internal and extraneous variables, the better it is for the results. There may be different circumstances when a balanced experiment is not possible to conduct. That is why there are different research designs to accommodate the needs of researchers.

  • 3 Types of experimental research designs

There is more than one dividing point in experimental research designs that differentiates them from one another. These differences are about whether or not there are pre-tests or post-tests done and how the participants are divided into groups. These differences decide which experimental research design is used.


1 - Pre-experimental design

This is the most basic method of experimental study. In pre-experimental research, the researcher observes one or more groups after changing the independent variable to see its effect on the dependent variable. The results of this design alone are usually not conclusive, and future studies are planned accordingly. Pre-experimental research can be divided into three types:

A. One shot case study research design

Only one dependent variable is considered in the one-shot case study design. The study is conducted post-test only, after the treatment that presumably caused the change has been applied, and the aim is to observe its effect on the dependent variable.

B. One group pre-test post-test research design

In this type of research, a single group is given a pre-test before a study is conducted and a post-test after the study is conducted. The aim of this one-group pre-test post-test research design is to combine and compare the data collected during these tests. 

C. Static-group comparison

In a static group comparison, 2 or more groups are included in a study where only a group of participants is subjected to a new treatment and the other group of participants is held static. After the study is done, both groups do a post-test evaluation, and the changes are seen as results.

2 - Quasi-experimental design

This research type is quite similar to the true experimental design; however, it differs in a few aspects. Quasi-experimental research is done when experimentation is needed for accurate data, but a true experiment is not possible because of certain limitations. Because you cannot deliberately deprive someone of medical treatment or deliberately harm someone, some experiments are ethically impossible. In this experimentation method, the researcher can only manipulate some of the variables. There are three types of quasi-experimental design:

A. Nonequivalent group designs

A nonequivalent group design is used when participants cannot be divided equally and randomly for ethical reasons. Because of this, the groups may differ in more ways than just the treatment, unlike in true experimental research.

B. Regression discontinuity

In this type of research design, the researcher does not divide a group into two to conduct the study; instead, they make use of a natural threshold or pre-existing dividing point. Only participants on one side of the threshold get the treatment, and because participants just above and just below the cutoff are very similar, differences in their outcomes can be attributed to the treatment.
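A minimal sketch of the regression discontinuity idea, with made-up data and Python/numpy assumed: a pre-existing cutoff decides who is treated, and the jump in the outcome at the cutoff estimates the treatment effect. A real analysis would use local regression and robustness checks rather than two simple lines.

```python
# Sketch: regression discontinuity with a cutoff at a score of 70.
import numpy as np

rng = np.random.default_rng(11)
score = rng.uniform(40, 100, 400)        # assignment variable
treated = score >= 70                    # only those above the cutoff are treated
outcome = 0.3 * score + 8 * treated + rng.normal(0, 3, 400)

# Fit a line on each side of the cutoff and compare them at the cutoff
left = np.polyfit(score[~treated], outcome[~treated], 1)
right = np.polyfit(score[treated], outcome[treated], 1)
effect = np.polyval(right, 70) - np.polyval(left, 70)
print(f"estimated treatment effect at the cutoff: {effect:.1f}")
```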

C. Natural Experiments

In natural experiments, naturally occurring events or circumstances, rather than the researcher, assign participants to the control and study groups, and the study takes place in a natural setting. For this reason, natural experiments do not qualify as true experiments, as they are based on observation.

3 - True experimental design

In true experimental research, the variables, groups and settings should match the textbook definition. Participants are divided into groups randomly, and controlled variables are chosen carefully. Every aspect of a true experiment should be carefully designed and carried out, and only the results of a true experiment can be considered fully accurate. A true experimental design can be divided into 3 parts:

A. Post-test only control group design

In this experimental design, the participants are divided into two groups randomly. They are called experimental and control groups. Only the experimental group gets the treatment, while the other one does not. After the experiment and observation, both groups are given a post-test, and a conclusion is drawn from the results.

B. Pre-test post-test control group

In this method, the participants are once again divided into two groups, and only the experimental group gets the treatment. This time, both groups are given pre-tests and post-tests. Thanks to these multiple tests, the researchers can make sure the changes in the experimental group are directly related to the treatment.

C. Solomon four-group design

This is the most comprehensive method of experimentation. The participants are randomly divided into 4 groups that cover all combinations of treated and control groups with and without a pre-test: a pre-test/post-test treatment group, a pre-test/post-test control group, a post-test-only treatment group and a post-test-only control group. This method enhances the quality of the data.
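One way to analyze the Solomon four-group layout (a sketch, not the only correct approach) is a 2 (treatment vs. control) × 2 (pretested vs. not) factorial on the posttest scores: a treatment-by-pretest interaction would suggest that taking the pretest itself changed how participants responded. Python with pandas and statsmodels is assumed; the data are synthetic.

```python
# Sketch: Solomon four-group design analyzed as a 2 x 2 factorial on posttests.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
groups = [
    ("treatment", "pretested"), ("control", "pretested"),
    ("treatment", "no_pretest"), ("control", "no_pretest"),
]
rows = []
for condition, pretest in groups:
    base = 55 + (10 if condition == "treatment" else 0)   # treatment effect
    for score in rng.normal(base, 8, 25):
        rows.append({"condition": condition, "pretest": pretest, "post": score})
df = pd.DataFrame(rows)

model = ols("post ~ C(condition) * C(pretest)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```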

  • Advantages and disadvantages of experimental research

Just as with any other study, experimental research also has its positive and negative sides. It is up to the researchers to be mindful of these facts before starting their studies. Let us see some advantages and disadvantages of experimental research:

Advantages of experimental research:

  • All the variables are in the researchers’ control, and that means the researcher can influence the experiment according to the research question’s requirements.
  • As you can easily control the variables in the experiment, you can specify the results as much as possible.
  • The results of the study identify a cause-and-effect relation .
  • The results can be as specific as the researcher wants.
  • The result of an experimental design opens the doors for future related studies.

Disadvantages of experimental research:

  • Completing an experiment may take years and even decades, so the results will not be as immediate as some of the other research types.
  • As it involves many steps, participants, and researchers, it may be too expensive for some groups.
  • The possibility of researchers making mistakes or introducing bias is high, so it is important to stay impartial.
  • Human behavior and responses can be difficult to measure unless it is specifically experimental research in psychology.
  • Examples of experimental research

When one does experimental research, that experiment can be about anything. As the variables and environments can be controlled by the researcher, it is possible to have experiments about pretty much any subject. It is especially crucial that it gives critical insight into the cause-and-effect relationships of various elements. Now let us see some important examples of experimental research:

An example of experimental research in science:

When scientists develop new medicines or come up with a new type of treatment, they have to test them thoroughly to make sure the results will be consistent and effective for every individual. In order to make sure of this, they can test the medicine on different people or animals in different dosages and at different frequencies. They can then cross-check all the results and reach clear conclusions.

An example of experimental research in marketing:

The ideal goal of a marketing product, advertisement, or campaign is to attract attention and create positive emotions in the target audience. Marketers can focus on different elements in different campaigns, change the packaging/outline, and have a different approach. Only then can they be sure about the effectiveness of their approaches. Some methods they can work with are A/B testing, online surveys , or focus groups .
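For the A/B testing case, a simple sketch (made-up counts, statsmodels assumed) compares the click-through rates of two ad versions with a two-proportion z-test.

```python
# Sketch of a marketing A/B test: compare click-through rates of two
# ad versions. The counts below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

clicks = [180, 150]          # conversions for version A and version B
impressions = [2000, 2000]   # visitors shown each version

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
print(f"A: {clicks[0] / impressions[0]:.1%}, B: {clicks[1] / impressions[1]:.1%}, "
      f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference between the two versions is
# unlikely to be due to chance alone.
```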

  • Frequently asked questions about experimental research

Is experimental research qualitative or quantitative?

Experimental research can be both qualitative and quantitative according to the nature of the study. Experimental research is quantitative when it provides numerical and provable data. The experiment is qualitative when it provides researchers with participants' experiences, attitudes, or the context in which the experiment is conducted.

What is the difference between quasi-experimental research and experimental research?

In true experimental research, the participants are divided into groups randomly and evenly so as to have an equal distinction. However, in quasi-experimental research, the participants can not be divided equally for ethical or practical reasons. They are chosen non-randomly or by using a pre-existing threshold.

  • Wrapping it up

The experimentation process can be long and time-consuming, but it is highly rewarding, as it provides valuable qualitative and quantitative data. It is a valuable part of research methods and gives insight into the subject so that people can make conscious decisions.

In this article, we have gathered experimental research definition, experimental research types, examples, and pros & cons to work as a guide for your next study. You can also make a successful experiment using pre-test and post-test methods and analyze the findings. For further information on different research types and for all your research information, do not forget to visit our other articles!

Defne is a content writer at forms.app. She is also a translator specializing in literary translation. Defne loves reading, writing, and translating professionally and as a hobby. Her expertise lies in survey research, research methodologies, content writing, and translation.


A Complete Guide to Experimental Research

Published by Carmen Troy at August 14th, 2021 , Revised On August 25, 2023

A Quick Guide to Experimental Research

Experimental research refers to experiments conducted in a laboratory or to observations made under controlled conditions. Researchers try to find out the cause-and-effect relationship between two or more variables.

The subjects/participants in the experiment are selected and observed. They receive treatments such as changes in room temperature, diet or atmosphere, or are given a new drug, so that the resulting changes can be observed. Experiments can range from personal and informal natural comparisons to formal, controlled studies. An experiment includes three types of variables:

  • Independent variable
  • Dependent variable
  • Controlled variable

Before conducting experimental research, you need to have a clear understanding of the experimental design. A true experimental design includes identifying a problem, formulating a hypothesis, determining the number of variables, selecting and assigning the participants, types of research designs, meeting ethical values, etc.

There are many  types of research  methods that can be classified based on:

  • The nature of the problem to be studied
  • Number of participants (individual or groups)
  • Number of groups involved (Single group or multiple groups)
  • Types of data collection methods (Qualitative/Quantitative/Mixed methods)
  • Number of variables (single independent variable/ factorial two independent variables)
  • The experimental design

Types of Experimental Research

Laboratory Experiment  

It is also called experimental research. This type of research is conducted in the laboratory. A researcher can manipulate and control the variables of the experiment.

Example: Milgram’s experiment on obedience.

Pros:
  • The researcher has control over variables.
  • Easy to establish the relationship between cause and effect.
  • Inexpensive and convenient.
  • Easy to replicate.

Cons:
  • The artificial environment may impact the behaviour of the participants.
  • Inaccurate results.
  • The short duration of the lab experiment may not be enough to get the desired results.

Field Experiment

Field experiments are conducted in the participants' natural environment, with a few artificial changes incorporated. Researchers do not have control over the variables under measurement. Participants know that they are taking part in the experiment.

Pros:
  • Participants are observed in the natural environment.
  • Participants are more likely to behave naturally.
  • Useful to study complex social issues.

Cons:
  • It doesn't allow control over the variables.
  • It may raise ethical issues.
  • Lack of internal validity.

Natural Experiments

The experiment is conducted in the natural environment of the participants. The participants are generally not informed about the experiment being conducted on them.

Examples: Estimating the health condition of the population. Did the increase in tobacco prices decrease the sale of tobacco? Did the usage of helmets decrease the number of head injuries of the bikers?

Pros:
  • The source of variation is clear.
  • It's carried out in a natural setting.
  • There is no restriction on the number of participants.

Cons:
  • The results obtained may be questionable.
  • It does not establish external validity.
  • The researcher does not have control over the variables.

Quasi-Experiments

A quasi-experiment is an experiment that takes advantage of natural occurrences. Researchers cannot randomly assign participants to groups.

Example: comparing the academic performance of two schools.

Pros:

  • Quasi-experiments are widely conducted because they are convenient and practical for large sample sizes.
  • They suit real-world, natural settings better than a true experimental design does.
  • A researcher can analyse the effect of independent variables occurring in natural conditions.

Cons:

  • The influence of the independent variables on the dependent variables cannot be established with certainty.
  • Due to the absence of a control group, it is difficult to establish the relationship between the dependent and independent variables.


How to Conduct Experimental Research?

Step 1. Identify and Define the Problem

You need to identify a problem as per your field of study and describe your  research question .

Example: You want to know about the effects of social media on the behaviour of youngsters. You would need to find out how much time they spend on social media daily.

Example: You want to find out the adverse effects of junk food on human health. You would need to find out how frequent consumption of junk food affects an individual’s health.

Step 2. Determine the Number of Levels of Variables

You need to determine the number of variables. The independent variable is the predictor and is manipulated by the researcher, while the dependent variable is the outcome of that manipulation.

Example 1: social media and behaviour

  • Independent variable: the number of hours youngsters spend on social media daily.
  • Dependent variable: the negative impact of social media overuse on youngsters’ behaviour.
  • Controlling confounding variables: measure the difference between youngsters’ behaviour at minimum and maximum levels of social media usage; you can control and minimise the participants’ daily hours of social media use.

Example 2: junk food and health

  • Independent variable: the overconsumption of junk food.
  • Dependent variable: adverse effects of junk food on human health, such as obesity, indigestion, constipation, and high cholesterol.
  • Controlling confounding variables: identify the difference between the health of people on a healthy diet and people eating junk food regularly; you can divide the participants into two groups, one with a healthy diet and one with junk food.

In the first example, we predict that increased social media usage is associated with more negative behaviour among youngsters.

In the second example, we predict a positive relationship between a balanced diet and good health, and a negative relationship between junk food consumption and health.

Step 3. Formulate the Hypothesis

One of the essential aspects of experimental research is formulating a hypothesis. A researcher studies the cause-and-effect relationship between the independent and dependent variables while eliminating confounding variables. The null hypothesis, denoted H0, states that there is no significant relationship between the independent and dependent variables; the researcher aims to disprove it. The alternative hypothesis, denoted H1 or HA, is the statement the researcher seeks to support. A minimal sketch of how such a hypothesis can be tested on collected data follows the examples below.

Null hypothesis: The usage of social media does not correlate with the negative behaviour of youngsters.
Alternative hypothesis: Over-usage of social media adversely affects the behaviour of youngsters.

Null hypothesis: There is no relationship between the consumption of junk food and people’s health issues.
Alternative hypothesis: The over-consumption of junk food leads to multiple health issues.
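To make this concrete, here is a minimal, hypothetical sketch of testing the junk-food null hypothesis with a two-sample t-test in Python. The data values are invented for illustration; with a real experiment you would substitute the health scores measured for your two groups.

```python
from scipy import stats

# Hypothetical composite health scores for two groups (illustrative only).
healthy_diet_group = [78, 82, 75, 90, 85, 80, 77, 88]
junk_food_group = [70, 65, 72, 68, 74, 66, 71, 69]

# Two-sample t-test:
# H0: the mean health score is the same in both groups.
# H1 (HA): the mean health scores differ between the groups.
t_stat, p_value = stats.ttest_ind(healthy_diet_group, junk_food_group)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the groups differ significantly.")
else:
    print("Fail to reject H0: no significant difference detected.")
```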


Step 4. Selection and Assignment of the Subjects

This is an essential feature that differentiates experimental design from other research designs. You need to select the number of participants based on the requirements of your experiment and then assign them to treatment groups. There should also be a control group that receives no treatment, so that its outcomes can be compared with those of the experimental group.

Randomisation: Participants are selected randomly and assigned to the experimental and control groups. Random selection is known as probability sampling; if the selection is not random, it is considered non-probability sampling.

Stratified sampling: A type of random selection in which participants are divided into strata and then randomly selected from each stratum.

Randomisation:

  • Participants are randomly selected and assigned a specific number of hours to spend on social media.
  • Participants are randomly selected and assigned a balanced diet.

Stratified sampling:

  • Participants are divided into groups according to their age and then assigned a specific number of hours to spend on social media.
  • Participants are divided into groups based on their age, gender, and health conditions, and members of each group are then assigned to a treatment group.

Matching: Another procedure for assigning participants is matching. Participants in the control group are selected to match the participants in the experimental group on all characteristics that could affect the dependent variable.
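As a minimal, hypothetical illustration of these assignment procedures (not part of the original guide), the sketch below performs simple randomisation and stratified assignment in Python; the participant records and age bands are invented.

```python
import random
from collections import defaultdict

random.seed(42)  # fixed seed so this illustration is reproducible

# Hypothetical participants, each tagged with an age band used as a stratum.
participants = [
    {"id": i, "age_band": band}
    for i, band in enumerate(["13-15", "16-18", "19-21"] * 10)
]

# Simple randomisation: shuffle everyone, then split into two equal groups.
shuffled = participants[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
treatment, control = shuffled[:half], shuffled[half:]

# Stratified assignment: randomise within each age band so both groups
# end up with the same mix of ages.
strata = defaultdict(list)
for person in participants:
    strata[person["age_band"]].append(person)

strat_treatment, strat_control = [], []
for band, members in strata.items():
    random.shuffle(members)
    cut = len(members) // 2
    strat_treatment.extend(members[:cut])
    strat_control.extend(members[cut:])

print(len(treatment), len(control), len(strat_treatment), len(strat_control))
```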

What is Replicability?

Replicability means that when a researcher repeats an experiment using the same methodology and comparable subject groups, the results should be similar each time. Researchers often replicate their own work to strengthen external validity.

Step 5. Select a Research Design

You need to select a  research design  according to the requirements of your experiment. There are many types of experimental designs as follows.

Two-group post-test only: Includes a control group and an experimental group selected randomly or through matching. This design is used when the sample of subjects is large; it is often carried out outside the laboratory, and the groups’ dependent variables are compared after the treatment.

Two-group pre-test post-test: Includes two randomly selected groups and involves pre-test and post-test measurements in both. It is conducted in a controlled environment (see the sketch following this overview).

Solomon four-group design: Combines the post-test-only and pre-test-post-test control group designs, giving good internal and external validity.

Factorial design: Studies the effects of two or more factors, each with several possible values or levels. Example: a factorial design applied to an optimisation technique.

Randomised block design: One of the most widely used experimental designs in forestry research. It aims to decrease experimental error by using blocks to exclude known sources of variation among the experimental units.

Crossover design: Subjects receive different treatments during different periods.

Repeated measures design: The same group of participants is measured on one dependent variable at several points in time, or on several dependent variables. Each individual receives every experimental treatment. It requires a minimum number of participants, uses counterbalancing (randomising and reversing the order of subjects and treatments), and increases the time interval between treatments or measurements.
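As a minimal, hypothetical illustration of analysing a two-group pre-test post-test design (the scores and significance threshold below are invented, not taken from the guide), one common approach is to compare the pre-to-post change scores of the treatment and control groups:

```python
from scipy import stats

# Hypothetical pre- and post-test scores for each group (illustrative only).
treatment_pre  = [52, 48, 55, 60, 47, 51]
treatment_post = [68, 61, 70, 74, 59, 66]
control_pre    = [50, 53, 49, 58, 46, 52]
control_post   = [53, 55, 50, 60, 48, 54]

# Change score (post minus pre) for each participant.
treatment_change = [post - pre for pre, post in zip(treatment_pre, treatment_post)]
control_change   = [post - pre for pre, post in zip(control_pre, control_post)]

# Compare the change scores of the two groups with an independent t-test.
t_stat, p_value = stats.ttest_ind(treatment_change, control_change)
print(f"mean change (treatment): {sum(treatment_change) / len(treatment_change):.1f}")
print(f"mean change (control):   {sum(control_change) / len(control_change):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```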

Step 6. Meet Ethical and Legal Requirements

  • Participants of the research should not be harmed.
  • The dignity of the participants and the confidentiality of the research should be maintained.
  • The consent of the participants should be taken before experimenting.
  • The privacy of the participants should be ensured.
  • Research data should remain confidential.
  • The anonymity of the participants should be ensured.
  • The rules and objectives of the experiments should be followed strictly.
  • Reporting wrong information or fabricated data should be avoided.

Tips for Meeting the Ethical Considerations

To meet the ethical considerations, you need to ensure that:

  • Participants have the right to withdraw from the experiment.
  • They should be aware of the required information about the experiment.
  • You should avoid offensive or unacceptable language when framing the questions for interviews, questionnaires, or focus groups.
  • You should ensure the privacy and anonymity of the participants.
  • You should acknowledge the sources and authors in your dissertation using any referencing styles such as APA/MLA/Harvard referencing style.

Step 7. Collect and Analyse Data.

Collect the data using data collection methods that suit your experiment’s requirements, such as observations, case studies, surveys, interviews, or questionnaires, and then analyse the information obtained.

Step 8. Present and Conclude the Findings of the Study.

Write the report of your research. Present, conclude, and explain the outcomes of your study.

Frequently Asked Questions

What is the first step in conducting experimental research?

The first step in conducting experimental research is to define your research question or hypothesis. Clearly outline the purpose and expectations of your experiment to guide the entire research process.


Experimental Method In Psychology

By Saul McLeod, PhD, and Olivia Guy-Evans, MSc (Simply Psychology)

The experimental method involves the manipulation of variables to establish cause-and-effect relationships. The key features are controlled methods and the random allocation of participants into control and experimental groups.

What is an Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An independent variable (the cause) is manipulated in an experiment, and the dependent variable (the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective: the researcher’s views and opinions should not affect a study’s results, which makes the data more valid and less biased.

There are three types of experiments you need to know:

1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram’s experiment on obedience and  Loftus and Palmer’s car crash study .

  • Strength : It is easier to replicate (i.e., copy) a laboratory experiment. This is because a standardized procedure is used.
  • Strength : They allow for precise control of extraneous and independent variables. This allows a cause-and-effect relationship to be established.
  • Limitation : The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e., low ecological validity. This means it would not be possible to generalize the findings to a real-life setting.
  • Limitation : Demand characteristics or experimenter effects may bias the results and become confounding variables .

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the experimenter has less control over the extraneous variables .

Field experiments are often used to study social phenomena, such as altruism, obedience, and persuasion. They are also used to test the effectiveness of interventions in real-world settings, such as educational programs and public health campaigns.

An example is Hofling’s hospital study on obedience.

  • Strength : Behavior in a field experiment is more likely to reflect real life because of its natural setting, i.e., higher ecological validity than a lab experiment.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied. This occurs when the study is covert.
  • Limitation : There is less control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes the effects of a naturally occurring event or situation on the dependent variable without manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the participants, but here the experimenter has no control over the independent variable as it occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult or unethical to study in a laboratory setting, such as the effects of natural disasters, policy changes, or social movements.

For example, Hodges and Tizard’s attachment research (1989) compared the long-term development of children who have been adopted, fostered, or returned to their mothers with a control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent variable is academic achievement. The researchers would not be able to manipulate the independent variable, but they could observe its effects on the dependent variable.

  • Strength : Behavior in a natural experiment is more likely to reflect real life because of its natural setting, i.e., very high ecological validity.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied.
  • Strength : It can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g., researching stress .
  • Limitation : They may be more expensive and time-consuming than lab experiments.
  • Limitation : There is no control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. EVs should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of participating in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to repeating the same or a similar test more than once; counterbalancing (sketched after the examples below) is a common way to reduce them. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.
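Here is a minimal, hypothetical Python sketch of counterbalancing (not from the original article): participants are cycled through every possible order of three conditions so that no single order dominates. The condition labels and participant IDs are invented.

```python
from itertools import permutations

conditions = ["A", "B", "C"]                 # hypothetical condition labels
orders = list(permutations(conditions))      # all 6 possible orders of 3 conditions

participants = [f"P{i:02d}" for i in range(1, 13)]  # 12 hypothetical participants

# Cycle through the orders so that each order is used equally often.
schedule = {
    participant: orders[i % len(orders)]
    for i, participant in enumerate(participants)
}

for participant, order in schedule.items():
    print(participant, "->", ", ".join(order))
```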


The Impact of Unconditional Cash Transfers on Consumption and Household Balance Sheets: Experimental Evidence from Two US States

We provide new evidence on the causal effect of unearned income on consumption, balance sheets, and financial outcomes by exploiting an experiment that randomly assigned 1000 individuals to receive $1000 per month and 2000 individuals to receive $50 per month for three years. The transfer increased measured household expenditures by at least $300 per month. The spending impact is positive in most categories, and is largest for housing, food, and car expenses. The treatment increases housing unit and neighborhood mobility. We find noisily estimated modest positive effects on asset values, driven by financial assets, but these gains are offset by higher debt, resulting in a near-zero effect on net worth. The transfer increased self-reported financial health and credit scores but did not affect credit limits, delinquencies, utilization, bankruptcies, or foreclosures. Adjusting for underreporting, we estimate marginal propensities to consume non-durables between 0.44 and 0.55, durables and semi-durables between 0.21 and 0.26, and marginal propensities to de-lever of near zero. These results suggest that large temporary transfers increase short-term consumption and improve financial health but may not cause persistent improvements in the financial position of young, low-income households.
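As a small worked illustration of the marginal propensity to consume (MPC) figures quoted in the abstract (MPC is the share of each additional dollar of income that is spent), the sketch below applies hypothetical mid-range values from the reported intervals to the $1,000-per-month transfer; it is not a reproduction of the paper's estimation.

```python
transfer = 1000.0  # monthly treatment transfer in dollars (from the study design)

# Hypothetical mid-range marginal propensities to consume, chosen within the
# intervals quoted in the abstract (illustrative only).
mpc = {
    "non-durables": 0.50,                # abstract range: 0.44 to 0.55
    "durables and semi-durables": 0.24,  # abstract range: 0.21 to 0.26
}

for category, propensity in mpc.items():
    print(f"{category}: roughly ${transfer * propensity:.0f} of each $1,000 transfer spent")
```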

Many people contributed to the success of this project. The program we study and the associated research were supported by generous private funding sources, and we thank the non-profit organizations that implemented the program. We thank Jill Adona, Isaac Ahuvia, Oscar Alonso, Francisco Brady, Jack Bunge, Jake Cosgrove, Leo Dai, Kevin Didi, Rashad Dixon, Marc-Andrea Fiorina, Joshua Lin, Sabrina Liu, Anthony McCanny, Janna Mangasep, Oliver Scott Pankratz, Alok Ranjan, Mark Rick, Ethan Sansom, Sophia Scaglioni, and Angela Wang-Lin for outstanding research assistance. Tess Cotter, Karina Dotson, Aristia Kinis, Sam Manning, Alex Nawar, and Elizabeth Proehl were invaluable contributors through their work at OpenResearch. The management and staff of the Inclusive Economy Lab at the University of Chicago, including Carmelo Barbaro, Janelle Blackwood, Katie Buitrago, Melinda Croes, Crystal Godina, Kelly Hallberg, Kirsten Jacobson, Timi Koyejo, Misuzu Schexnider, Stephen Stapleton, and many others have provided important support throughout all stages of the project. We received valuable feedback on the study from the OpenResearch Advisory Board and seminar participants at the University of California-Berkeley and the University of Illinois at Urbana-Champaign. This study was approved by the Advarra Institutional Review Board (IRB) and is pre-registered at the American Economic Association RCT registry with a registration ID of AEARCTR-0006750. This research was supported in part by a J-PAL grant titled "The Impact of Unconditional Cash Transfers on Consumption: Evidence from the United States." The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.



Experimental Research on the Low-Cycle Fatigue Crack Growth Rate for a Stiffened Plate of EH36 Steel for Use in Ship Structures


Article outline:

1. Introduction
2. Low-Cycle Fatigue Crack Growth Experiment for Stiffened Plate
3. Results and Discussion
3.1. Experimental Results of Stiffened Plates with a Single-Edge Crack
3.2. Experimental Results of Stiffened Plates with a Central Crack
4. Conclusions
Author Contributions; Institutional Review Board Statement; Informed Consent Statement; Data Availability Statement; Conflicts of Interest



Material properties of EH36 steel:

Elastic Modulus/GPa | Poisson’s Ratio | Yield Stress/MPa | Ultimate Tensile Strength/MPa
206 | 0.3 | 434.94 | 548.91

Specimen loading parameters:

Specimen Number | Pmax/kN | R = Pmin/Pmax | Nominal Stress/MPa | Crack Location | Stiffener Height
P1 | 84.24 | −1 | 120 | single-edge crack | 30 mm
P2 | 90.72 | −1 | 130 | single-edge crack | 30 mm
P3 | 97.20 | −1 | 140 | single-edge crack | 30 mm
P4 | 384.00 | 0.031 | 280 | central crack | 30 mm
P5 | 420.00 | 0.2 | 300 | central crack | 30 mm
P6 | 420.00 | 0.2 | 300 | central crack | 0 mm
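As a small worked illustration of the loading parameters in the specimen table (the stress ratio defines the minimum load as Pmin = R x Pmax), the sketch below computes the implied minimum load and load range for each specimen; the values are taken from the table above.

```python
# Loading parameters from the specimen table: (Pmax in kN, R = Pmin/Pmax).
specimens = {
    "P1": (84.24, -1.0),
    "P2": (90.72, -1.0),
    "P3": (97.20, -1.0),
    "P4": (384.00, 0.031),
    "P5": (420.00, 0.2),
    "P6": (420.00, 0.2),
}

for name, (p_max, ratio) in specimens.items():
    p_min = ratio * p_max        # minimum load implied by the stress ratio
    p_range = p_max - p_min      # load range applied in each cycle
    print(f"{name}: Pmin = {p_min:8.2f} kN, load range = {p_range:7.2f} kN")
```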

Source: Dong, Q.; Xu, G.; Chen, W. Experimental Research on the Low-Cycle Fatigue Crack Growth Rate for a Stiffened Plate of EH36 Steel for Use in Ship Structures. J. Mar. Sci. Eng. 2024, 12, 1365. https://doi.org/10.3390/jmse12081365


Office of Science Policy

Artificial Intelligence

NIH promotes the safe and responsible use of AI in biomedical research through programs that support the development and use of algorithms and models for research, contribute to AI-ready datasets that accelerate discovery, and encourage multi-disciplinary partnerships that drive transparency, privacy, and equity.

Artificial Intelligence in Research: Policy Considerations and Guidance

Advancements in artificial intelligence (AI) are spurring tremendous progress in medical research to enhance human health and longevity. To that end, NIH has a robust system of policies and practices that guide stakeholders across the biomedical and behavioral research ecosystem. While AI may not be explicitly mentioned, NIH’s policy framework is designed to responsibly guide and govern advancing science and emerging technologies, including development and use of AI technologies in research.

The policies, best practices, and regulations listed below reflect this framework and should be considered before, during, and after development and use of AI in research. This is not an exhaustive list of all policies and requirements that may apply to any NIH-supported research project but can serve as a guide for the research community.

Please note: Unauthorized data disclosures violate several of the policies listed below. Investigators should be cognizant that research data used as input or training for AI could be unintentionally disclosed if the data is sent to an AI provider external to NIH.

Research Participant Protections

The following establish expectations and best practices for protecting the welfare, privacy, and autonomy of research participants. The ethical considerations embedded in these policies, regulations, and best practices (e.g., privacy) address key issues relevant to the development and use of AI in research. In adhering to them, investigators can mitigate potential harms and inequities arising from the use and development of AI.

Protection of Human Subjects (45 CFR 46) : Outlines basic provisions for Institutional Review Boards, informed consent, and assurance of compliance for NIH-supported research involving human participants and their data, including considerations of risks & benefits.

For clinical investigations that are also regulated by the Food and Drug Administration, see:

21 CFR 50 Protection of Human Subjects 21 CFR 56 Institutional Review Boards

Certificates of Confidentiality : Prohibits the disclosure of identifiable, sensitive research information to anyone not connected to the research except when the participant consents or in a few other specific situations.

NIH Information about Protecting Privacy When Sharing Human Research Participant Data : Provides a set of principles and best practices for protecting the privacy of human research participants when sharing data in NIH-supported research. (Issued under the NIH Data Management and Sharing policy.)

NIH Informed Consent for Secondary Research with Data and Biospecimens : Provides points to consider, instructions for use, and optional sample language that is designed for informed consent documents for research studies that include plans to store and share collected data and biospecimens for future use.

Data Management and Sharing

The following seek to maximize the responsible management and sharing of scientific data while ensuring that researchers consider how the privacy, rights, and confidentiality of human research participants will be protected. Increasing the availability of data through data sharing allows for more accurate development and use of AI models. These policies help ensure that investigators remain good stewards of data used in or produced by AI models.

NIH Data Management & Sharing (DMS) Policy : Establishes the requirement to submit a DMS Plan and comply with NIH-approved plans. In addition, NIH Institutes, Centers, and Offices can request additional or specific information be included within the plan to support programmatic priorities or to expand the utility of the scientific data generated from the research. Also see DMS Policy Frequently Asked Questions .

Responsible Management and Sharing of American Indian/Alaska Native (AI/AN) Participant Data : Describes considerations and best practices for the responsible and respectful management and sharing of AI/AN participant data under the DMS Policy.

NIH Genomic Data Sharing Policy : Promotes and facilitates responsible sharing of large-scale genomic data generated with NIH funds. Also see Genomic Data Sharing Frequently Asked Questions .

Health Information Privacy

Health Insurance Portability and Accountability Act (HIPAA) helps protect the privacy and security of health data used in research, including research involving AI, thereby fostering trust in healthcare research activities.

HIPAA Privacy Rule : Establishes the conditions under which protected health information may be used or disclosed by covered entities for research purposes.

Licensing, Intellectual Property, & Technology Transfer 

The following establish guidance, expectations, and best practices related to intellectual property and software sharing. They complement NIH’s data sharing initiatives, delineate investigator rights under the SBIR and STTR programs, and provide USPTO guidance on AI-related inventions. While many are not specific to AI, the policies and programs below are relevant to investigators who have developed software and source code under NIH research grants or who intend to commercialize their NIH-supported research products, including those related to development and use of AI.

NIH Best Practices for Sharing Research Software : Best practices for sharing research software and source code in a free and open format.

NIH Small Business Innovation Research (SBIR) & Small Business Technology Transfer (STTR) : Unique policies and approaches may apply in the context of NIH’s Small Business Innovation Research (SBIR) & Small Business Technology Transfer (STTR) program. For example, recipients may retain the rights to data generated during the performance of an SBIR or STTR award.

NIH Research Tools Policy : NIH expects funding recipients to appropriately disseminate and allow open access to research tools developed with NIH funding.

US Patent and Trademark Office information about AI : Provides AI-related patent resources and important information concerning AI IP policy.

Peer Review

The following clarifies NIH’s stance on the use of generative AI tools during peer review.

NOT-OD-23-149: Informs the extramural community that the NIH prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative AI technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals. Also see Open Mike blog on Using AI in Peer Review Is a Breach of Confidentiality .

Biosecurity and Biosafety

The following establish and are part of a comprehensive biosecurity and biosafety oversight system. Research funded by NIH, including research using the tools and technologies enabled or informed by AI, fall under this oversight framework. While some of these policies do not explicitly address AI, they are still applicable to development and use of AI in research involving biological agents, toxins, or nucleic acid molecules if such research involves physical experiments that are covered under these policies.

United States Government Policy for Oversight of Life Sciences Dual Use Research of Concern : Describes practices and procedures to ensure that dual use research of concern (DURC) is identified at the institutional level and risk mitigation measures are implemented as necessary for U.S. Government-funded research. DURC is “life sciences research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, materiel, or national security.” The United States Government Policy for Institutional Oversight of Life Sciences Dual Use Research of Concern complements the aforementioned policy and addresses institutional oversight of DURC, which includes policies, practices, and procedures to ensure DURC is identified and risk mitigation measures are implemented, where applicable.

HHS Framework for Guiding Funding Decisions about Proposed Research Involving Enhanced Potential Pandemic Pathogens (HHS P3CO Framework): Guides Department of Health and Human Services funding decisions on individual proposed research that is reasonably anticipated to create, transfer, or use enhanced potential pandemic pathogens (ePPP). ePPP research is research that “may be reasonably anticipated to create, transfer or use potential pandemic pathogens resulting from the enhancement of a pathogen’s transmissibility and/or virulence in humans.” The HHS P3CO Framework is responsive to and in accordance with the  Recommended Policy Guidance for Departmental Development of Review Mechanisms for Potential Pandemic Pathogen Care and Oversight issued in 2017 by the White House Office of Science and Technology Policy.

United States Government Policy for Oversight of Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential : On May 6, 2024, the White House Office of Science and Technology Policy released this new policy along with associated Implementation Guidance . This will supersede the DURC and P3CO policy frameworks on May 6, 2025. It provides a unified federal oversight framework for conducting and managing certain types of federally funded life sciences research on biological agents and toxins that have the potential to pose risks to public health, agriculture, food security, economic security, or national security. The policy “encourages institutional oversight of in silico research, regardless of funding source, that could result in the development of potential dual-use computational models directly enabling the design of a [pathogen with enhanced pandemic potential] or a novel biological agent or toxin.”

NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules : Establish safety practices and containment procedures for institutions that receive NIH funding for “basic and clinical research involving recombinant or synthetic nucleic acid molecules, including the creation and use of organisms and viruses containing recombinant or synthetic nucleic acid molecules.”

  • Use of Generative AI in Peer Review FAQs (NIH Office of Extramural Research)
  • NIH Office of Data Science Strategy
  • US Department of Health and Human Services Artificial Intelligence Use Cases Inventory
  • Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence
  • PCAST Report to the President – Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges
  • NIH STRIDES Initiative | NIH STRIDES

For regulatory questions related to AI, see:

  • Artificial Intelligence and Machine Learning in Software as a Medical Device | FDA
  • Artificial Intelligence and Machine Learning (AI/ML) for Drug Development | FDA
  • Artificial Intelligence Program: Research on AI/ML-Based Medical Devices | FDA
  • Digital Health Center of Excellence | FDA


Where Rubber Meets the Road: EPA Researchers Study the Environmental and Health Impacts of Tires

Published August 7, 2024

To some people, tire pollution might bring to mind an image of a blown-out or discarded tire on the side of a highway, or stockpiled old tires behind a garage. However, the issue of tire pollution is more complex and pervasive than it appears at first glance, as every step of a tire’s life cycle, from production to use to disposal, can impact our environment, health, and wildlife.

A graphic following the lifecycle of a tire, which includes the following steps: production, use and emissions, reuse and disposal, fate and transport, risks, and mitigation.

To address growing concerns of tire pollution and a specific pollutant called 6PPD-quinone (6PPD-Q) , EPA researcher Dr. Paul Mayer led an effort to investigate the life cycle of tires and their impacts on the environment. The resulting article, “ Where the rubber meets the road: Emerging environmental impacts of tire wear particles and their chemical cocktails ,” is a holistic examination and data compilation of tires as complex pollutants across three levels: their whole state (e.g., tire production or disposal in landfills), as particulates (i.e., as they are worn down), and as “chemical cocktails.”

The research team illustrated that the production of over 3 billion tires annually requires massive amounts of natural resources, including fossil fuels, water, and agricultural space to grow natural rubber, which has been linked to deforestation. The manufacturing process involves chemical mixtures that emit carcinogens (cancer-causing substances) and radioactive compounds. Over 800 million tires are disposed of annually and burned for fuel or broken down and recycled into products such as artificial turf infill, asphalt, landscape mulch and doormats. These processes may introduce hazards such as contact exposure to chemicals and heavy metals, inhalation, ingestion, and other risks associated with tire crumb. Further, tire piles can catch fire and burn for long periods of time, emitting harmful pollutants such as fine particulate matter (PM2.5) .

The researchers found that one tire will shed between two and fourteen pounds of rubber particles due to road wear (from initial use to initial disposal). These particles may be small enough to be picked up by wind and carried for up to a month before they are deposited on land. Larger particles can be caught in stormwater runoff and transported along curbs and through stormwater systems where they are typically deposited into a local waterway. Constituents of these particles, pollutants such as microplastics , heavy metals, hydrocarbons, and other toxic chemicals can then pollute local water and soil.

The researchers also conducted a life cycle analysis of rubber tires, following one product unit from creation to disposal, identifying information gaps in tire related research along the way. The rate and volume of tire wear particle release may differ between tire brands and types. The size, shape, and surface properties of tire particles can impact the methods of their emission and transport. Further research is also needed to characterize the toxicity of tire pollutants and their health effects, including determining alternative chemicals for use in the manufacturing process and conducting longer term studies on populations of sensitive species. More accurate data on tire particle and chemical emissions based on climate, population density, and transportation infrastructure is needed to support the development of effective methods of tire pollution reduction, remediation, and risk management. These information gaps and many others identified by the research team show that tire wear particles and chemicals present a strong risk to human health and the environment, and action should be taken to research and mitigate this issue.


Several research teams across the EPA are working on addressing information gaps specifically related to the pollutant 6PPD-Q. 6PPD-Q is the product of a reaction between 6PPD, a chemical added in the tire manufacturing process, and ozone in the air. EPA-funded research in 2020 showed 6PPD-Q in stormwater to be highly toxic to several salmonid fish species and lethal to the threatened and endangered populations of coho salmon. This species is a culturally, economically, and ecologically important resource for many Tribal nations along the Pacific Northwest coast and its connected waterways. Healthy and accessible salmon populations are critical to the health and wellbeing of Tribes, including the practice and protection of Tribal Treaty Rights.

EPA ecologist Dr. Jonathan Halama is using the advanced EPA model Visualizing Ecosystem Land Management Assessments (VELMA) to learn more about the fate and transport of 6PPD-Q from tire particles in stormwater. Through the analysis of current stormwater management systems and estimated roadway deposition patterns based on traffic count data, Halama and his team are working to understand the processes influencing tire particle flow paths and to determine hotspots where 6PPD-Q is concentrated within a watershed. Using VELMA to find these 6PPD-Q hotspots can help researchers prioritize the locations and types of stormwater management designs to reduce 6PPD-Q levels most effectively.

In 2023, the EPA developed a draft analytical method to identify 6PPD-Q in surface waters and stormwater. In addition to tire life cycle analysis and stormwater management modeling, there are multiple research efforts within the EPA and in collaboration with external partners that focus on 6PPD-Q. EPA researchers are developing measurement methods for 6PPD-Q in air and sediment, tools to screen the toxicity of environmental samples, and health hazard screening values. To further protect coho salmon and other sensitive aquatic species, researchers are also investigating brake and tire emission rates of particulates, 6PPD, and metals, health effects of tire wear particles and 6PPD-Q on aquatic life, and potential alternative chemicals to 6PPD in tires.

Dr. Mayer presented on tires as complex pollutants at EPA’s Water Research Webinar on June 26, 2024. You can watch a recording of the session here.

Learn more about the Science

  • Where the rubber meets the road: Emerging environmental impacts of tire wear particles and their chemical cocktails
  • Watershed analysis of urban stormwater contaminant 6ppd-q hotspots and stream concentrations using a process-based ecohydrological model
  • A ubiquitous tire rubber-derived chemical induces acute mortality in coho salmon
  • Tire Pollution and 6PPD-Quinone Publications
  • Stormwater Management Research
  • Green Infrastructure

Open access | Published: 25 July 2024

Experimental demonstration of magnetic tunnel junction-based computational random-access memory

  • Yang Lv 1 ,
  • Brandon R. Zink 1 ,
  • Robert P. Bloom 1 ,
  • Hüsrev Cılasun 1 ,
  • Pravin Khanal 2 ,
  • Salonik Resch 1 ,
  • Zamshed Chowdhury 1 ,
  • Ali Habiboglu 2 ,
  • Weigang Wang 2 ,
  • Sachin S. Sapatnekar 1 ,
  • Ulya Karpuzcu 1 &
  • Jian-Ping Wang 1  

npj Unconventional Computing, volume 1, Article number: 3 (2024)

  • Computational science
  • Electrical and electronic engineering
  • Electronic and spintronic devices
  • Magnetic devices

The conventional computing paradigm struggles to fulfill the rapidly growing demands from emerging applications, especially those for machine intelligence because much of the power and energy is consumed by constant data transfers between logic and memory modules. A new paradigm, called “computational random-access memory (CRAM),” has emerged to address this fundamental limitation. CRAM performs logic operations directly using the memory cells themselves, without having the data ever leave the memory. The energy and performance benefits of CRAM for both conventional and emerging applications have been well established by prior numerical studies. However, there is a lack of experimental demonstration and study of CRAM to evaluate its computational accuracy, which is a realistic and application-critical metric for its technological feasibility and competitiveness. In this work, a CRAM array based on magnetic tunnel junctions (MTJs) is experimentally demonstrated. First, basic memory operations, as well as 2-, 3-, and 5-input logic operations, are studied. Then, a 1-bit full adder with two different designs is demonstrated. Based on the experimental results, a suite of models has been developed to characterize the accuracy of CRAM computation. Scalar addition, multiplication, and matrix multiplication, which are essential building blocks for many conventional and machine intelligence applications, are evaluated and show promising accuracy performance. With the confirmation of MTJ-based CRAM’s accuracy, there is a strong case that this technology will have a significant impact on power- and energy-demanding applications of machine intelligence.
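For readers unfamiliar with the 1-bit full adder mentioned in the abstract, here is a generic, truth-table-level sketch in Python; it only illustrates the logic function itself and is not the MTJ-based gate implementation demonstrated in the paper.

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Return (sum, carry_out) for one-bit inputs a, b and carry-in cin."""
    s = a ^ b ^ cin                          # sum bit
    cout = (a & b) | (a & cin) | (b & cin)   # carry-out is a 3-input majority
    return s, cout

# Exhaustive check of the truth table.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```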


Introduction

Recent advances in machine intelligence 1 , 2 for tasks such as recommender systems 3 , speech recognition 4 , natural language processing 5 , and computer vision 6 , have been placing growing demands on our computing systems, especially for implementations with artificial neural networks. A variety of platforms are used, from general-purpose CPUs and GPUs 7 , 8 , to FPGAs 9 , to custom-designed accelerators and processors 10 , 11 , 12 , 13 , to mixed- or fully- analog circuits 14 , 15 , 16 , 17 , 18 , 19 , 20 . Most are based on the Von Neumann architecture, with separate logic and memory systems. As shown in Fig. 1a , the inherent segregation of logic and memory requires large amounts of data to be transferred between these modules. In data-intensive scenarios, this transfer becomes a major bottleneck in terms of performance, energy consumption, and cost 21 , 22 , 23 . For example, the data movement consumes about 200 times the energy used for computation when reading three 64-bit source operands from and writing one 64-bit destination operand to an off-chip main memory 21 . This bottleneck has long been studied. Research aiming at connecting logic and memory more closely has led to new computation paradigms.

Figure 1

a , b Compared to a conventional computer architecture ( a ), which suffers from the memory-logic transfer bottleneck, CRAM ( b ) offers significant power and performance improvements. Its unique architecture allows for computation in memory, as well as random access, reconfigurability, and parallel operation capability. c The CRAM could excel in data-intensive, memory-centric, or power-sensitive applications, such as neural networks, image processing, or edge computing.

Promising paradigms include “near-memory” and “in-memory” computing. Near-memory processing brings logic physically closer to memory by employing 3D-stacked architectures 24 , 25 , 26 , 27 , 28 , 29 . In-memory computing scatters clusters of logic throughout or around the memory banks on a single chip 14 , 15 , 16 , 17 , 18 , 19 , 20 , 30 , 31 , 32 , 33 , 34 , 35 . Yet another approach is to build systems where the memory itself can perform computation. This has been dubbed “true” in-memory computing 36 , 37 , 38 , 39 , 40 , 41 , 42 . The computational random-access memory (CRAM) 38 , 40 is one of the true in-memory computing paradigms. Logic is performed natively by the memory cells; the data for logic operations never has to leave the memory (Fig. 1b ). Additionally, CRAM operates in a fully digital fashion, unlike most other reported in-memory computing schemes 14 , 15 , 16 , 17 , 18 , 19 , 20 , which are partially or mostly analog. CRAM promises superior energy efficiency and processing performance for machine intelligence applications. It has unique additional features, such as random-access of data and operands, massive parallel computing capabilities, and reconfigurability of operations 38 , 40 . Also note that although the transistor-less (crossbar) architecture employed by most of the previous true-in-memory computing paradigms 36 , 37 , 39 , 42 allows for higher density, the maximum allowable size of the memory array is often severely limited due to the sneak path issues. CRAM includes transistors in each of its cells for better-controlled electrical accessibility and, therefore, a larger array size.

The CRAM was initially proposed based on the MTJ device 38 , an emerging memory device that relies on spin electronics 43 . Such “spintronic” devices, along with other emerging non-volatile memory devices (collectively referred to as “X” in this context), have been intensively investigated over the past several decades for memory and computing applications as “beyond-CMOS” and/or “CMOS + X” technologies. They promise substantial improvements in speed, energy efficiency, area, and cost. An additional feature that is exploited by CRAM is their non-volatility 44 . The MTJ is the most mature spintronic device for embedded memory applications, based on its endurance 45 , energy efficiency 46 , and speed 47 . We note that CRAM can be implemented not only with spintronic devices but also with other emerging non-volatile memory devices.

In its simplest form, an MTJ consists of a thin tunneling barrier layer sandwiched between two ferromagnetic (FM) layers. When a voltage is applied between the two layers, electrons tunnel through the barrier, resulting in a charge current. The resistance of the MTJ is a function of the magnetic state of the two FM layers, due to the tunneling magnetoresistance (TMR) effect 48 , 49 , 50 . An MTJ can be engineered to be magnetically bi-stable. Accordingly, it can store information based on its magnetic state. This information can be retrieved by reading the resistance of the device. The MTJ can be electrically switched from one state to the other with a current due to the spin-transfer torque (STT) effect 51 , 52 . In this way, an MTJ can be used as an electrically operated memory device with both read and write functionality. A type of random-access memory, the STT-MRAM 53 , 54 , 55 , 56 has been developed commercially, utilizing MTJs as memory cells. A typical STT-MRAM consists of an array of bit cells, each containing one transistor and one MTJ. These are referred to as 1 transistor 1 MTJ (1T1M) cells.
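As a concrete illustration of how an MTJ's bi-stable resistance encodes a bit, the short sketch below uses assumed resistance values (not measured parameters from this work) and a simple midpoint threshold for read-out; the 100% TMR ratio is chosen only because it is representative of the devices described later.

```python
# Minimal sketch of MTJ-based bit storage and read-out.  The resistance values
# and the midpoint threshold are illustrative assumptions, not device data.

R_P = 2.0e3                 # parallel (P) state resistance in ohms -> logic '0'
TMR = 1.0                   # TMR ratio of 100%, defined as (R_AP - R_P) / R_P
R_AP = R_P * (1 + TMR)      # anti-parallel (AP) state resistance -> logic '1'

def read_bit(resistance, threshold=(R_P + R_AP) / 2):
    """Read-out: compare the measured resistance against a midpoint threshold."""
    return 1 if resistance > threshold else 0

assert read_bit(R_P) == 0 and read_bit(R_AP) == 1
```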

A typical CRAM cell design, as shown in Fig. 2a , is a modification of the 1T1M STT-MRAM architecture 57 . The MTJ, one of the transistors, the word line (WL), the bit select line (BSL), and the memory bit line (MBL) resemble the 1T1M cell architecture of STT-MRAM, which allows the CRAM to perform memory operations. To enable logic operations, a second transistor, as well as a logic line (LL) and a logic bit line (LBL), are added to each memory cell. During a logic operation, the corresponding transistors and lines are manipulated so that several MTJs in a row are temporarily connected to a shared LL 40 . While the LL is left floating, voltage pulses are applied to the lines connecting to the input MTJs, with that of the output MTJ being grounded. The logic operation is based on a working principle called voltage-controlled logic (VCL) 58 , 59 , which utilizes the thresholding effect that occurs when switching an MTJ and the TMR effect of the MTJ. As shown in Fig. 2b , when a voltage is applied across the input MTJs, their different resistance values result in different current levels. The current flows through the output MTJ, which may or may not switch its state, depending on the states of the input MTJs. In this way, basic bitwise logic operations, such as AND, OR, NAND, NOR, and MAJ, can be realized. A unique feature of VCL is that the logic operation itself does not require the data in the input MTJs to be read out through sense amplifiers at the edge of the array. Rather, the data is used locally within the set of MTJs involved in the computation. This is fundamentally why CRAM computation represents true in-memory computing: the computation does not require data to travel out of the memory array; it is always processed locally by nearby cells. We note that this concept would also work with other two-terminal stateful passive memory devices, such as memristors, so a CRAM could be implemented with such devices. A CRAM could also be implemented with three-terminal stateful devices, such as spin-orbit torque (SOT) devices, which could result in greater energy efficiency and reliability 60 . Although devices with progressive or accumulative switching behavior, such as spintronic domain wall devices 61 , 62 , can be adopted as well, CRAM works best with bi-stable memory devices that exhibit strong threshold switching behavior. As an oversimplified speculation, the performance comparison between CRAMs implemented with various emerging memory devices is expected to roughly follow the comparison between those devices for memory applications, since CRAM utilizes memory devices in a manner similar to how they are used as memory. For example, a CRAM implemented with MTJs should be expected to offer high endurance and high speed. Also, a CRAM logic operation should generally consume energy comparable to that of a memory write operation, for the same emerging memory device operating at the same speed. However, a careful case-by-case analysis is necessary for CRAMs implemented with each emerging memory device technology. Also note that we do not show a specific circuit design for the CRAM peripherals because CRAM does not require significant changes to the sense amplifiers or other peripheral circuits compared to 1T1M STT-MRAM, and those used in STT-MRAM are already common and mature.
Lastly, the true in-memory computing characteristic of CRAM is confined to a single contiguous CRAM array: any computation that requires access to data across separate CRAM arrays will require additional data access and movement. The size of an array is ultimately limited by the parasitic effects of interconnects 63 . However, these limitations apply to all other in-memory computing paradigms as well, so CRAM is not at a disadvantage in this respect.

Figure 2

a CRAM adopts the so-called 2 transistor 1 MTJ (2T1M) cell architecture. On top of the 1T1M cell architecture of STT-MRAM, an additional transistor, as well as the added logic line (LL) and logic bit line (LBL), allow the CRAM to perform logic operations. During a CRAM logic operation, the transistors and lines are manipulated to form an equivalent circuit, as shown in b . Although CRAM can be built based on various emerging memory devices, we use MTJs and MTJ-based CRAM as an example for illustration purposes. b The working principle of CRAM logic operation, the VCL, utilizes the thresholding effect that occurs when switching an MTJ and the TMR effect of the MTJ. With an appropriate V logic amplitude, the voltage is translated into different currents flowing through the output MTJ by the TMR effect of the input MTJs. Whether the output MTJ switches or not is dependent on the state of the input MTJs.

On top of the potential performance benefits that the emerging memory devices bring, at the circuit and architecture level, CRAM fundamentally provides several benefits (Fig. 1b ): (1) the elimination of the costly performance and energy penalties associated with transferring data between logic and memory; (2) random access of data for the inputs and outputs of operations; (3) the reconfigurability of operations, as any of the logic operations AND, OR, NAND, NOR, and MAJ can be programmed; and (4) the performance gain of massive parallelism, as identical operations can be performed in parallel in each row of the CRAM array when data is allocated properly. Based on analysis and benchmarking, CRAM has the potential to deliver significant gains in performance and power efficiency, particularly for data-intensive, memory-centric, or power-sensitive applications, such as bioinformatics 40 , 64 , 65 , image 66 and signal 67 processing, neural networks 66 , 68 , and edge computing 69 (Fig. 1c ). For example, a CRAM-based machine-learning inference accelerator was estimated to achieve an improvement on the order of 1000× over a state-of-the-art solution, in terms of the energy-delay product 70 . Another example shows that CRAM (at the 10 nm technology node) requires 0.47 µJ of energy and 434 ns of time to perform an MNIST handwritten digit classification task, which is 2500× less energy and 1700× less time than a near-memory processing system at the 16 nm technology node 66 . And yet, to date, there have been no experimental studies of CRAM.
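To make benefit (4), row-level parallelism, more concrete, the following is a minimal software sketch (an assumption of this text, not the authors' implementation) of the idea that a single logic instruction selects two input columns and one output column and is evaluated by every row of the array in one step; the array size and column indices are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative sketch of CRAM-style row parallelism: one NAND instruction over
# two chosen input columns is evaluated by every row simultaneously.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(8, 16))          # an 8-row x 16-column bit array

def rowwise_nand(array, col_a, col_b, col_out):
    """Apply NAND to columns col_a and col_b of every row, writing col_out."""
    array[:, col_out] = 1 - (array[:, col_a] & array[:, col_b])

rowwise_nand(bits, col_a=0, col_b=1, col_out=2)  # eight NANDs in one "step"
```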

In this work, we present the first experimental demonstration of a CRAM array. Although based on a small 1 × 7 array, it successfully shows complete CRAM array operations. We illustrate computation with a 1-bit full adder. This work provides a proof-of-concept as well as a platform with which to study key aspects of the technology experimentally. We provide detailed projections and guidelines for future CRAM design and development. Specifically, based on the experimental results, models and calculations of CRAM logic operations are developed and verified. The results connect the CRAM gate-level accuracy or error rate to MTJ TMR ratio, logic operation pulse width, and other parameters. Then we evaluate the accuracy of a multi-bit adder, a multiplier, and a matrix multiplication unit, which are essential building blocks for many conventional and machine intelligence applications, including artificial neural networks.

Experiments

Figure 3 shows the experimental setup, consisting of both hardware and software. The hardware is built with a so-called ‘circuit-around-die’ approach 71 : semiconductor circuitry is built with commercially available components around the MTJ dies. This approach offers a more rapid development cycle and the flexibility needed for exploratory experimental studies on CRAM arrays and potential new MTJ technologies, since major foundries do not yet offer a process design kit for fabricating a CRAM array fully integrated with CMOS. The hardware is a 1 × 7 CRAM array, with the design of cells taken from the 2T1M CRAM cells 38 , 40 , modified for simplified memory access. Software on a PC controls the operation. It communicates with the hardware with basic commands: ‘open/close transistors’; ‘apply voltage pulses’ to perform write and logic operations; and ‘read cell resistance’. The software collects real-time measurements of the data associated with CRAM operations for analysis and visualization. All aspects of the 1 × 7 CRAM array are functional: memory write, memory read, and logic operations (more details in the Methods section and Supplementary Note S 1 ).

Figure 3

The setup consists of custom-built hardware and a suite of control software. It demonstrates a fully functioning 1 × 7 CRAM array. The hardware consists of a main board hosting all necessary electronics except for the MTJ devices; a connection board on which passive switches, connectors, and magnetic bias field mechanisms are hosted; and multiple cartridge boards, each with a mounted MTJ array and multiple wire-bonded MTJ devices. The gray-scale scanning electron microscopy image shows the MTJ array used. The color optical photographs show the cartridge board and the entire hardware setup. The software is responsible for real-time measurements of the MTJs; configuration and execution of CRAM operations: memory write, memory read, and logic; and data collection. It is run on a PC, which communicates wirelessly with the main board.

MTJs with perpendicular interfacial anisotropy are used in the CRAM. They exhibit low resistance-area (RA) product and high TMR ratio—approximately 100%—when sized at 100 nm in diameter (more details in Supplementary Note S 2 ).

Device properties and CRAM memory operations

The experiments begin with measuring the resistance (R)–voltage (V) properties of each MTJ device and of each die. In order to compensate for device-to-device variations, the bias magnetic field for each MTJ is adjusted so that the R–V properties are as close to each other as possible (more details in Supplementary Note S 2 ). As the processes for making CRAM arrays mature, bias magnetic fields are expected to no longer be needed, and all CRAM cells should be operable with uniform parameters and under uniform conditions. The resistance threshold separating the MTJ logic states is also determined at this stage.

Then the seven MTJ cells are tested for memory operations with various write pulse amplitudes and widths. Based on the observed write error rates, appropriate pulse amplitudes and widths are configured, achieving reliable memory write operations with an average write error rate of less than 1.5 × 10 −4 (more details in Supplementary Note S 3 ). We assign logic ‘0’ and ‘1’ to the parallel (P) low-resistance state and the anti-parallel (AP) high-resistance state of the MTJ, respectively.

CRAM logic operations

Two-input logic operations are studied. The output cell is first initialized by writing ‘0’ to it. Then two input cells are connected to the output cell through the LL by turning on the corresponding transistors. Voltage pulses with amplitudes of V logic , V logic , and 0 are simultaneously applied to the two input cells and the output cell, respectively. This is equivalent to grounding the output cell while applying a voltage pulse of V logic to the two input cells. Then, depending on the input cells’ states, the output cell will have a certain probability of being switched from ‘0’ to ‘1’. Such a cycle of operations is repeated n times, and the statistical mean of the output logic state, < D out >, is obtained. The entire process is repeated for different V logic values and input states. The basis for logic operations in the CRAM is the state-dependent resistance of the input cells, which shifts the output cell’s switching probability transfer curve. As a result, the output cell switches state only for specific input states, thereby implementing a logic function such as AND, OR, NAND, NOR, or MAJ. A specific combination of the output cell’s initial state and the V logic value corresponds to one of these logic gates 66 . The time duration, or pulse width, of the voltage pulse applied during a logic operation is expected to account for most of the time required to complete a logic operation. In the following, we use the term logic speed to refer generally to the speed of a logic operation. Logic speed is approximately inversely proportional to the pulse width of the voltage pulse used during a logic operation.
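The following sketch illustrates the VCL principle for a 2-input operation under assumed parameters: illustrative P/AP resistances, a fixed V logic of 0.620 V, and a single deterministic switching threshold standing in for the actual stochastic switching probability. None of these numbers are fitted device parameters; they are chosen only so that the ‘00’, ‘01’/‘10’, and ‘11’ input states fall on either side of the threshold, which is the NAND regime discussed below.

```python
# Sketch of voltage-controlled logic (VCL) for a 2-input operation.  The output
# cell starts in the '0' (low-resistance P) state and is grounded; the inputs
# see the V_logic pulse.  All values below are illustrative assumptions.

R_P, R_AP = 2.24e3, 4.07e3   # assumed P / AP resistances in ohms
V_SW = 0.35                  # assumed switching threshold of the output cell, volts

def r_cell(bit):
    return R_AP if bit else R_P

def output_voltage(in_a, in_b, v_logic):
    """Voltage dropped across the grounded output cell (simple divider)."""
    r_in = 1.0 / (1.0 / r_cell(in_a) + 1.0 / r_cell(in_b))   # inputs in parallel
    r_out = R_P                                              # output initialized to '0'
    return v_logic * r_out / (r_in + r_out)

for a, b in [(0, 0), (0, 1), (1, 1)]:
    v_out = output_voltage(a, b, v_logic=0.620)
    d_out = int(v_out > V_SW)   # deterministic stand-in for the switching probability
    print(f"inputs {a}{b}: V_out = {v_out:.3f} V -> output '{d_out}'")
# Prints outputs 1, 1, 0 for inputs 00, 01, 11: the NAND truth table.
```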

The experimental results are shown in Fig. 4 a, b . Generally, for a given input state, < D out > increases with increasing V logic . The < D out > response curves are input state-dependent. The four input states can be divided into three groups:

The ‘00’ input state yields the lowest resistance at the two input cells, so the output cell switches from ‘0’ to ‘1’ first (with the lowest V logic ).

The ‘11’ input state yields the highest resistance at the two input cells, so the output cell switches from ‘0’ to ‘1’ last (with the highest V logic ).

The ‘01’ and ‘10’ input states both yield a resistance that falls in between those of ‘00’ and ‘11’, so the output cell’s response curve falls in between those of ‘00’ and ‘11’.

Figure 4

a Output logic average, < D out >, vs. logic voltage, V logic . In a 2-input logic operation, two input cells and one output MTJ cell are involved. The output cell’s terminal is grounded, while the common line is left floating. A logic operation voltage pulse is applied to the two input cells’ terminals for a fixed duration (pulse width) of 1 ms. Before each logic operation, input data is written to the input cells. After each logic operation, the output cell’s state is read. Each curve corresponds to a specific input state. Each data point represents the statistical average of the output cell’s logic state, < D out >, sampled over 1000 repeats ( n  = 1000) of the operations. The separation between the < D out > curves indicates the margins for NOR or NAND operation, highlighted in blue and red, respectively. b Accuracy of the 2-input NAND operation vs. logic voltage, V logic . The results in a can be converted into a more straightforward metric, accuracy, for the NAND truth table. The curves labeled ‘mean’ and ‘worst’ indicate the average and the worst-case accuracy across all input states, respectively. For NAND operation, the optimal logic voltage is the one at which the mean or worst-case accuracy is maximized. c , d Accuracy of MAJ3 ( c ) and MAJ5 ( d ) logic operations vs. logic voltage, V logic . Each curve corresponds to an input state or a group of input states. Each data point represents the statistical average of the output MTJ logic state sampled over n  = 1000 and n  = 250 repeats, for c and d , respectively.

Figure 4a shows the experimental results. The two regions highlighted in blue and red that fall in between the three groups of response curves are suitable for NOR and NAND operations, respectively. For example, in the red region, the ‘11’ input has a high probability of yielding a ‘0’ output, while the other three input states have a high probability of yielding a ‘1’ output. This matches the expected truth table for a NAND logic gate. Therefore, if V logic is chosen carefully—within the red region for the CRAM 2-input logic operation—the operation performed is highly likely to be NAND.

The experimental results of < D out > can be converted into a straightforward format representing the accuracy for a specified logic function. This conversion is computed by simply subtracting < D out > from 1 for those input states where a ‘0’ output is expected in the truth table of the logic function. Figure 4b shows the NAND accuracy of the same 2-input CRAM logic operation. The ‘mean’ and ‘worst’ plots are based on the average value and the minimum value of the accuracy, respectively, across all input state combinations at a fixed value of V logic . Based on the experimental results, if V logic  = 0.624 or 0.616 V, the CRAM delivers a NAND operation with a best mean and a worst-case accuracy of about 99.4% and 99.0%, respectively. From a circuit perspective, increasing the effective TMR ratio of the input cells and/or making the output cell’s response curve steeper would increase the vertical separation of these input state-dependent curves, resulting in higher accuracy. For example, a higher effective TMR ratio of the input cells results in a larger contrast of the current in the output cell between different input states. Therefore, there is more ‘horizontal’ room to separate the < D out > curves associated with different input states, so that for the inputs with which the output is expected to be ‘0’ or ‘1’, the < D out > of the output cell is closer to the expected value (‘0’ or ‘1’). Also note that for a logic operation, ‘accuracy’ and ‘error rate’ are essentially two quantities describing the same thing: how true the logic operation is, statistically. By definition, the sum of the accuracy and the error rate is always 1. The higher, or closer to 1, the accuracy is, the better; the lower, or closer to 0, the error rate is, the better. Lastly, to facilitate better visualization of how the resistance changes of different input cell states are translated into voltage differences on the output cell, resulting in it being switched or unswitched, we list the equivalent resistance of the two input cells combined in parallel and the resulting voltage on the output cell: with V logic  = 0.620 V, they are 1120 Ω and 0.4133 V, 1461 Ω and 0.3753 V, and 2037 Ω and 0.3248 V, for input states ‘00’, ‘01’ or ‘10’, and ‘11’, respectively. Note that these values are estimated by the experiment-based modeling, which is introduced in a later part of this paper.
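The conversion from < D out > to NAND accuracy described above can be written out explicitly. The < D out > values in this sketch are hypothetical placeholders (not the measured data), inserted only to show how the ‘mean’ and ‘worst’ accuracies of Fig. 4b are obtained from the per-input-state curves at a fixed V logic .

```python
# Converting <D_out> into NAND accuracy: for input states whose expected NAND
# output is '1', accuracy = <D_out>; for the state expecting '0', accuracy is
# 1 - <D_out>.  The <D_out> numbers below are hypothetical placeholders.

d_out = {"00": 0.999, "01": 0.995, "10": 0.993, "11": 0.008}   # at one V_logic
nand_expected = {"00": 1, "01": 1, "10": 1, "11": 0}           # NAND truth table

accuracy = {s: d_out[s] if nand_expected[s] else 1.0 - d_out[s] for s in d_out}
mean_acc = sum(accuracy.values()) / len(accuracy)
worst_acc = min(accuracy.values())
print(accuracy)              # per-input-state accuracy
print(mean_acc, worst_acc)   # the 'mean' and 'worst' values at this V_logic
```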

Going beyond two inputs, we also studied 3-input and 5-input majority logic operations. Figure 4c shows the accuracy of a 3-input MAJ3 logic operation. At V logic  = −0.464 V, the optimal mean and the worst-case accuracy are observed to be 86.5% and 78.0%, respectively. Similarly, for a 5-input MAJ5 logic operation, shown in Fig. 4d , the optimal mean and the worst-case accuracy are observed to be 75% and 56%, respectively. As expected, comparing 2-input, 3-input, and 5-input logic operations, the accuracy decreases with an increasing number of inputs (more discussions and explanations in Supplementary Note S 4 ).

CRAM full adder

Having demonstrated fundamental elements of CRAM—memory write operations, memory read operations, and logic operations—we turn to more complex operations. We demonstrate a 1-bit full adder. This device takes two 1-bit operands, A and B, as well as a 1-bit carry-in, C, as inputs, and outputs a 1-bit sum, S, and a 1-bit carry-out, C out . A variety of implementations exist. We investigate two common designs: (1) one that uses a combination of majority and inversion logic gates, which we will refer to as a ‘MAJ + NOT’ design; and (2) one that uses only NAND gates, which we will refer to as an ‘all-NAND’ design. Figures 5 a and 5b illustrate these designs. Supplementary Note S 5 provides more details.
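For reference, the sketch below gives ideal (error-free) versions of the two designs, using standard textbook constructions consistent with the description: C out as a 3-input majority and S as a 5-input majority that re-uses the inverted carry twice for the ‘MAJ + NOT’ design, and the classic nine-gate NAND full adder for the ‘all-NAND’ design. The exact gate-level schedules mapped onto the CRAM are given in Supplementary Note S5; these functions are assumed stand-ins, useful only for checking the truth tables.

```python
# Ideal-logic sketches of the two 1-bit full adder designs (assumed textbook
# constructions, not the exact CRAM gate schedules of Supplementary Note S5).

def maj(*bits):                 # majority vote over an odd number of bits
    return int(sum(bits) > len(bits) // 2)

def nand(a, b):
    return 1 - (a & b)

def full_adder_maj_not(a, b, cin):
    cout = maj(a, b, cin)                        # MAJ3
    s = maj(a, b, cin, 1 - cout, 1 - cout)       # MAJ5 using the inverted carry twice
    return s, cout

def full_adder_all_nand(a, b, cin):
    n1 = nand(a, b)
    n4 = nand(nand(a, n1), nand(b, n1))          # n4 = a XOR b
    n5 = nand(n4, cin)
    s = nand(nand(n4, n5), nand(cin, n5))        # sum bit
    cout = nand(n1, n5)                          # carry-out (nine NANDs in total)
    return s, cout

# Both designs reproduce the full-adder truth table for all eight input states.
for x in range(8):
    a, b, c = (x >> 2) & 1, (x >> 1) & 1, x & 1
    expected = ((a + b + c) & 1, (a + b + c) >> 1)
    assert full_adder_maj_not(a, b, c) == expected
    assert full_adder_all_nand(a, b, c) == expected
```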

Figure 5

a , b Illustrations of the ‘MAJ + NOT’ and ‘all-NAND’ 1-bit full adder designs. Green and orange letter symbols indicate input and output data for the full adder, respectively. From left to right, numbered by ‘logic step,’ each drawing shows the intended input (green rectangle) and output (orange rectangle) cells involved in the logic operation. The text in purple under each drawing indicates the intended function of the logic operation (MAJ3, NAND, or MAJ5). c – f Experimental ( c , d ) and simulation ( e , f ) results of the output accuracy of 1-bit full adder operations by CRAM with the MAJ + NOT ( c , e ) and all-NAND ( d , f ) designs. The CRAM adder’s outputs, S and C out , are assessed against the expected values, i.e., their truth table, for all input states of A, B, and C. The accuracy of each result for each input state is shown by the numerical value in black font, as well as by the color of the box, with red (or blue) indicating wrong (or correct), i.e., an accuracy of 0% (100%). The accuracy is calculated based on the statistical average of outputs obtained by repeating the full adder execution n times, for n  = 10,000. The experimental results for the MAJ + NOT ( c ) and all-NAND ( d ) designs are obtained by repeatedly executing the operation for all input states and observing the output states. The simulation results for the MAJ + NOT ( e ) and all-NAND ( f ) designs are obtained with probabilistic modeling, using Monte Carlo methods. The accuracy of individual logic operations is set to what was observed experimentally.

Figure 5c–f shows the experimental and simulation results for the MAJ + NOT and the all-NAND designs. Each plot is a colormap that lists the accuracy of the output bits S and C out , with each input state coded as [ABC]. Blue (red) indicates good/desired (bad/undesired) accuracy; in the colormap boxes, results in saturated blue are the most desirable. The numerical values of the accuracy are also labeled accordingly. From the experimental results for the MAJ + NOT full adder shown in Fig. 5c , we make two observations:

The accuracy of C out is generally higher than that of S. This is because C out is directly produced by the first MAJ3 operation from inputs A, B, and C, while S is produced after additional logic operations. We also note that since C out is produced earlier than S, it is less impacted by error propagation and accumulation during each step; and the MAJ5 involved in producing S is inherently less accurate than the MAJ3.

Both C out and S have higher accuracy when the input [ABC] = 000 or 111 than in the other cases. This is expected since the input states of all ‘0’s and all ‘1’s yield higher accuracy than those with mixed numbers of ‘0’s and ‘1’s for both MAJ3 and MAJ5.

The experimental results for the all-NAND design are shown in Fig. 5d . The same observations regarding accuracy vs. inputs as the MAJ + NOT design apply. However, it is clear that the accuracy of the all-NAND full adder, at 78.5%, is higher than that of the MAJ + NOT full adder, at 63.8%. This is likely due to the fact that 2-input NAND operations are inherently more accurate than MAJ3 and MAJ5 operations. This offsets the impact of the additional steps required in the all-NAND design. We note that the accuracy of all computation blocks will improve as the underlying MTJ technology evolves. Accordingly, the relative accuracy of the all-NAND versus the MAJ + NOT designs may change 66 .

Modeling and analysis of CRAM logic accuracy

To understand the origin of errors, how they accumulate, and how they propagate, we performed numerical simulations of the full adder designs. These are based on probabilistic models of logic operations, implemented using Monte Carlo methods. Figure 5 e, f shows the simulation results for the MAJ + NOT and all-NAND designs, respectively. In these simulations, the accuracy of individual logic operations was set to match what was experimentally observed. The simulation results for the overall designs of the full adders correspond well to what was observed experimentally for these, confirming the validity of the proposed probabilistic models (more details in the Methods section and Supplementary Note S 6 ).
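A minimal version of this probabilistic Monte Carlo approach is sketched below for the all-NAND design. Each NAND is replaced by a gate that returns the correct output only with a per-input-state accuracy; the accuracy values here are assumed placeholders rather than the experimentally measured ones, and the gate order is the textbook nine-NAND construction rather than the exact CRAM schedule.

```python
import random

# Monte Carlo sketch of the probabilistic model of the all-NAND full adder.
# Per-input-state NAND accuracies are assumed placeholders, not measured data.
NAND_ACCURACY = {(0, 0): 0.996, (0, 1): 0.994, (1, 0): 0.994, (1, 1): 0.992}

def noisy_nand(a, b, rng):
    correct = 1 - (a & b)
    return correct if rng.random() < NAND_ACCURACY[(a, b)] else 1 - correct

def noisy_all_nand_adder(a, b, cin, rng):
    n1 = noisy_nand(a, b, rng)
    n4 = noisy_nand(noisy_nand(a, n1, rng), noisy_nand(b, n1, rng), rng)
    n5 = noisy_nand(n4, cin, rng)
    s = noisy_nand(noisy_nand(n4, n5, rng), noisy_nand(cin, n5, rng), rng)
    return s, noisy_nand(n1, n5, rng)            # (S, C_out)

rng = random.Random(0)
a, b, cin, n = 1, 0, 1, 10_000
hits_s = hits_c = 0
for _ in range(n):
    s, cout = noisy_all_nand_adder(a, b, cin, rng)
    hits_s += s == ((a + b + cin) & 1)
    hits_c += cout == ((a + b + cin) >> 1)
print(f"input {a}{b}{cin}: S accuracy ~{hits_s / n:.3f}, C_out accuracy ~{hits_c / n:.3f}")
```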

We note that beyond the inherent inaccuracy of logic operations, other factors such as device drift and device-to-device variation in MTJ devices will contribute to error in a CRAM. Specifically, drifts in temperature, external magnetic field, MTJ anisotropy, and MTJ resistance can lead to drift of the response curve, < D out >. Most likely, any such drift will result in a reduction (increase) of accuracy (error rate). More discussion regarding device-to-device variation is provided in Supplementary Note S 7 .

On the other hand, the accuracy of logic operations will significantly benefit from improvements in TMR ratio as MTJ technology evolves. To project the future accuracy of CRAM operations, we employ various types of physical modeling informed by existing experimental results (more details are provided in the Methods section and Supplementary Note S 8 ).

Three sets of assumptions on the accuracies (or error rates) of NAND logic operations underlie the following studies.

The ‘experimental’ assumptions are based on the best accuracy experimentally observed among the 9 NAND steps involved in the all-NAND 1-bit full adder. These accuracies are adjusted linearly to ensure that the error for inputs ‘01’ and ‘10’ equals that for input ‘11’. In reality, as supported by the experimental results shown in Fig. 4a , such a condition can be reached by properly tuning V logic . Therefore, assuming the gate-level error rate has already been optimized by tuning V logic , the per-input-state NAND accuracies can be further simplified so that a single error rate, δ (0 ≤  δ  ≤ 1), characterizes the error, accuracy, and probabilistic truth table of NAND operations in a CRAM. The NAND accuracy is [1, 1–δ, 1–δ, 1–δ], and the NAND probabilistic truth table is [1, 1– δ , 1– δ , δ ], both being functions of δ. Through the above-mentioned modeling and calculations, the ‘experimental’ assumptions yield δ  = 0.0076, which corresponds to a TMR ratio of approximately 109%, based on experiments.

Two additional sets of assumptions, labeled ‘production’ and ‘improved’, assume MTJ TMR ratios of 200% and 300%, respectively. These two assumptions yield δ  = 2.1 × 10 −4 and δ  = 7.6 × 10 −6 , respectively, based on modeling and calculations. The ‘production’ assumptions represent current industry-level TMR ratios developed for STT-MRAM technologies. The ‘improved’ assumptions represent reasonable expectations for near-future MTJ developments.

CRAM NAND error rates vs. TMR ratio with various logic voltage pulse widths are shown in Fig. 6a . Higher TMR ratios and faster logic speed—so shorter V logic pulse widths—lead to smaller error rates. Further details can be found in Supplementary Note S 8 and in Supplementary Figure S 5 . Also included is an analysis of error rates vs. effective TMR ratio, which is independent of the specific TMR modeling. Note that, for all subsequent results, we will use the NAND error rate at the assumed TMR ratios, with pulse widths of 1 ms. This is more conservative but is consistent with the pulse widths used in the experimental results reported above.

Figure 6

a NAND gate minimum error rate vs. MTJ TMR ratio with various V logic pulse widths. For a given TMR ratio, the error rate is a function of V logic . So, the ‘minimum error rate’ represents the minimum error rate achievable with an appropriate V logic value. All subsequent studies use the error rates observed with 1 ms pulse widths (to be consistent with the earlier experimental studies) at assumed TMR ratios. b The NED error of a 4-bit dot-product matrix multiplier vs. TMR ratio. TMR ratios of 109%, 200%, and 300% are adopted for the ‘experimental,’ ‘production,’ and ‘improved’ assumptions, respectively. The size of the input matrix is indicated in the legend of the plot.

Analysis of CRAM multi-bit adder, multiplier, and matrix multiplier

With these defined sets of assumptions, we provide projections of CRAM accuracy at a larger scale for meaningful applications. First, we evaluate ripple-carry adders and array multipliers 72 operating on scalar operands with up to 6 bits. To evaluate the results, we adopt the normalized error distance (NED) metric 73 to represent the error of these primitives, as it has been shown to be well suited to arithmetic primitives in the presence of computational error. We refer to the error for a given primitive as the ‘NED error’. We also define a complementary metric, the ‘NED accuracy’, as the NED subtracted from 1 and then multiplied by 100%, to facilitate a more intuitive visualization of the error values. While the ‘experimental’ assumptions with a TMR ratio of 109% yield good overall accuracy for adders and multipliers, the ‘production’ assumptions with a TMR ratio of 200% and the ‘improved’ assumptions with a TMR ratio of 300% yield significantly higher accuracies. Specifically, a 4-bit adder produces NED error of 2.8 × 10 −2 , 8.6 × 10 −4 , and 3.3 × 10 −5 , or NED accuracy of 97.2%, 99.914%, and 99.9967%, for the ‘experimental’, ‘production’, and ‘improved’ assumptions, respectively. A 4-bit multiplier produces NED error of 5.5 × 10 −2 , 1.8 × 10 −3 , and 6.6 × 10 −5 , or NED accuracy of 94.5%, 99.82%, and 99.9934%, for the three sets of assumptions, respectively. As expected, since the multiplier is more complex and involves more gates than the adder, its accuracy is generally lower. Similarly, as the bit width of the adder or multiplier increases, their accuracy decreases. Further details and results with bit widths up to 6 bits are provided in the Methods section and in Supplementary Note S 9 .
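The sketch below shows how such an NED evaluation can be set up for a ripple-carry adder built from δ-parameterized NAND gates (accuracy 1 for input ‘00’ and 1 − δ otherwise, as defined above). The normalization used here, the mean error distance divided by the maximum output value, follows one common convention for NED and is an assumption of this sketch, as is the textbook nine-NAND full adder; with δ = 0.0076 the result comes out on the order of a few percent, in the same ballpark as the ‘experimental’ numbers quoted above.

```python
import random

# NED sketch for an n-bit ripple-carry adder built from delta-parameterized
# NAND gates: accuracy 1 for input '00', 1 - delta for the other input states.
# The NED normalization (mean error distance / maximum output value) and the
# nine-NAND full adder below are assumptions of this sketch.

def noisy_nand(a, b, delta, rng):
    correct = 1 - (a & b)
    if (a, b) != (0, 0) and rng.random() < delta:
        return 1 - correct
    return correct

def noisy_full_adder(a, b, cin, delta, rng):
    g = lambda x, y: noisy_nand(x, y, delta, rng)
    n1 = g(a, b)
    n4 = g(g(a, n1), g(b, n1))                       # a XOR b
    n5 = g(n4, cin)
    return g(g(n4, n5), g(cin, n5)), g(n1, n5)       # (sum bit, carry-out)

def ripple_add(x, y, n_bits, delta, rng):
    carry, out = 0, 0
    for i in range(n_bits):
        s, carry = noisy_full_adder((x >> i) & 1, (y >> i) & 1, carry, delta, rng)
        out |= s << i
    return out | (carry << n_bits)

def ned(n_bits=4, delta=0.0076, trials=20_000, seed=0):
    rng = random.Random(seed)
    max_out = (1 << (n_bits + 1)) - 1                # largest representable output
    total = 0.0
    for _ in range(trials):
        x, y = rng.randrange(1 << n_bits), rng.randrange(1 << n_bits)
        total += abs(ripple_add(x, y, n_bits, delta, rng) - (x + y))
    return total / trials / max_out

print(f"4-bit adder NED ~ {ned():.4f}")              # order of 1e-2 under these assumptions
```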

Then, using these primitives, we evaluate dot-product operations, which form the basis of matrix multiplication. They are heavily employed in many applications in both conventional domains and machine intelligence. Dot products consist of element-wise multiplication of two unsigned integer vectors, followed by addition. We perform additions with binary trees to maintain smaller circuit depth. Figure 6b shows the NED error of a 4-bit 4 × 4 dot-product matrix multiplier with respect to various TMR ratio assumptions. Like the adders and multipliers, as the TMR ratio increases, the NED error decreases, or the NED accuracy improves. Specifically, a 4-bit 4 × 4 dot-product matrix multiplier produces an NED error of 0.11, 3.4 × 10 −3 , and 1.2 × 10 −4 , or NED accuracy of 89%, 99.66%, and 99.988%, for the ‘experimental’, ‘production’, and ‘improved’ assumptions, respectively. Also, when comparing different input sizes (e.g., 1 × 1 to 4 × 4), as expected, the NED error is larger for larger input sizes due to the increased number of gates involved. Further details and results with bit width up to 5-bit are provided in the Methods section and in Supplementary Note S 9 .
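The structure of the dot-product primitive, element-wise multiplication followed by a binary-tree reduction of the partial sums, is sketched below with ideal arithmetic; in the CRAM evaluation each adder and multiplier is replaced by its noisy counterpart. The tree reduction is what keeps the addition depth near log2 of the vector length, which is the reason given above for using it.

```python
# Dot product with a binary-tree reduction of partial sums (ideal arithmetic).
# In the CRAM evaluation each '+' and '*' is realized by a noisy adder/multiplier.

def tree_sum(values):
    """Pairwise reduction; the depth grows as ceil(log2(len(values)))."""
    values = list(values)
    while len(values) > 1:
        paired = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:                      # carry an unpaired element forward
            paired.append(values[-1])
        values = paired
    return values[0]

def dot(u, v):
    return tree_sum(a * b for a, b in zip(u, v))

assert dot([1, 2, 3, 4], [4, 3, 2, 1]) == 20
```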

Discussion

To summarize the experimental work, an MTJ-based 1 × 7 CRAM array was experimentally demonstrated and systematically evaluated. The basic memory write and read operations of CRAM were achieved with high reliability. The study on CRAM logic operations began with 2-input logic operations. It was found that a 2-input NAND operation could be performed with accuracy as high as 99.4%. As the number of input cells was increased, for example, for 3-input MAJ3 and 5-input MAJ5 operations, the accuracy decreased to 86.5% and 75%, respectively. The decrease was attributed to having too many levels corresponding to the input states crowding a limited operating margin. Next, two versions of a 1-bit full adder were experimentally demonstrated using the 1 × 7 CRAM array: an all-NAND version and a MAJ + NOT version. The all-NAND design achieved an accuracy of 78.5%, while the seemingly simpler MAJ + NOT, which involves 3- and 5-input MAJ operations, only achieved an accuracy of 63.8%. Note that although each type of logic operation achieves its optimal accuracy at a specific voltage value, that value is expected to be static or constant. Therefore, only a finite number of power rails is needed to accommodate the logic operations of the CRAM array. Also, if multiple logic pulse durations are allowed by the peripheral design, it is possible to operate the CRAM array with a single set of power rails for both memory write and logic operations.

A probabilistic model was proposed that accounts for the origin of errors, their propagation, and their accumulation during a multi-step CRAM operation. The model was shown to be effective when matched with the experimental results for the 1-bit full adder. The working principles of this model were adopted for the rest of the studies.

A suite of MTJ device circuit models was fitted to the existing experimental data and used to project CRAM NAND gate-level accuracy in the form of error rates. The gate-level error rate was projected to reach 7.6 × 10 −6 under reasonable expectations of TMR ratio improvement as MTJ technology develops. Other device properties, such as the switching probability transfer curve, could also significantly affect the CRAM gate-level error rate. This calls for improvements or new discoveries in the physical mechanisms for memory read-out and memory write. Error is an inherent property of any physical hardware, including CMOS logic components, which are commonly perceived as deterministic. As the development of CRAM proceeds, its gate-level error rate will be further reduced over time. For now, while the error rate of CRAM is still higher than that of CMOS logic circuits, CRAM is naturally more suitable for applications that require less precision but can still benefit from its true in-memory computing features and advantages, rather than those that require high precision and determinism. Additionally, logic operations with many inputs, such as majority, may be desirable in certain scenarios, and yet these were shown to have lower accuracy than 2-input operations, so a tradeoff might exist.

Lastly, building on the experimental demonstration and evaluation of the 1-bit full adder designs, simulation and analysis were performed for larger functional circuits: scalar addition and multiplication up to 6 bits and matrix multiplication up to 5 bits with input size up to 4 × 4. These are essential building blocks for many conventional and machine intelligence applications. The parameters for the simulations were experimentally measured values as well as reasonable projections for future MTJ technology. The results show promising accuracy performance of CRAM at a functional building block level. Furthermore, as device technologies progress, improved performance or new switching mechanisms could further reduce the gate-level error rate of CRAM. Error correction techniques could also be employed to suppress CRAM gate errors.

In summary, this work serves as the first step in experimentally demonstrating the viability, feasibility, and realistic properties of MTJ-based CRAM hardware. Through modeling and simulation, it also lays out the foundation for a coherent view of CRAM, from the device physics level up to the application level. Prior work had established the potential of CRAM through numerical simulation only. Accordingly, there had been considerable interest in the unique features, speed, power, and energy benefits of the technology. This study puts the earlier work on a firm experimental footing, providing application-critical metrics of gate-level accuracy or error rate and linking it to the application accuracy. It paves the way for future work on large-scale applications, in conventional domains as well as new ones emerging in machine intelligence. It also indicates the possibility of making competitive large-scale CMOS-integrated CRAM hardware.

Methods

MTJ fabrication and preparation

The MTJ thin film stacks were grown by magnetron sputtering in a 12-source deposition system with a base pressure of 5 × 10 −9  Torr. The MgO barrier was fabricated by RF sputtering, while all the metallic layers were fabricated by DC sputtering. The stack structure is Si/SiO 2 /Ta(3)/Ru(6)/Ta(4)/Mo(1.2)/Co 20 Fe 60 B 20 (1)/MgO(0.9)/Co 20 Fe 60 B 20 (1.4)/Mo(1.9)/Ta(5)/Ru(7), where numbers in brackets indicate the thickness of the layer in nm. The stack was then annealed at 300 °C for 20 minutes in a rapid thermal annealing system under an Ar atmosphere (more information on the MTJ stack fabrication can be found in refs. 74 , 75 ).

The MTJ stacks were fabricated using three rounds of lithography similar to those described in ref. 76 . First, the bottom contacts were defined using photolithography followed by Ar+ ion milling etching. Then, the MTJ pillars were patterned into 120-nm circular nano-pillars via E-beam lithography and etched through Ar+ ion milling. After etching, SiO 2 was deposited via plasma-enhanced chemical vapor deposition (PECVD) to protect the nano-pillars. Finally, the top contacts were defined using photolithography, and the metallic electrodes of Ti (10 nm)/Au (100 nm) were deposited using electron beam evaporation.

The die of the MTJ array was diced into smaller pieces, with each piece containing about 10 MTJ devices. Each of the small pieces was mounted on a cartridge board, and up to 8 MTJ devices were wire-bonded to the electrodes of the cartridge board. Seven cartridge boards were inserted into the connection board, providing MTJs to the CRAM. The MTJ in each CRAM cell is selected from up to 8 MTJs on the corresponding cartridge board. In total, seven MTJs are selected from up to 56 MTJs. This method allows the user to find a collection of seven MTJs with minimal device-to-device variation.

CRAM experiment

An individual bias magnetic field was implemented for each of the seven MTJs on the connection board by positioning a permanent magnet at a certain distance from the MTJ devices. The bias magnetic field was used to compensate for intrinsic magnetic exchange bias and stray fields in the MTJ devices, thereby restoring the balance between the P and AP states. Additionally, slight rotation of bias field in the device plane was used to effectively adjust the switching voltage of each MTJ. More details can be found in Supplementary Note S 2 .

The connection board with seven MTJs was connected to the main board. On the main board, the necessary active and passive electronic components were populated on the custom-designed PCB. The CRAM demo hardware circuit implemented a 1 × 7 CRAM array with a modified architecture that emphasizes logic operations while compromising on memory operation bandwidth for simplicity. It was modified from the full-fledged 2T1M 40 architecture. It was equivalent to a 2T1M CRAM in logic mode, but it only had serial access to all cells for memory read and write operations (more details in Supplementary Note S 1 ). The hardware was powered by a battery and communicated with the controller PC wirelessly via Bluetooth®. In this way, the entire hardware was electrically isolated from the environment so that the risk of electrostatic discharge (ESD) damage to the sensitive MTJs was minimized.

The experiment control software running on a PC was implemented using National Instruments’ LabVIEW™. It was responsible for real-time measurements and control of the experiments, as well as necessary visualizations. Certain results were further analyzed post-experiment.

CRAM modeling and simulations

The simulation studies of accuracy as well as error origination, accumulation, and propagation began with a simple probabilistic model of each NAND logic operation. A probabilistic truth table was used to describe the expected statistical average of the output logical state. Then, the 1-bit full adder designs and operations were simulated using the Monte Carlo method with assumed probabilistic truth tables for each of the logic steps (see Supplementary Note S 6 ).

The experiment-based physics modeling and calculations for obtaining the projected CRAM logic operation accuracies began with an MTJ resistance-voltage model 77 , which was fit to the experimental data of TMR vs. bias voltage. The coefficients of this model were scaled accordingly to model projected TMR ratios higher than those observed experimentally. Then, a thermal activation model 78 , 79 of MTJ switching probability was fit to experimental data and was used to calculate the switching probability of the output MTJ cell under various bias voltages. Finally, the average of the output state, < D out >, could be calculated under various V logic , and the optimal NAND accuracies could be obtained in a manner similar to that discussed with Fig. 4 (more details in Supplementary Note S 8 ).
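A minimal form of such a thermal activation switching model is sketched below. The functional form (an attempt rate reduced exponentially by the bias-dependent barrier) follows the kind of models cited above (refs. 78, 79), but the thermal stability factor, attempt time, and critical voltage used here are illustrative assumptions rather than fitted parameters; the three bias voltages swept correspond roughly to the output-cell voltages quoted earlier for the ‘11’, ‘01’/‘10’, and ‘00’ input states.

```python
import math

# Sketch of a thermal-activation switching-probability model (assumed parameters).
DELTA = 60.0      # thermal stability factor (energy barrier over kT), assumed
TAU0 = 1e-9       # attempt time in seconds, assumed
V_C0 = 0.475      # critical switching voltage in volts, assumed

def switching_probability(v_bias, pulse_width):
    """P_sw for a sub-critical bias voltage applied for a given pulse width."""
    if v_bias >= V_C0:
        return 1.0                                  # crude treatment above threshold
    rate = (1.0 / TAU0) * math.exp(-DELTA * (1.0 - v_bias / V_C0))
    return 1.0 - math.exp(-pulse_width * rate)

# 1 ms pulses, as in the experiments; the contrast between these voltages is what
# separates the <D_out> curves of the different input states.
for v in (0.325, 0.375, 0.413):
    print(f"V = {v:.3f} V -> P_sw = {switching_probability(v, 1e-3):.3f}")
```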

Further simulation studies of a ripple-carry adder, a systolic multiplier, and the dot-product operation of a matrix multiplication for various numbers of bits as well as matrix sizes were carried out using the same methods. More details can be found in Supplementary Note S 9 .

Data availability

The authors declare that the data supporting the findings of this study are available within the paper and its supplementary information files.

Lecun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521 , 436–444 (2015).

Jordan, M. I. & Mitchell, T. M. Machine learning: trends, perspectives, and prospects. Science 349 , 255–260 (2015).

Adomavicius, G. & Tuzhilin, A. Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions. IEEE Trans. Knowl. Data Eng. 17 , 734–749 (2005).

Hinton, G. et al. Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process. Mag. 29 , 82–97 (2012).

Collobert, R. et al. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12 , 2493–2537 (2011).

Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60 , 84–90 (2017).

Oh, K. S. & Jung, K. GPU implementation of neural networks. Pattern Recognit. 37 , 1311–1314 (2004).

Strigl, D., Kofler, K. & Podlipnig, S. Performance and scalability of GPU-based convolutional neural networks. In 2010 18th Euromicro Conference on Parallel, Distributed and Network-based Processing 317–324 (IEEE, 2010).

Nurvitadhi, E. et al. Accelerating binarized neural networks: comparison of FPGA, CPU, GPU, and ASIC. In 2016 International Conference on Field-Programmable Technology (FPT) 77–84 (IEEE, 2017).

Sawada, J. et al. TrueNorth ecosystem for brain-inspired computing: scalable systems, software, and applications. In SC ’16: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis 130–141 (IEEE, 2016).

Jouppi, N. P. et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture 1–12 (ACM, 2017).

Chen, Y. H., Krishna, T., Emer, J. S. & Sze, V. Eyeriss: an energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE J. Solid-State Circuits 52 , 127–138 (2017).

Yin, S. et al. A high energy efficient reconfigurable hybrid neural network processor for deep learning applications. IEEE J. Solid-State Circuits 53 , 968–982 (2018).

Borghetti, J. et al. Memristive switches enable stateful logic operations via material implication. Nature 464 , 873–876 (2010).

Chi, P. et al. PRIME: a novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA) 27–39 (ACM, 2016).

Shafiee, A. et al. ISAAC: a convolutional neural network accelerator with in-situ analog arithmetic in crossbars. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA) , 14–26 (2016).

Hu, M. et al. Dot-product engine for neuromorphic computing: programming 1T1M crossbar to accelerate matrix-vector multiplication. In 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC) 1–6 (IEEE, 2016).

Seshadri, V. et al. Ambit: in-memory accelerator for bulk bitwise operations using commodity DRAM technology. In 2017 50th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) 273–287 (IEEE, 2017).

Yao, P. et al. Fully hardware-implemented memristor convolutional neural network. Nature 577 , 641–646 (2020).

Jung, S. et al. A crossbar array of magnetoresistive memory devices for in-memory computing. Nature 601 , 211–216 (2022).

Keckler, S. W., Dally, W. J., Khailany, B., Garland, M. & Glasco, D. GPUs and the future of parallel computing. IEEE Micro 31 , 7–17 (2011).

Bergman, K. et al. ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems . www.cse.nd.edu/Reports/2008/TR-2008-13.pdf (2008).

Horowitz, M. Computing’s energy problem (and what we can do about it). In 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC) 10–14 (IEEE, 2014).

Kim, D., Kung, J., Chai, S., Yalamanchili, S. & Mukhopadhyay, S. Neurocube: a programmable digital neuromorphic architecture with high-density 3D memory. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA) , 380–392 (2016).

Huang, J. et al. Active-routing: compute on the way for near-data processing. In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA) 674–686 (IEEE, 2019).

Nair, R. et al. Active memory cube: a processing-in-memory architecture for exascale systems. IBM J. Res. Dev. 59 , 17:1–17:14 (2015).

Pawlowski, J. T. Hybrid memory cube (HMC). In 2011 IEEE Hot Chips 23 Symposium (HCS) 1–24 (IEEE, 2011).

Gao, M., Ayers, G. & Kozyrakis, C. Practical near-data processing for in-memory analytics frameworks. In 2015 International Conference on Parallel Architecture and Compilation (PACT) 113–124 (IEEE, 2015).

Gao, M., Pu, J., Yang, X., Horowitz, M. & Kozyrakis, C. TETRIS: scalable and efficient neural network acceleration with 3D memory. SIGARCH Comput. Arch. News 45 , 751–764 (2017).

Aga, S. et al. Compute caches. In 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA) 481–492 (IEEE, 2017).

Zidan, M. A., Strachan, J. P. & Lu, W. D. The future of electronics based on memristive systems. Nat. Electron. 1 , 22–29 (2018).

Jeon, K., Ryu, J. J., Jeong, D. S. & Kim, G. H. Dot-product operation in crossbar array using a self-rectifying resistive device. Adv. Mater. Interfaces 9 , 2200392 (2022).

Matsunaga, S. et al. Fabrication of a nonvolatile full adder based on logic-in-memory architecture using magnetic tunnel junctions. Appl. Phys. Express 1 , 091301 (2008).

Hanyu, T. et al. Standby-power-free integrated circuits using MTJ-based VLSI computing. Proc. IEEE 104 , 1844–1863 (2016).

Li, S. et al. Pinatubo: a processing-in-memory architecture for bulk bitwise operations in emerging non-volatile memories. In 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC) 1–6 (IEEE, 2016).

Kvatinsky, S. et al. Memristor-based material implication (IMPLY) logic: design principles and methodologies. IEEE Trans. Very Large Scale Integr. Syst. 22 , 2054–2066 (2014).

Kvatinsky, S. et al. MAGIC—memristor-aided logic. IEEE Trans. Circuits Syst. II Express Briefs 61 , 895–899 (2014).

Wang, J.-P. & Harms, J. D. General structure for computational random access memory (CRAM). US patent 14/259,568 (2015).

Gupta, S., Imani, M. & Rosing, T. FELIX: fast and energy-efficient logic in memory. In 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) 1–7 (IEEE, 2018).

Chowdhury, Z. et al. Efficient in-memory processing using spintronics. IEEE Comput. Archit. Lett. 17 , 42–46 (2018).

Gao, F., Tziantzioulis, G. & Wentzlaff, D. ComputeDRAM: in-memory compute using off-the-shelf DRAMs. In 2019 52nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) 100–113 (IEEE, 2019).

Truong, M. S. Q. et al. RACER: Bit-pipelined processing using resistive memory. In MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture 100–116 (ACM, 2021).

Žutić, I., Fabian, J. & Das Sarma, S. Spintronics: fundamentals and applications. Rev. Mod. Phys. 76 , 323–410 (2004).

Nikonov, D. E. & Young, I. A. Benchmarking of beyond-CMOS exploratory devices for logic integrated circuits. IEEE J. Explor. Solid-State Comput. Devices Circuits 1 , 3–11 (2015).

Lee, T. Y. et al. World-most energy-efficient MRAM technology for non-volatile RAM applications. In 2022 International Electron Devices Meeting (IEDM) 10.7.1–10.7.4 (IEEE, 2022).

Jan, G. et al. Demonstration of ultra-low voltage and ultra low power STT-MRAM designed for compatibility with 0x node embedded LLC applications. In 2018 IEEE Symposium on VLSI Technology 65–66 (IEEE, 2018).

Zhao, H. et al. Sub-200 ps spin transfer torque switching in in-plane magnetic tunnel junctions with interface perpendicular anisotropy. J. Phys. D. Appl. Phys. 45 , 025001 (2012).

Julliere, M. Tunneling between ferromagnetic films. Phys. Lett. A 54 , 225–226 (1975).

Parkin, S. S. P. et al. Giant tunnelling magnetoresistance at room temperature with MgO (100) tunnel barriers. Nat. Mater. 3 , 862–867 (2004).

Yuasa, S., Nagahama, T., Fukushima, A., Suzuki, Y. & Ando, K. Giant room-temperature magnetoresistance in single-crystal Fe/MgO/Fe magnetic tunnel junctions. Nat. Mater. 3 , 868–871 (2004).

Berger, L. Emission of spin waves by a magnetic multilayer traversed by a current. Phys. Rev. B 54 , 9353–9358 (1996).

Slonczewski, J. C. Current-driven excitation of magnetic multilayers. J. Magn. Magn. Mater. 159 , L1–L7 (1996).

Wei, L. et al. A 7Mb STT-MRAM in 22FFL FinFET technology with 4ns read sensing time at 0.9V using write-verify-write scheme and offset-cancellation sensing technique. In 2019 IEEE International Solid- State Circuits Conference - (ISSCC) 214–216 (IEEE, 2019).

Gallagher, W. J. et al. 22nm STT-MRAM for reflow and automotive uses with high yield, reliability, and magnetic immunity and with performance and shielding options. In 2019 International Electron Devices Meeting (IEDM) 2.7.1-2.7.4 (IEEE, 2019).

Chih, Y. Der et al. A 22nm 32Mb embedded STT-MRAM with 10ns read speed, 1M cycle write endurance, 10 years retention at 150 °C and high immunity to magnetic field interference. In 2020 IEEE International Solid- State Circuits Conference - (ISSCC) 222–224 (IEEE, 2020).

Edelstein, D. et al. A 14 nm embedded STT-MRAM CMOS technology. In 2020 International Electron Devices Meeting (IEDM) 11.5.1-11.5.4 (IEEE, 2020).

Chun, K. C. et al. A scaling roadmap and performance evaluation of in-plane and perpendicular MTJ based STT-MRAMs for high-density cache memory. IEEE J. Solid-State Circuits 48 , 598–610 (2013).

Lilja, D. J. et al. Systems and methods for direct communication between magnetic tunnel junctions. US patent 13/475,544 (2014).

Lyle, A. et al. Direct communication between magnetic tunnel junctions for nonvolatile logic fan-out architecture. Appl. Phys. Lett. 97 , 152504 (2010).

Zabihi, M. et al. Using spin-Hall MTJs to build an energy-efficient in-memory computation platform. In 20th International Symposium on Quality Electronic Design (ISQED) 52–57 (IEEE, 2019).

Currivan-Incorvia, J. A. et al. Logic circuit prototypes for three-terminal magnetic tunnel junctions with mobile domain walls. Nat. Commun. 7 , 1–7 (2016).

Alamdar, M. et al. Domain wall-magnetic tunnel junction spin-orbit torque devices and circuits for in-memory computing. Appl. Phys. Lett. 118 , 112401 (2021).

Zabihi, M. et al. Analyzing the effects of interconnect parasitics in the STT CRAM in-memory computational platform. IEEE J. Explor. Solid-State Comput. Devices Circuits 6 , 71–79 (2020).

Chowdhury, Z. I. et al. A DNA read alignment accelerator based on computational RAM. IEEE J. Explor. Solid-State Comput. Devices Circuits 6 , 80–88 (2020).

Chowdhury, Z. I. et al. CRAM-Seq: accelerating RNA-Seq abundance quantification using computational RAM. IEEE Trans. Emerg. Top. Comput. 10 , 2055–2071 (2022).

Zabihi, M. et al. In-memory processing on the spintronic CRAM: from hardware design to application mapping. IEEE Trans. Comput. 68 , 1159–1173 (2019).

Cilasun, H. et al. CRAFFT: High resolution FFT accelerator in spintronic computational RAM. In 2020 57th ACM/IEEE Design Automation Conference (DAC) 1–6 (IEEE, 2020).

Resch, S. et al. PIMBALL: Binary neural networks in spintronic memory. ACM Trans. Archit. Code Optim. 16 , 41 (2019).

Chowdhury, Z. I. et al. CAMeleon: reconfigurable B(T)CAM in computational RAM. In Proceedings of the 2021 on Great Lakes Symposium on VLSI 57–63 (ACM, 2021).

Resch, S. et al. MOUSE: inference in non-volatile memory for energy harvesting applications. In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) 400–414 (IEEE, 2020).

Lv, Y., Bloom, R. P. & Wang, J.-P. Experimental demonstration of probabilistic spin logic by magnetic tunnel junctions. IEEE Magn. Lett. 10 , 1–5 (2019).

Subathradevi, S. & Vennila, C. Systolic array multiplier for augmenting data center networks communication link. Cluster Comput. 22 , 13773–13783 (2019).

Liang, J., Han, J. & Lombardi, F. New metrics for the reliability of approximate and probabilistic adders. IEEE Trans. Comput. 62 , 1760–1771 (2013).

Almasi, H. et al. Perpendicular magnetic tunnel junction with W seed and capping layers. J. Appl. Phys. 121 , 153902 (2017).

Xu, M. et al. Voltage-controlled antiferromagnetism in magnetic tunnel junctions. Phys. Rev. Lett. 124 , 187701 (2020).

Lyu, D. et al. Sub-ns switching and cryogenic-temperature performance of mo-based perpendicular magnetic tunnel junctions. IEEE Electron Device Lett. 43 , 1215–1218 (2022).

Kim, J. et al. A technology-agnostic MTJ SPICE model with user-defined dimensions for STT-MRAM scalability studies. In 2015 IEEE Custom Integrated Circuits Conference (CICC) 1–4 (IEEE, 2015).

Diao, Z. et al. Spin-transfer torque switching in magnetic tunnel junctions and spin-transfer torque random access memory. J. Phys. Condens. Matter 19 , 165209 (2007).

Heindl, R., Rippard, W. H., Russek, S. E., Pufall, M. R. & Kos, A. B. Validity of the thermal activation model for spin-transfer torque switching in magnetic tunnel junctions. J. Appl. Phys. 109 , 073910 (2011).

Download references

Acknowledgements

This work was supported in part by the Defense Advanced Research Projects Agency (DARPA) via No. HR001117S0056-FP-042 “Advanced MTJs for computation in and near random-access memory” and by the National Institute of Standards and Technology. This work was supported in part by NSF SPX grant no. 1725420 and NSF ASCENT grant no. 2230963. The work at the University of Arizona is supported in part by NSF grant no. 2230124. The authors also thank Cisco Inc. for the support. Portions of this work were conducted in the Minnesota Nano Center, which was supported by the National Science Foundation through the National Nanotechnology Coordinated Infrastructure (NNCI) under Award No. ECCS-2025124. The authors acknowledge the Minnesota Supercomputing Institute (MSI, URL: http://www.msi.umn.edu ) at the University of Minnesota for providing resources that contributed to the research results reported within this paper. The authors thank Prof. Marc Riedel and Prof. John Sartori from the Department of Electrical and Computer Engineering at the University of Minnesota for proofreading the manuscript. Yang Lv, Brandon Zink, and Hüsrev Cılasun were CISCO Fellows.

Author information

Authors and affiliations

Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN, 55455, USA

Yang Lv, Brandon R. Zink, Robert P. Bloom, Hüsrev Cılasun, Salonik Resch, Zamshed Chowdhury, Sachin S. Sapatnekar, Ulya Karpuzcu & Jian-Ping Wang

Department of Physics, University of Arizona, Tucson, Arizona, 85721, USA

Pravin Khanal, Ali Habiboglu & Weigang Wang


Contributions

J.-P.W. conceived the CRAM research and coordinated the entire project. Y.L. and J.-P.W. designed the experiments. Y.L. and R.P.B. designed and developed the demonstration hardware and software. P.K., A.H., and W.W. grew part of the perpendicular MTJ stacks. B.R.Z. fabricated the MTJ nanodevices. Y.L. conducted the CRAM demonstration experiments and analyzed the results. Y.L. studied the probabilistic model of CRAM operations and conducted simulations of a 1-bit full adder. Y.L., B.R.Z., and R.P.B. developed the device physics modeling of CRAM logic operations and gate-level error rates and conducted related calculations. H.C., S.R., Z.C., and U.K. carried out the simulation studies of the multi-bit adder, multiplier, and matrix multiplication. S.S. participated in discussions of modeling and simulation. All authors reviewed and discussed the results. Y.L. and J.-P.W. wrote the draft manuscript. All authors contributed to the completion of the manuscript.

Corresponding author

Correspondence to Jian-Ping Wang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Lv, Y., Zink, B.R., Bloom, R.P. et al. Experimental demonstration of magnetic tunnel junction-based computational random-access memory. npj Unconv. Comput. 1, 3 (2024). https://doi.org/10.1038/s44335-024-00003-3


Received : 29 January 2024

Accepted : 29 May 2024

Published : 25 July 2024

DOI : https://doi.org/10.1038/s44335-024-00003-3


Open Access

Peer-reviewed

Research Article

#ForYou? The impact of pro-ana TikTok content on body image dissatisfaction and internalisation of societal beauty standards

Madison R. Blackburn (Roles: Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Writing – original draft, Writing – review & editing)

Rachel C. Hogg (Roles: Conceptualization, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing)

Affiliation: School of Psychology, Faculty of Business, Justice and Behavioural Sciences, Charles Sturt University, Wagga Wagga, New South Wales, Australia

* E-mail: [email protected]

  • Published: August 7, 2024
  • https://doi.org/10.1371/journal.pone.0307597

Abstract

Videos glamourising disordered eating practices and body image concerns circulate readily on TikTok, yet minimal empirical research has investigated the impact of TikTok content on body image and eating behaviour. The present study aimed to fill this gap by examining the influence of pro-anorexia TikTok content on young women’s body image and degree of internalisation of beauty standards, whilst also exploring the relationship between daily time spent on TikTok and the development of disordered eating behaviours. An experimental and cross-sectional design was used to explore body image and internalisation of beauty standards in relation to pro-anorexia TikTok content, and time spent on TikTok was examined in relation to the risk of developing orthorexia nervosa. A sample of 273 female-identifying persons aged 18–28 years was exposed to either pro-anorexia or neutral TikTok content, and pre- and post-test measures of body image and internalisation of beauty standards were obtained. Participants were also divided into four groups based on average daily time spent on TikTok. Women exposed to pro-anorexia content displayed the greatest decrease in body image satisfaction and an increase in internalisation of societal beauty standards; women exposed to neutral content also reported a decrease in body image satisfaction. Participants categorised as high and extreme daily TikTok users reported greater average disordered eating behaviour on the EAT-26 than participants with low and moderate use; however, these group differences, including those in orthorexic behaviours, were not statistically significant. This research has implications for the mental health of young female TikTok users, with exposure to pro-anorexia content having immediate consequences for internalisation and body image dissatisfaction, potentially increasing one’s risk of developing disordered eating beliefs and behaviours.

Citation: Blackburn MR, Hogg RC (2024) #ForYou? the impact of pro-ana TikTok content on body image dissatisfaction and internalisation of societal beauty standards. PLoS ONE 19(8): e0307597. https://doi.org/10.1371/journal.pone.0307597

Editor: Barbara Guidi, University of Pisa, ITALY

Received: November 2, 2023; Accepted: July 8, 2024; Published: August 7, 2024

Copyright: © 2024 Blackburn, Hogg. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The data for this study can be found on Figshare via the following link: https://doi.org/10.6084/m9.figshare.25756800.v1 .

Funding: We acknowledge the financial support provided by Charles Sturt University.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Social media is a self-presentation device, a mode of entertainment, and a means of connecting with others [ 1 ], allowing for the performance of identity [ 2 ], with social rewards built into its systems. Five to six years of the average human lifespan are now spent on social media sites [ 3 ], and visual platforms such as Instagram and TikTok increasingly dominate the cultural landscape of social media. Such visually oriented platforms are associated with higher levels of body image dysfunction [ 4 ], while the COVID-19 pandemic has seen a rise in disordered eating behaviour [ 5 ]. Despite this, the field lacks a clear theoretical framework for understanding how social media usage heightens body image issues [ 6 ], and little research has specifically examined the impact of TikTok-based content. In this research, we sought to explore the impact of pro-anorexia TikTok content on young women’s body image satisfaction and internalisation of beauty standards. The forthcoming sections of this literature review highlight the features of social media content that may be particularly pernicious for young female users, explore disordered eating and orthorexia in a social media context, and conclude with a theoretical analysis of the relationship between social media and body image and internalisation of beauty standards.

Social media offers instant, quantifiable feedback coupled with idealised online imagery that may intersect with the value adolescents attribute to peer relationships and the sociocultural gender socialisation processes germane to this period of development, creating the “perfect storm” for young social media users, especially females [ 6 ]. In a study of 85 young, largely female eating disorder patients, a rise in awareness of online sites emphasizing thinness as beauty was evident from 2017 to 2020, with 60% of participants indicating that they knew of pro-ana websites and 22% of participants admitting to visiting them [ 7 ]. Research suggests that social media may also trigger those with extant eating disorders while simultaneously influencing healthy individuals to engage in disordered eating behaviour [ 8 ].

“Pro” eating disorder communities, hereafter referred to as “pro-ana” (pro-anorexia) communities, are a particular concern in a social media context. These communities encourage disordered eating, normalise disordered behaviours, and provide a means of connection for individuals who endorse anti-recovery from eating disorders [ 8 ]. Weight-loss tips, excessive exercise routines, and images of emaciated figures are routinely shared in these online communities [ 9 ], with extant research highlighting the association between viewing eating disorder content online and offline eating disorder behaviour [ 8 ]. Women who view pro-ana websites display increased eating disturbances, lowered body satisfaction, an increased drive for thinness, and higher levels of perfectionism when compared to women who have not viewed pro-ana content [ 10 , 11 ]. In research on adolescent girls, Stice [ 12 ] investigated the influence of exposure to media portraying the “thin-ideal” and found that perceived pressure to be thin was a predictor of increased body image dissatisfaction, which in turn led to increases in disordered eating behaviour. In similar research, Green [ 10 ] found that individuals with diagnosed eating disorders reported worsening symptoms after just 10 minutes of exposure to pro-ana content on the online platform Tumblr.

Disordered eating #ForYou

The most downloaded social application (app) of 2021, TikTok is a platform for creating and sharing short-form videos within a social media context [ 13 ]. Since its launch in 2017, TikTok has had over two billion downloads and has an estimated one billion users, the vast majority of whom are children and teenagers [ 14 ]. Unlike other social media platforms, where users have greater autonomy over the content generated on their homepage newsfeed, TikTok’s algorithm records data from individual users and proposes videos designed to catch each user’s attention specifically, creating a personalised “For You” page [ 15 ]. This feed will suggest videos from any creator on the platform, not just followed accounts. As such, if a user ‘interacts’ with a video, such as by liking, sharing, commenting, or searching for related content, the algorithm will continue to produce similar videos on their “For You” page. The speed with which TikTok content can be created and consumed online may also be key to its impact. Any given social media user could watch more than a thousand videos on TikTok in an hour, creating a reinforcing effect that may have more impact than longer-form content from a single creator [ 2 ].

Whilst the popularity of TikTok’s “For You” page has prompted global leaders in social media to build their own recommended content features, this feature remains most pronounced on TikTok. The “For You” page is the homepage of TikTok where users spend the majority of their time, compared to other social media platforms where homepages consist of a curation of content from followed accounts. Instagram’s explore page continues to emphasise established influencer culture and promote accounts of public figures or influencers with large followings. Contrastingly, TikTok’s unique algorithm makes content discoverability an even playing field, as any user’s content has the potential to reach a vast audience regardless of follower count or celebrity status. TikTok users therefore have less control over their homepage newsfeed compared to other social media platforms where users elect who they follow.

Unlike other social media platforms that implicitly showcase body ideals, TikTok contains explicit eating disorder content [ 16 ], while the “For You” page means that simply interacting with health and fitness videos can lead to unintended exposure to disordered eating content. Even seemingly benign “fitspiration” content may have psychological consequences for viewers. Beyond explicit pro-ana content, #GymTok and #FoodTok are two popular areas of content that provide a forum for users to create and consume content around their and others’ daily eating routines, weight loss transformations, and workout routines [ 2 ]. TikTok also frequently features content promoting clean eating, detox cleanses, and limited ingredient diets reflective of the current “food as medicine” movement of western culture [ 17 ], otherwise known as orthorexia. Despite efforts to ban such pro-ana related content, some videos easily circumvent controls [ 18 ], in part because many TikTok creators are non-public figures who are not liable to the backlash or cancellation that a public figure might receive for circulating socially irresponsible content.

Orthorexia: The rise of ‘healthy’ eating pathologies

Psychological analyses of eating disorders have historically focused on restrictive eating and the binge-purge cycle; more recently, however, “positive” interests in nutrition have also been examined. Orthorexia nervosa is characterised by a restrictive diet, ritualised patterns of eating, and a rigid, focus-consuming avoidance of foods deemed unhealthy or impure [ 19 ]. Despite frequent observation of this distinct behavioural pattern by clinicians, orthorexia has received limited empirical attention and is not formally recognised as a psychiatric disorder [ 19 ]. Orthorexia and anorexia nervosa share traits of perfectionism, high trait anxiety, and a high need to exert control, as well as the potential for significant weight loss [ 19 ]. Termed ‘the disorder that cannot be diagnosed’ due to limited consensus around its features and the line between healthy and pathological eating practices, orthorexia mirrors the narrative of neoliberal self-improvement culture, wherein the body is treated as a site of performance and transformation.

Orthorexic restrictions and obsessions are routinely interpreted as signs of morality, health consciousness, and wellness [ 20 , 21 ]. Social media wellness influencers have played a significant role in normalising “clean [disordered] eating”. As one example, Turner and Lefevre [ 22 ] conducted an online survey of social media users following health food accounts and found that higher Instagram use was associated with a greater tendency towards orthorexia, with the prevalence of orthorexia among the study population at 49%, substantially higher than the general population (<1%). Similar health and food-related content on TikTok may provoke orthorexic tendencies among TikTok users, however, limited research has investigated orthorexic eating behaviour in the context of TikTok. The current study aims to bridge this gap in the literature around TikTok use and orthorexic tendencies. Disordered eating behaviour in the present study was measured by two separate but related constructs. ‘Restrictive’ disordered eating relates to dieting, oral control, and bulimic symptoms, whilst ‘healthy’ disordered eating constitutes orthorexic-like preoccupation with health food.

Theoretical analysis of body image and social media

An established risk factor in the development and maintenance of disordered eating behaviour is negative body image. Body image is a multidimensional construct that represents an individual’s perceptions and attitudes about their physical-self and encompasses an evaluative function through which individuals compare perceptions of their actual “self” to “ideal” images [ 23 ]. This comparison may produce feelings of dissatisfaction about one’s own body image if a significant discrepancy exists between the actual and ideal self-image [ 23 ]. Body image is not necessarily congruent with actual physique, with research demonstrating that women categorised as having a healthy body mass index (BMI) nonetheless report dissatisfaction with their weight and engage in restrictive dietary behaviours to reduce their weight [ 24 ]. In addition, body image dissatisfaction is considered normative in Western society, particularly among adolescent women [ 25 ]. This may be attributable to the constant flow of media that exposes women to unrealistic images of thinness idealized within society [ 26 ].

One theoretical framework for understanding social media’s relationship with body image is Social Comparison Theory, proposed by Festinger [ 27 ], who suggested that people naturally evaluate themselves in comparison to others via upward or downward social comparisons. Research supports the notion that women who frequently engage in maladaptive upward appearance-related social comparisons are more likely to experience body image dissatisfaction and disordered eating [ 25 , 28 ], while visual exposure to thin bodies may detrimentally modulate one’s level of body image satisfaction [ 29 – 31 ]. In their study of undergraduate females, Engeln-Maddox [ 29 ] found that participants made upward social comparisons to images of thin models, comparisons that were strongly associated with decreased body image satisfaction and greater internalisation of thinness. Similarly, Tiggemann [ 32 ] found that adolescents who spent more time watching television featuring attractive actors and actresses reported an increased desire for thinness, theorised to be a result of increased social comparison to attractive media personalities.

The Transactional Model [ 33 ] extends Social Comparison Theory by emphasising the multifaceted and complex nature of social media influences on body image. This model acknowledges that individual differences may predispose a person to utilise social media for gratification, and highlights that as time spent on social media increases, so too does body image dissatisfaction [ 33 ]. In line with this, a recent review of literature by Frieiro Padín and colleagues [ 34 ] indicated that time spent on social media was strongly correlated with eating disorder psychopathologies, as well as heightened body image concerns, internalisation of the thin ideal, and lower levels of self-esteem. Time on social media also correlated with heightened body image concerns to a far greater extent than general internet usage [ 35 , 36 ].

Body image ideals are not static. The traditional ideal of rib-protruding bodies from the 90s, known colloquially as “heroin chic”, has recently given way to a celebration of the “slim-thicc” figure, consisting of a cinched, flat waist with curvy hips, ample breasts, and a large behind [ 37 ]. The “slim-thicc” aesthetic allows women to be bigger than previous body ideals, yet this figure is arguably more unattainable than the thin-ideal, as surgical intervention is commonly needed to achieve it, depending on genetics and body type. The idealisation of the “slim-thicc” figure is highlighted by the “Brazilian butt lift” (BBL), a potentially life-threatening procedure that is nonetheless the fastest growing category of plastic surgery, having doubled over the past five years [ 38 ]. Research suggests that the slim-thicc ideal is no less damaging to body image than the thin-ideal. Indeed, in experimental research on body ideals, McComb and Mills [ 39 ] found that the greatest body dissatisfaction levels in female undergraduate students were observed among those exposed to imagery of the slim-thicc physique, relative to those exposed to the thin-ideal and fit-ideal physiques, as well as the control condition.

Recent body ideals have also favoured muscular, thin presentations, considered to represent health and fitness, as evident in the “#fitspiration” Instagram hashtag that features over 65 million images [ 40 ]. Fitspiration has the potential to positively influence women’s health and wellbeing by promoting exercise engagement and healthy eating, yet various content analyses of fitspiration images highlight aspects of fitspiration that warrant concern [see 40 , 41 ]. Notably, fitspiration typically showcases only one body type, and women whose bodies do not meet this standard may experience body dissatisfaction [ 40 ], while the gamification of exercise, such as receiving likes for every ten sit-ups, dovetails with the intensive self-control and competitiveness that often underpin eating disorders and eating disorder communities [ 1 ].

In recent experimental research, Pryde and Prichard [ 42 ] examined the effect of exposure to fitspiration TikTok content on the body dissatisfaction, appearance comparison, and mood of young Australian women. Viewing fitspiration TikTok videos led to increased negative mood and increased appearance comparison but did not impact body dissatisfaction. This finding contradicts previous research and may be due to fitspiration content showcasing body functionality rather than aesthetic, which may lead to positive outcomes for viewers. The fitspiration content used by Pryde and Prichard [ 42 ] did not contain the harmful themes regularly found in other forms of fitspiration content. Appearance comparison was significant in the relationship between TikTok content and body dissatisfaction and mood, suggesting that this may be a key mechanism through which fitspiration content leads to negative body image outcomes and supporting the notion that fitspiration promotes a focus on appearance rather than health.

Body image dissatisfaction among women is associated with co-morbid psychological disturbances and the development of disordered eating behaviours [ 43 , 44 ]. A large body of research indicates that higher levels of both general and appearance-related social comparison are associated with disordered eating in undergraduate populations [ 10 , 28 , 45 – 48 ]. As one example, Lindner et al. [ 46 ] investigated the impact of the female-to-male ratio of college campuses on female students’ engagement in social comparison and eating pathology. Their findings lend support to the Social Comparison Theory, indicating that the highest levels of eating pathology and social comparison were found among women attending colleges with predominantly female undergraduate populations. A strong relationship was also found between eating pathology and engagement in appearance-related social comparisons independent of actual weight. Lindner et al. [ 46 ] surmised that these results suggest social comparison and eating pathology behaviours are due to students’ perceptual distortions of their own bodies, potentially fostered by pressures exerted from peers to be thin.

Similarly, Corning et al. [ 45 ] investigated the social comparison behaviours of women with eating disorder symptoms and their asymptomatic peers. Results illustrated that a greater tendency to engage in everyday social comparison predicted the presence of eating disorder symptoms, while women with eating disorder symptoms made significantly more social comparisons of their own bodies. Such findings are supported by subsequent research, with Hamel et al. [ 28 ] finding that adolescents with a diagnosed eating disorder engaged in significantly more body-related social comparison than adolescents diagnosed with a depressive disorder or no diagnosis. Body-related social comparison was also significantly positively correlated with disordered eating behaviours. While extant research has focused upon social comparison as it has occurred through traditional media outlets, less research has investigated the facilitation of social comparison through social media platforms, particularly contemporary platforms such as TikTok.

Theoretical analysis of internalisation processes and social media

The extent to which one’s body image is impacted by images and messages conveyed by the media is determined by the degree to which these images and messages are internalised. Some may argue that social media platforms are distinct from what occurs in “real” life, creating fewer opportunities for internalisation to occur. Yet as Pierce [ 2 ] argues, platforms such as TikTok create their own realities, allowing users to explore their identities, form relationships, engage with culture and world events, and even develop new patterns of speech and writing. TikTok trends commonly infiltrate society, underscoring the impact of social media on life beyond the online world, and thus a sociocultural analysis of TikTok is warranted. Sociocultural theories suggest that society portrays thinness as the ideal body shape for women, resulting in an internalisation of the “thin is good” assumption for women. This in turn results in lowered body image satisfaction and other negative outcomes [ 43 ]. The significance of social influences, including the role of family, peers, and the media, is emphasised by sociocultural theory, with individuals more likely to internalise the thin ideal when they encounter pressuring messages from social influences that they are not thin enough [ 48 ]. Within this theoretical approach, an individual’s degree of thin ideal internalisation is theorised to depend on their acceptance of socially defined ideals of attractiveness and is reflected in their engagement in behaviours that adhere to these socially defined ideals [ 49 ].

Building on this, the tripartite influence model suggests that disordered eating behaviours arise due to pressure from social agents, specifically media, family, and peers. This pressure centres on conforming to appearance ideals and may lead to engagement in social comparison and the internalisation of thin ideals [ 48 ]. This is relevant in a digital context given that social media provides endless opportunities for individuals to practise social comparison, and for many users, social comparison on TikTok is peer-based as well as media-based. According to the tripartite model, social comparisons have been consistently associated with a higher degree of thin ideal internalisation, self-objectification, drive for thinness, and weight dissatisfaction [ 50 ]. Furthermore, and in contrast to traditional media where social agents are mainly models, celebrities, and movie stars, social agents on social media can include peers, friends, family, and others with whom the user has a personal relationship. Social media content generated by “everyday” people, rather than super models or movie stars, may result in comparisons that are more horizontal in nature. This is particularly evident on TikTok where content creators are rarely famous before creating a TikTok account, often remain micro-influencers after achieving some notoriety, and are usually around the same age as those viewing their content.

Pressure to be thin from similar peers may have a particularly pronounced impact on one’s degree of internalisation of the thinness ideal. Indeed, Stice et al. [ 51 ] found that after listening to young thin women complain about “feeling fat”, their adolescent participant sample reported increased body image dissatisfaction, suggesting that pressure from peers perpetuates the thinness ideal, leading to internalisation of the ideal and subsequent body dissatisfaction. Similarly, it was found that adolescent females were more likely to engage in weight loss behaviour if a high proportion of peers with a similar BMI were also engaging in these behaviours, illustrating that appearance pressure exerted by similar peers may result in thin-ideal internalisation and engagement in weight loss behaviours to control body weight and shape [ 52 ]. Such findings raise questions around whether those most similar to us have the greatest impact upon thin-ideal internalisation, body image dissatisfaction, and disordered eating behaviours.

In further support of the tripartite influence model, research by Thompson et al. [ 48 ] indicates that the ideals promoted through social media trends are internalised despite being unattainable, resulting in body image dissatisfaction and disordered eating behaviour. Similarly, Mingoia et al. [ 53 ] found a positive association between the use of social networking sites and thin ideal internalisation in women, indicating that greater use of social networking sites was linked to significantly higher internalisation of the thin ideal. Interestingly, the use of appearance-related features (e.g., posting or viewing photographs or videos) was more strongly related to internalisation than the broad use of social networking sites (e.g., writing statuses, using messaging features) [ 53 ]. Correlational and experimental research alike has demonstrated that thin ideal internalisation is related to body image dissatisfaction and leads to expressions of disordered eating such as restrictive dieting and binge-purge symptoms [ 31 , 48 , 54 , 55 ]. Subsequent expressions of disordered eating may be seen as an attempt to control weight and body shape to conform to societal beauty standards of thinness [ 51 ].

This sociocultural perspective is exemplified by Grabe et al.’s [ 54 ] meta-analysis of research on the associations between media exposure and women’s body dissatisfaction, internalisation of the thin ideal, and eating behaviours and beliefs, which illustrates that exposure to media images propagating the thin ideal is related to, and indeed may lead to, body image concerns and increased endorsement of disordered eating behaviours in women. Similarly, Groesz et al. [ 55 ] conducted a meta-analysis to examine experimental manipulations of the thin beauty ideal. They found that body image was significantly more negative after viewing thin media images than after viewing images of average-size models, plus-size models, or inanimate objects. This effect size was stronger for participants who were more vulnerable to activation of the thinness schema. Groesz et al. [ 55 ] conclude that their results align with the sociocultural theory perspective that media promulgates a thin ideal that in turn provokes body dissatisfaction.

Current research

Existing research has established relationships between body image dissatisfaction, disordered eating behaviours, and social media platforms such as Instagram and Twitter. The unique implications of the TikTok ‘For You Page’, as well as the dominance of peer-created and explicit disordered eating content on TikTok, suggest that the influence of this platform warrants specific consideration. This study adds to extant literature by utilising an experimental design to examine the influence of exposure to pro-ana TikTok content on body image and internalisation of societal beauty standards. A cross-sectional design was used to investigate the effect of daily TikTok use on the development of disordered eating behaviours. Although body image disturbance and eating disorders are not limited to women, varying sociocultural factors have been implicated in the development of disordered eating behaviour in men and women [ 45 ], while issues facing trans people warrant specific consideration beyond the scope of this study; therefore, the present sample contains only female-identifying participants.

Aims and hypotheses.

The current study aimed to investigate the impact of pro-ana TikTok content on young women’s body image satisfaction and internalisation of beauty standards, as well as to explore the relationship between daily TikTok use and the development of disordered eating behaviour. First, in line with the cross-sectional component of the study, it was hypothesized that women who spend more time on TikTok per day would report significantly more disordered eating behaviour than women who spend little time on TikTok per day. Second, it was hypothesized that women in the pro-ana TikTok group would report a significant decrease in state body image satisfaction following exposure to the pro-ana content compared to women in the control group. Third, it was hypothesized that women in the pro-ana TikTok group would report increased internalisation of societal beauty standards following exposure to pro-ana TikTok content compared to women in the control group.

Participants

Participants in the current study included 273 women aged between 18 and 28 years, sourced from the general population of TikTok users. The predominant country of residence of the sample was Australia, with 15 participants indicating that they currently reside outside of Australia. Of the final sample, 126 participants were randomly allocated to the experimental condition and 147 participants to the control condition. Snowball sampling was used to recruit participants through social media, online survey-sharing platforms, and word of mouth, with first-year university students targeted for recruitment by offering class credit in return for participation. Participants could withdraw their consent at any time by exiting the study prior to completion of the survey.

The current study employed a questionnaire set that included a demographic questionnaire and five scales measuring disordered eating behaviour, body satisfaction, internalisation of societal beauty standards, and perfectionism, the last of which was not examined in the present study.

Demographic questionnaire.

The demographic questionnaire required participants to answer a series of questions relating to their gender, age, relationship status, ethnicity, country of residence, TikTok usage, and exercise routine. A screening question redirected non-female-identifying persons away from the study. Responses to the TikTok usage items were examined cross-sectionally alongside responses on the EAT-26 and ORTO-15 to examine the influence of daily TikTok use on the presentation of disordered eating behaviours.

Eating attitudes test.

The Eating Attitudes Test (EAT-26, [ 56 ]) is a short form of the original 40-item EAT scale [ 57 ] which measures symptoms and concerns characteristic of eating disorders. The 26-item short-form version of the EAT was utilised in the present study due to its established reliability and validity, and strong correlation with the EAT-40 [ 56 ].

Responses to the 26 items are self-reported using a 6-point Likert scale ranging from Always (3) to Never (0) [ 56 ]. The EAT-26 consists of three subscales: dieting, bulimia and food preoccupation, and oral control. Five behavioural questions are included in Part C of the EAT-26 to determine the presence and frequency of extreme weight-control behaviours, including binge eating, self-induced vomiting, laxative usage, and excessive exercise [ 56 ]. Higher scores indicate greater disordered eating behaviour, and those with a total score of 20 or greater are, in clinical contexts, typically highlighted as requiring further assessment and the advice of a mental health professional [ 56 ].

Internal consistency of the EAT-26 was established in initial psychometric studies, which reported a Cronbach’s alpha of .85 [ 58 ]. In the current study, Cronbach’s alpha was .91. Previous research has also demonstrated that the EAT-26 has strong test-retest reliability (e.g., 0.84) [ 59 ], as well as acceptable criterion-related validity for differentiating between eating disorder populations and non-disordered populations [ 56 ]. In the current study, the EAT-26 was used to measure disordered eating behaviour, and the cut-off score of 20 and above was adopted to categorise increased disordered eating behaviour. Given how this construct is measured, from this point forward the present study will refer to EAT-26 responses as ‘restrictive’ type disordered eating.
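To make the scoring concrete, the sketch below computes EAT-26 totals and applies the ≥ 20 screening cut-off described above. It is illustrative only and not the authors' analysis code: the pandas DataFrame and the eat_1 to eat_26 column names are hypothetical, and items are assumed to be already coded 0–3.

```python
import pandas as pd

def score_eat26(df: pd.DataFrame) -> pd.DataFrame:
    """Score the EAT-26 from items already coded 0-3 (hypothetical columns eat_1..eat_26)."""
    items = [f"eat_{i}" for i in range(1, 27)]
    out = df.copy()
    out["eat26_total"] = out[items].sum(axis=1)       # higher = more disordered eating
    out["eat26_at_risk"] = out["eat26_total"] >= 20   # screening cut-off used in this study
    return out

# Example usage with two illustrative respondents
example = pd.DataFrame([[0] * 26, [1] * 26],
                       columns=[f"eat_{i}" for i in range(1, 27)])
print(score_eat26(example)[["eat26_total", "eat26_at_risk"]])
```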

ORTO-15.

The ORTO-15 is a 15-item screening measure that assesses orthorexia nervosa risk through questions regarding the perceived effects of eating healthy food (e.g. “Do you think that consuming healthy food may improve your appearance?”), eating habits (e.g. “At present, are you alone when having meals?”), and the extent to which concerns about food influence daily life (e.g. “Does the thought of food worry you for more than three hours a day?”) [ 19 ]. Responses are self-reported using a 4-point Likert scale of always, often, sometimes, or never. Individual items are coded and summed to derive a total score. Donini et al. [ 60 ] established a cut-off total score of 40; scores below 40 indicate orthorexia behaviours, whilst scores of 40 or above reflect normal eating behaviour. This cut-off score was determined by Donini et al. [ 60 ] as their results revealed that the ORTO-15 demonstrated good predictive capability at the threshold of 40 compared to other potential threshold values.

Although the ORTO-15 is the most widely accepted screening tool to assess orthorexia risk, it is still only partially validated [ 61 ], and inconsistencies in the measure’s reliability and validity exist in the current literature. For example, Roncero et al. [ 62 ] estimated the reliability of the ORTO-15, using Cronbach’s alpha, at between 0.20 and 0.23; however, after removing certain items, the reliability coefficients were between 0.74 and 0.83. Contrastingly, Costa and colleagues’ [ 63 ] review of the current literature surrounding orthorexia suggested adequate internal consistency (Cronbach’s alpha = 0.83 to 0.91) with all 15 items.

In the present study, a reliability analysis revealed unacceptable reliability for the ORTO-15 (α = .24). Principal components factor analysis identified two factors within the ORTO-15, one relating to dieting and the other to preoccupation with health food. Separate reliability analyses were performed on the items that comprised these two factors; the diet-related items did not have acceptable reliability (α = -.40), whilst the health food-related items bordered on acceptable reliability at α = .63. Consequently, only the health food-related items were retained in the current study, following consideration of Pallant’s [ 64 ] assertion that Cronbach’s alpha values are sensitive to the number of items on a scale and it is therefore common to obtain low values on scales with fewer than ten items. Pallant [ 64 ] notes that in cases such as this, it is appropriate to report the inter-item correlation of the items, while Briggs and Cheek [ 65 ] advise an optimal range for the inter-item correlation of .2 to .4, with the health food-related items in the current study obtaining an inter-item correlation of .25. Throughout this study, the construct measured by these ORTO-15 items will be referred to as ‘healthy’ type disordered eating to reflect this obsessive health food preoccupation and to differentiate between the two disordered eating dependent variables measured in the current study.
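A minimal sketch of the reliability statistics referred to above is given below. It computes Cronbach's alpha and the mean inter-item correlation for any set of item columns using only pandas and NumPy; the column names in the commented usage are hypothetical, and this is an illustration of the statistics rather than the exact SPSS procedure used in the study.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def mean_inter_item_correlation(items: pd.DataFrame) -> float:
    """Average of the off-diagonal Pearson correlations between items."""
    corr = items.corr().to_numpy()
    off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
    return off_diagonal.mean()

# Hypothetical usage, where orto_health_items lists the health food-related columns:
# alpha = cronbach_alpha(data[orto_health_items])
# miic = mean_inter_item_correlation(data[orto_health_items])
```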

Body image states scale.

The Body Image States Scale (BISS) by Cash and colleagues [ 66 ] is a six-item measure of momentary evaluative and affective experiences of one’s own physical appearance. The BISS evaluates the following aspects of current body experience: dissatisfaction-satisfaction with overall physical appearance; dissatisfaction-satisfaction with one’s body size and shape; dissatisfaction-satisfaction with one’s weight; feelings of physical attractiveness-unattractiveness; current feelings about one’s looks relative to how one usually feels; and evaluation of one’s appearance relative to how the average person looks [ 66 ]. Participants responded to these items using a 9-point Likert-type scale, which is presented in a negative-to-positive direction for half of the items and a positive-to-negative direction for the other half [ 66 ]. Respondents were instructed to select the statement that best captured how they felt “right now at this very moment”. A total BISS score was calculated by reverse-scoring the three positive-to-negative items, summing the six items, and finding the mean, with higher total BISS scores indicating more favourable body image states.
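As an illustration of the BISS scoring just described, the sketch below reverse-scores three items on the 1–9 scale and averages the six items. Which items are reverse-keyed, and the column names, are hypothetical placeholders rather than the published scoring key.

```python
import pandas as pd

BISS_ITEMS = [f"biss_{i}" for i in range(1, 7)]   # hypothetical column names, scored 1-9
REVERSED = ["biss_2", "biss_4", "biss_6"]         # placeholder reverse-keyed items

def score_biss(df: pd.DataFrame) -> pd.Series:
    """Return the mean BISS state score; higher = more favourable body image state."""
    items = df[BISS_ITEMS].copy()
    items[REVERSED] = 10 - items[REVERSED]        # reverse 1-9 responses (1<->9, 2<->8, ...)
    return items.mean(axis=1)
```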

During the development and implementation of the BISS, Cash and colleagues [ 66 ] report acceptable internal consistency and moderate stability over time, an anticipated outcome due to the nature of the BISS as a state assessment tool. The BISS was also appropriately correlated with a range of trait measures of body image, highlighting its convergent validity [ 66 ]. Cash and colleagues [ 66 ] also report that the BISS is sensitive to reactions in positive and negative situational contexts and has good construct validity. An acceptable Cronbach’s alpha coefficient of .88 was obtained in the current study.

Sociocultural Attitudes Towards Appearance Questionnaire—4.

The Sociocultural Attitudes Towards Appearance Questionnaire–4 (SATAQ-4) [ 67 ] is a 22-item self-report questionnaire that assesses the influence of interpersonal and sociocultural appearance ideals on one’s body image, eating disturbance, and self-esteem. Ratings are captured on a 5-point Likert scale which asks participants to specify their level of agreement with each statement by choosing from 1 (definitely disagree) through to 5 (definitely agree), with higher scores indicative of greater pressure to conform to, or greater internalisation of, interpersonal and sociocultural appearance ideals [ 67 ]. The five subscales of the SATAQ-4 measure: internalisation of thin/low body fat ideals, internalisation of muscular/athletic ideals, influence of pressures from family, influence of pressure from peers, and influence of pressures from the media [ 67 ]. For the purposes of the present study, the questions from the media pressure subscale were modified to enquire specifically about social media rather than traditional forms of media.

Across all samples in Schaefer et al.’s [ 67 ] study, the internal consistency of the five SATAQ-4 subscales is considered acceptable to excellent, with Cronbach’s alpha scores between 0.75 and 0.95. These subscales also displayed good convergent validity with other measures of body satisfaction, eating disorder risk, and self-esteem [ 67 ]. Pearson product-moment correlations between the SATAQ-4 subscales and convergent measures revealed medium to large positive associations with eating disorder symptomology, medium negative associations with body satisfaction, and small negative associations with self-esteem [ 67 ]. A Cronbach’s alpha of .87 was obtained in the present study, demonstrating acceptable internal consistency.
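A similar sketch for the SATAQ-4 is shown below, computing each subscale as the mean of its 1–5 Likert items. The item-to-subscale assignments here are hypothetical placeholders, not the published scoring key, and the snippet is illustrative rather than the authors' code.

```python
import pandas as pd

# Hypothetical item-to-subscale mapping; the real SATAQ-4 key should be used in practice.
SATAQ4_SUBSCALES = {
    "internalisation_thin":     [f"sataq_{i}" for i in range(1, 6)],
    "internalisation_muscular": [f"sataq_{i}" for i in range(6, 11)],
    "pressure_family":          [f"sataq_{i}" for i in range(11, 15)],
    "pressure_peers":           [f"sataq_{i}" for i in range(15, 19)],
    "pressure_media":           [f"sataq_{i}" for i in range(19, 23)],
}

def score_sataq4(df: pd.DataFrame) -> pd.DataFrame:
    """Return one column per subscale (mean of its 1-5 Likert items)."""
    return pd.DataFrame({name: df[cols].mean(axis=1)
                         for name, cols in SATAQ4_SUBSCALES.items()})
```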

Procedure.

Ethical approval for the present study was granted by the Charles Sturt University Human Research Ethics Committee (Approval number H21155) prior to data collection. Participants were directed to the study via an online link to QuestionPro where they were provided with an explanation of the study, their rights, and the contact details of relevant support services should they become distressed. Participants gave informed consent by clicking on a link that read, “I consent to participate” at the beginning of the survey and then again through the submission of their completed survey. Any incomplete responses were not included in the dataset. Data collection commenced on the 30th of July 2021 and ceased on the 1st of October 2021. In line with the cross-sectional and descriptive aspects of the research, participants were asked demographic questions about their gender, age, relationship status, ethnicity, country of residence, TikTok usage, and exercise habits. Participants then completed the experimental set in the following order: BISS (pre-test), SATAQ-4 (pre-test), EAT-26, ORTO-15, Experimental intervention (control or experimental TikTok video condition), SATAQ-4 (post-test), BISS (post-test), and debrief. All questionnaires presented to each participant were identical. Measures were not randomised to ensure that body image and internalisation were assessed at both pre- and post-test to evaluate the experimental manipulation.

Participants were randomly allocated to one of two conditions: experimental (pro-ana TikTok video) or control (“normal” TikTok video). Participants allocated to the experimental condition watched a compilation of TikTok videos containing explicit disordered eating messages such as young women restricting their food, displaying gallows humour about their disordered eating behaviour, starving themselves, and providing weight loss tips such as eating ice cubes and chewing gum to curb hunger. Participants in the experimental condition were also exposed to more implicit body image ideals typical of fitspiration-style content. This included thin women displaying their abdomens and cinched waists and dancing in two-piece swimwear, along with workout and juice-cleanse videos promising fast weight loss. Participants in the control condition viewed a compilation of TikTok videos containing scenes relating to nature, cooking and recipes, animals, and comedy. After viewing the 7- to 8-minute TikTok video, all participants completed measures of internalisation and body satisfaction again to assess the influence of either the pro-ana TikTok video or the normal TikTok video. The debrief statement made explicit to participants the rationale of the study and explained the non-normative content of the videos shown to the experimental group. A small financial incentive was offered via a prize draw of five vouchers.

Statistical analysis

The data from QuestionPro was collated and analysed using IBM SPSS Statistics software, Version 28. All measures and manipulations in the study have been disclosed, alongside the method of determining the final sample size. No data collection was conducted following analysis of the data. Data for this study is available via the Figshare data repository and can be accessed at https://doi.org/10.6084/m9.figshare.25756800.v1 . This study was not preregistered. Sample size was determined before any data analysis. A priori power analyses were conducted using G*Power to determine the minimum sample sizes required to test the study hypotheses. Results indicated the required sample sizes to achieve 90% power for detecting medium effects, with a significance criterion of α = 0.05, were: N = 108 for the mixed between-within subjects ANOVAs and N = 232 for the one-way between groups ANOVAs. According to these recommendations, adequate statistical power was achieved. All univariate and multivariate assumptions were checked and found to be met. All scales and independent variables were normally distributed.
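The a priori power calculation for the one-way ANOVAs can be approximated in code, as sketched below. This uses statsmodels' FTestAnovaPower as a stand-in for G*Power (an assumption, not the tool used by the authors) with a conventional medium effect size of Cohen's f = 0.25, which reproduces a total N in the region of the 232 reported above.

```python
from statsmodels.stats.power import FTestAnovaPower

# Approximate the G*Power calculation for a one-way ANOVA with four groups:
# medium effect (Cohen's f = 0.25), alpha = .05, power = .90.
analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.90, k_groups=4)
print(f"Required total N (one-way ANOVA, 4 groups): {n_total:.0f}")  # approximately 232
```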

Results

The analysis of the current study, including data screening processes, descriptive statistics, and hypothesis testing, is presented in this section. Hypothesis testing began with two separate mixed between-within subjects analysis of variance models (ANOVAs) to examine the impact of the experimental manipulation on the dependent variables of body image and internalisation of appearance ideals and pressure. Finally, the effect of time spent using TikTok daily on restrictive and ‘healthy’ disordered eating behaviour was explored cross-sectionally using two separate one-way between-subjects ANOVAs.

Data screening

Prior to statistical analysis, data were screened for entry errors and missing data. Of the 838 participants who initially consented to participate in the survey, 555 responses were insufficiently complete for data analysis. As participants were permitted to withdraw their consent by exiting the online survey, these results were excluded from all subsequent analyses. Of those who did not complete the study, the majority withdrew during the BISS (pre-test) and the ORTO-15, suggesting that these participants potentially experienced discomfort or distress when asked to reflect on their appearance and their eating behaviours. Of the completed responses, nine were excluded due to not meeting the study’s stated age eligibility criteria, and another case was excluded due to disclosure of a previous eating disorder diagnosis. The remaining data set comprised 273 participants.

Descriptive statistics

Demographic characteristics.

In the current sample, 50% of participants reported being currently single and most participants (83%) were Caucasian, with 71% of participants indicating that they spent up to two hours per day using TikTok. Further demographic information is provided in Table 1 .

Table 1. https://doi.org/10.1371/journal.pone.0307597.t001

#ForYou: TikTok consumption demographics.

Participants in the current study reported that entertainment (75%), fashion (59%), beauty/skincare (54%), cooking/recipes (51%) and life hacks/advice (51%) content frequently occurred on their For You page. Largely in keeping with this, participants reported experiencing the most enjoyment from viewing entertainment (84%), life hacks/advice (57%), home renovation (56%), recipes/cooking (56%), and fashion (54%) content on their For You page.

In the current sample, 64% of participants reported being exposed to disordered eating content via their For You page. Only 15% of participants had not been exposed to any negative content themes. Further descriptive For You page content information is displayed below in Table 2 .

Table 2. https://doi.org/10.1371/journal.pone.0307597.t002

Notably, 43% of the participant sample were frequently exposed to fitness and sports related content and the same percentage of the sample enjoyed seeing this content, suggesting that content broadly aligned with #fitspiration was consumed and appreciated by nearly half of participants. Concerningly, 40–60% of participants had been exposed to negative TikTok content via the For You Page, with content ranging from self-harm and suicidality to violence and illegal activity. No data was collected on the specifics of this content, however, and it is possible that some “negative” content may be framed from a proactive, preventative perspective, and this warrants further consideration.

Hypothesis testing: Cross-sectional analysis

Hypothesis 1: Daily TikTok use and disordered eating behaviour.

To test the cross-sectional analysis of this study, two separate one-way between-groups ANOVAs were conducted to explore the impact of daily amount of TikTok use on ‘healthy’ disordered eating and restrictive disordered eating behaviour. This was necessary as time on TikTok was measured categorically. Participants were divided into four groups according to their average daily time spent using TikTok (Low use group: 1 hour or less; Moderate use group: 1–2 hours; High use group: 2–3 hours; Extreme use group: 3+ hours). Homogeneity of variance could be assumed for each ANOVA as indicated by non-significant Levene’s Test Statistics.
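A minimal sketch of this group comparison is given below, using SciPy for Levene's test and the one-way ANOVA and deriving eta squared from sums of squares. Column names are hypothetical, and the snippet illustrates the analysis described here rather than reproducing the authors' SPSS output.

```python
import pandas as pd
from scipy import stats

def one_way_anova(df: pd.DataFrame, dv: str, group: str) -> dict:
    """Levene's test, one-way ANOVA, and eta squared for `dv` across `group` levels."""
    samples = [g[dv].to_numpy() for _, g in df.groupby(group)]
    levene_stat, levene_p = stats.levene(*samples)   # homogeneity of variance check
    f_stat, p_value = stats.f_oneway(*samples)
    grand_mean = df[dv].mean()
    ss_between = sum(len(s) * (s.mean() - grand_mean) ** 2 for s in samples)
    ss_total = ((df[dv] - grand_mean) ** 2).sum()
    eta_sq = ss_between / ss_total
    return {"levene_p": levene_p, "F": f_stat, "p": p_value, "eta_sq": eta_sq}

# Hypothetical usage:
# results = one_way_anova(data, dv="eat26_total", group="tiktok_use_group")
```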

There was no statistically significant difference at the p < .05 level in ORTO-15 scores for the four TikTok usage groups: F (3, 269) = .38, p = .78, indicating that ‘healthy’ disordered eating did not significantly differ across women who use TikTok for different periods of time per day. The effect size, calculated using eta squared, was .004, which is considered small in Cohen’s [ 68 ] terms. This small effect size is congruent with the non-significant finding.

The second ANOVA measuring differences among EAT-26 scores across the four TikTok usage groups also yielded a non-significant result: F (3, 269) = 1.21, p = .31. Eta squared was calculated as .01, representing a small effect size [ 68 ] consistent with this non-significant result. The means and standard deviations of the four TikTok usage groups across the dependent variables of ‘healthy’ and restrictive disordered eating, as measured by the ORTO-15 and the EAT-26, respectively, are displayed in Table 3.

Table 3. https://doi.org/10.1371/journal.pone.0307597.t003

Hypothesis testing: Experimental analyses

Hypothesis 2: Body image satisfaction across groups from pre-test to post-test.

To evaluate the effect of the experimental intervention on body image, a 2 x 2 mixed between-within subjects ANOVA was conducted with condition (experimental vs control) as the between-subjects factor and time (pre-manipulation vs post-manipulation) as the within-subjects factor. All assumptions were upheld, including homogeneity of variance-covariance as indicated by Box’s M (p > .001) and Levene’s (p > .05) tests [ 64 ].
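A sketch of how such a 2 x 2 mixed between-within ANOVA could be run outside SPSS is shown below. It assumes long-format data with hypothetical columns id, condition, time, and biss, and uses the pingouin library's mixed_anova function, which also reports partial eta squared (np2); this is an illustrative alternative, not the authors' analysis code.

```python
import pandas as pd
import pingouin as pg

def mixed_anova_biss(long_df: pd.DataFrame) -> pd.DataFrame:
    """2 x 2 mixed ANOVA: condition (between) x time (within) on BISS scores.

    Expects long-format data with one row per participant per time point and
    hypothetical columns: id, condition ('experimental'/'control'),
    time ('pre'/'post'), and biss (the BISS state score).
    """
    return pg.mixed_anova(data=long_df, dv="biss",
                          within="time", subject="id", between="condition")

# The returned table includes F, p-unc, and np2 (partial eta squared) for the
# main effects and the condition x time interaction.
```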

The interaction between condition and time was significant, Wilks’ Lambda = .98, F (1, 271) = 6.83, p = .009, partial eta squared = .03, demonstrating that the change in body image scores from pre-manipulation to post-manipulation was significantly different for the two groups. The body image satisfaction scores for women in both conditions decreased from pre-manipulation to post-manipulation. As anticipated, participants in the experimental condition reported a greater decrease in body image satisfaction than women in the control condition (see Table 4 ). This interaction effect is displayed in Fig 1 .

Fig 1. https://doi.org/10.1371/journal.pone.0307597.g001

Table 4. https://doi.org/10.1371/journal.pone.0307597.t004

Although not consequential to the testing of the experimental manipulation, statistically significant main effects were also found for time, Wilks’ Lambda = .89, F (1, 271) = 32.99, p < .001, partial eta squared = .109, and for condition, F (1, 271) = 4.42, p = .036, partial eta squared = .016. The means and standard deviations of these main effects are displayed in Table 4.

Hypothesis 3: Internalisation of societal beauty standards across groups from pre-test to post-test.

A second 2 x 2 mixed between-within subjects ANOVA was conducted to investigate the effect of the experimental manipulation on participants’ internalisation scores. All assumptions for the mixed model ANOVA were met with no violations.

A statistically significant interaction was found between group condition and time, Wilks’ Lambda = .97, F (1, 271) = 8.16, p = .005, partial eta squared = .029. This significant interaction highlights that the change in degree of internalisation at pre-manipulation and post-manipulation is not the same for the two conditions. Interestingly, the internalisation scores for women in the control group decreased from pre-manipulation to post-manipulation, whilst as anticipated, internalisation scores for women in the experimental group increased following exposure to the manipulation (see Table 5 ). This interaction is displayed in Fig 2 .

[Fig 2: https://doi.org/10.1371/journal.pone.0307597.g002]

[Table 5: https://doi.org/10.1371/journal.pone.0307597.t005]

No statistically significant main effects were found for time, Wilks’ Lambda = .987, F (1, 271) = 3.59, p = .059, partial eta squared = .013, or for condition, F (1, 271) = 2.65, p = .104, partial eta squared = .010. The means and standard deviations of internalisation scores for each condition at pre-manipulation and post-manipulation are displayed below in Table 5.

Discussion

The current study investigated the effect of TikTok content on women’s body image satisfaction and degree of internalisation of appearance ideals, and whether greater TikTok use contributed to increased disordered eating behaviour. In support of the hypotheses, exposure to pro-ana TikTok content significantly decreased participants’ body image satisfaction and increased participants’ degree of internalisation of appearance ideals. The hypothesis that greater daily TikTok use would contribute to increased disordered eating behaviour was not supported, as no statistically significant differences in restrictive disordered eating or ‘healthy’ disordered eating were found between the low, moderate, high, and extreme daily TikTok use groups.

Cross-sectional findings

Daily TikTok use and disordered eating behaviour.

Contrary to expectations, differences among groups on measures of restrictive disordered eating and ‘healthy’ disordered eating did not reach statistical significance. The proposed hypothesis that greater daily TikTok usage would be associated with disordered eating behaviour and attitudes was therefore unsupported. Although the differences lacked statistical support, participants categorised in the ‘high’ and ‘extreme’ daily TikTok use groups reported average EAT-26 scores of 18.16 and 19.09, respectively. Considering that an EAT-26 cut-off of ≥ 20 indicates potential clinical psychopathology, these mean scores suggest that exposure to TikTok content for two or more hours per day may contribute to a clinical degree of restrictive disordered eating.

The failure of the present study to detect any significant differences in disordered eating behaviours among participants with different TikTok daily usage does not align with the Transactional Model [ 33 ]. According to this model, risk factors such as low self-esteem and high thin ideal internalisation may predispose an individual to seek gratification via social media, resulting in body dissatisfaction and negative affect. The Transactional Model therefore proposes that a positive correlation exists between time spent on social media and body image dissatisfaction. Our findings also do not align with the conclusions Frieiro Padín et al. [ 34 ] drew from their review of the literature, in which a strong connection was identified between time on social media and heightened body image concerns and internalisation of the thin ideal, as well as eating disorder psychopathologies, though a distinction in outcome measures must be noted.

Based on the aforementioned sociocultural theory and previous research [see 28 , 43 , 48 ], it was assumed that increased body dissatisfaction resulting from increased time spent on social media (as stipulated by the Transactional Model) would lead to greater disordered eating behaviour. However, this was not supported statistically in the data. As postulated by Culbert et al. [ 69 ], disordered eating behaviour may instead only be a risk of media exposure if individuals are prone to endorse thin ideals. Individuals in the present study who reported ‘high’ and ‘extreme’ daily TikTok use may have felt satisfied with their bodies and experienced lower thin-ideal internalisation. This could have buffered the negative effect of greater TikTok content exposure and accounted for the lack of significant differences in disordered eating behaviour between groups. The quantity of TikTok consumption remains a pertinent question for disordered eating behaviour: taken together with the present study’s brief experimental manipulation, the findings suggest that a high frequency of daily TikTok use does not necessarily contribute to greater disordered eating behaviour than short exposures to this content.

Content presented to the pro-ana TikTok group included a mix of explicit and implicit pro-eating disorder messages as well as fitspiration content. Fitspiration content presented in the current study included workout videos to achieve a “smaller waist” and “toned abs”, in which female creators with slim, toned physiques sporting activewear took viewers through a series of exercises, advising viewers that they would “see results in a week”. Diet-related fitspiration content included the concoction of juices to “get rid of belly fat” and advice on the best “diet for a small waist”, which requires avoidance of all meat, dairy, junk food, and soda, and above all, making “no excuses”. Fitspiration-style content in the current study totalled one minute, compared to disordered eating themes, which totalled six minutes. The integration of these various types of content, although reflective of the For You function in TikTok, impeded our ability to determine the singular impact of fitspiration or disordered eating content, respectively, on body image and internalisation of societal beauty standards, but did reflect social media as it is consumed beyond experimental research settings.

Experimental findings

TikTok and body image states.

The hypothesis that women exposed to pro-ana TikTok content would experience a significant decrease in body image compared to women who viewed the control TikTok content was supported. The present study found a significant interaction effect for body image between group condition (control vs experimental) and time period (pre-manipulation vs post-manipulation), as well as significant main effects. It is important to note that the statistic of interest in evaluating the success of the experimental manipulation is the interaction effect; thus, main effects must be interpreted secondarily and with caution [ 64 ]. Women in the experimental group reported significantly lower body image satisfaction after exposure to the pro-ana TikTok content, both relative to their pre-exposure scores and relative to women who viewed the control content. This finding corroborates Festinger’s [ 27 ] Social Comparison Theory, which posits that people naturally evaluate themselves in comparison to others. Exposure to the pro-ana TikTok content, consisting of various thin bodies and messaging around weight loss, may have provided the opportunity for women to engage in maladaptive upward social comparisons, resulting in reduced body image satisfaction. The present study upholds previous findings of Engeln-Maddox, Tiggemann, McComb and Mills, and Gibson [ 29 , 32 , 39 , 70 ], who suggest that visual exposure to thin bodies may adversely affect one’s level of body image satisfaction, and extends this research by replicating the finding in the context of a contemporary media platform, TikTok, and by utilising an experimental design.

Contradicting the present study and previous research, Pryde and Prichard [ 42 ] found no significant increase in young women’s body dissatisfaction following exposure to fitspiration TikTok content. A potential explanation for this finding is that the performance of physical movements captured in fitspiration videos may shift viewers’ focus from aesthetics to functionality, highlighting physical competencies and capabilities, which has been shown to improve body image satisfaction in young women [ 71 ]. Unlike the present study, Pryde and Prichard’s [ 42 ] fitspiration content did not include the harmful themes that typically accompany such content, potentially reducing the negative implications for body image satisfaction relative to exposure in real-world contexts.

Interestingly, women in the control group also reported a statistically significant decrease in body image satisfaction after viewing the neutral TikTok content, a finding that underscores the possible complexity of social media’s influence on body image, as identified in research by Hülsing [ 72 ]. This is an unexpected finding, as the TikTok content displayed to the control group was selected specifically to be unrelated to appearance ideals and pressures. One possible reason for this result is the repeated administration of the BISS within a short time period. Completing the BISS twice may have caused participants to focus more attention on their body appearance than usual, resulting in more critical appraisals regardless of the experimental stimuli to which they were exposed. This notion aligns with previous research that found that focusing on the appearance of the body was associated with lower body image satisfaction, whereas focusing on the function of the body was associated with more positive body image states [ 71 ].

Another potential explanation for this finding is that the control group stimuli were contaminated and produced an unintentional effect on body image scores. Two minutes of footage within the seven-minute control group TikTok compilation presented the human body, including legs, arms, and hands. Although this body-related content was neutral in nature, it may be that even ‘harmless’ representations of the human body are sufficient to elicit a social comparison response in participants or, in some capacity, reinforce the #fitspiration motifs commonly depicted on TikTok [ 1 ], thereby impacting body image scores at post-manipulation. This possible explanation has implications for TikTok use and women’s body image, as it suggests that viewing even benign content featuring human bodies for less than 10 minutes can have an immediate detrimental impact on body image states, even when this content is unrelated to body dissatisfaction, thinness, or weight loss. Furthermore, although a statistically significant body image decrease was detected in the control group, this finding must be interpreted with caution due to the significant interaction effect obtained.

TikTok and internalisation of societal beauty standards.

In accordance with the hypothesis, women in the experimental group reported a significant increase in their degree of internalisation of appearance ideals following exposure to pro-ana TikTok content. Women in the experimental group also reported significantly greater internalisation of appearance ideals than women in the control group. Conversely, internalisation scores of the control group decreased after viewing the neutral TikTok content. These findings are in line with sociocultural theory, as women reported increased internalisation of societal beauty standards following exposure to media content explicitly and implicitly portraying the thinness ideal. The present study supports Mingoia et al.’s [ 53 ] meta-analysis, which yielded a positive association between social networking site use and the extent of internalisation of the thin ideal, and furthers this notion by replicating the finding with TikTok specifically and by utilising an experimental design.

In the current study, participants were subject to a single brief exposure to pro-ana TikTok content, whereas most of the sample indicated that their TikTok use was up to two hours per day. This suggests that the degree of internalisation of appearance ideals in participants’ lives outside of the experiment is likely to be much greater. Mingoia et al. [ 53 ] also found that the use of appearance-related features on social networking sites, such as posting and viewing photos and videos, demonstrated a stronger relationship with internalisation of the thin ideal than the use of features that were not appearance-related, such as messaging and writing status updates. As TikTok is a video-sharing app and most of its content generally features full-body camera shots rather than face or head shots, this finding suggests that TikTok users could potentially internalise body-related societal standards to a greater extent than users of other social media apps that typically feature head shots.

The finding that women internalised societal beauty standards to a greater degree after being exposed to pro-ana TikTok content corroborates sociocultural theory’s emphasis on the significance of social influences in internalisation. TikTok users may be exposed to all three social influences (i.e., media, peers, and family) simultaneously on a single platform, which may encourage internalisation of appearance ideals in a more profound manner than any of these three influences in isolation. One point of difference between TikTok and other social media apps is that much content on the app is generated by “ordinary” individuals, rather than supermodels or celebrities. This enables both blatant and more insidious diet-related content to circulate on the app with less policing and scrutiny than content produced by an influencer or celebrity, who may be more likely to be criticised or cancelled for socially irresponsible messaging; it also provides the opportunity for more horizontal social comparisons and peer-to-peer style interactions rather than upward social comparisons.

Indeed, in their study of American teens, Mueller et al. [ 52 ] identified that girls were especially likely to engage in weight loss behaviour if a high proportion of girls with a similar BMI were also engaging in weight loss behaviours. This indicates that internalisation was strongest when appearance ideals were promoted by alike peers. Because much pro-ana TikTok content is created by young women, Mueller et al.’s [ 52 ] finding has problematic implications for young female users of TikTok, in that harmful diet-related messages could be internalised to a greater extent on TikTok than on other platforms and potentially lead to body image disturbances, disordered eating behaviour, and other negative outcomes among young women.

General discussion

The findings of the current study are important but must also be understood within the broader context of participants’ daily lives beyond their participation in this study. Every day, female-identifying individuals are exposed to a multitude of different sources of information from which body image-related stimuli can be drawn. The present study’s experiment was not conducted in a controlled environment due to its online nature; therefore, researchers were not able to assess and control for other pieces of body image-related information that participants might have consumed prior to participation and that may have been salient for their body image. Further research is required to identify how sustained a change in body image states, as measured by the BISS, may be over time.

The findings of this study provide some insights into how social media influences disordered eating behaviour and mental health, a theoretical gap in the literature that Choukas-Bradley et al. [ 6 ] highlight as holding back research in this domain. In particular, the findings of the current study indicate that short periods of exposure to disordered TikTok content have an effect, while the high-range EAT-26 scores observed for those who engaged with TikTok for two or more hours a day also raise questions about duration of exposure. Nonetheless, our findings demonstrate that short exposure periods are sufficient to have a negative effect on body image and internalisation of the thin ideal.

One point that may be readily overlooked in developing a theoretical framework around social media’s influence is that the narrative arc of TikTok videos is such that users are exposed to many short stories in quick succession, which may have a different effect from longer-form content from a single content creator. As Pierce [ 2 ] notes, the speed of exposure to overlapping but separate narratives depicted in successive videos is an important feature of TikTok content and may contribute to the influence of such platforms on disordered eating and body attitudes. Each piece of content serves as a standalone narrative but may also overlap and interact with the viewer’s experience of the next video they watch to build a cumulative, normalised narrative of disordered body- and eating-practices.

In the current study, participants who engaged with TikTok for two to three hours a day were classified as high users, and those who used TikTok for three or more hours were classified as extreme users. These rates of usage may, however, be quite normative, with Santarossa and Woodruff [ 73 ] citing three to four hours a day on social media as normative for their sample of young adults, though notably participants in the current study were only questioned about their TikTok usage, not their general use of social media.

While we examined the effect of pro-ana content in this study, the fact that some changes were observed in the control group as well as the experimental group indicates that the social media environment, characterised as it is by idealisation, instant feedback, and readily available social comparison [ 6 ], may play a general role in diminishing positive body image attitudes and healthy aspirations. This is supported by Tiggemann and Slater’s [ 35 , 36 ] research, in which social media usage was found to correlate positively with higher levels of body image concerns, in contrast to time spent on the internet more generally; this may be particularly true for visually oriented platforms that sensitise viewers to their own appearance and that of others. As noted previously, of the predominantly visual platforms TikTok and Instagram, TikTok videos are commonly framed so that the subject’s whole body is visible, particularly in dance videos and #GymTok content, whereas Instagram appears more likely to feature cameo-style head-shot videos. This further suggests that TikTok may provide more body-related stimuli than other platforms, even when the intention of the content does not relate to body image or #fitspiration.

Importantly, the algorithm on TikTok functions in such a way that those who actively seek out body positivity content may also be exposed to nefarious body-related content such as body checking, a competitive, self-surveillance style of content in which users are encouraged to test their weight, for example by attempting to drink from a glass of water while their arm encircles another person’s waist. As McGuigan [ 74 ] reports, watching just one body checking video may result in hundreds more filtering through a user’s For You page, with those actively attempting to seek out positive body image content likely to be inadvertently exposed to disordered content due to the configuration of the algorithm. This function of the For You page is demonstrated in the current study, with 64% of participants reporting having seen disordered eating content on their For You page, a higher proportion than for any other kind of harmful content, including suicide and bullying. The current study did not assess participants’ consumption of #FoodTok, #GymTok, and #Fitspiration content. Engagement with these dimensions of TikTok, and the type of content that participants seek out via the search function, warrant consideration in future research.

The TikTok algorithm underscores Logrieco et al.’s [ 18 ] findings that even anti-anorexia content can be problematic, especially given the complexities in determining and controlling what is performatively problematic, including videos discussing recovery and positive body attitudes that may, somewhat paradoxically, further body policing and competition among users and consumers of social media content. Furthermore, as Logrieco et al. [ 18 ] highlight, TikTok is replete with both pro-ana and much more implicit body-related content that may be harmful to viewers, not to mention to those creating the content, whose experiences also warrant consideration.

Theoretical and practical implications

The present study bridged an important gap in the literature by utilising both experimental and cross-sectional designs to examine the influence of pro-ana TikTok content on users’ body image satisfaction, internalisation of body ideals, and disordered eating behaviours. While the negative impact of social media on body image and eating behaviours has been established in relation to platforms such as Instagram and Twitter, TikTok’s rapid emergence and unique algorithm warrant independent analysis.

The present findings have important theoretical implications for the understanding of sociocultural influences on orthorexia nervosa development. Notably, this study is one of the first to examine the association between orthorexia nervosa and the tripartite model of disordered eating using an experimental design. The results suggest that the internalisation of sociocultural appearance ideals predicts the development of ‘healthy’ disordered eating, as proposed by tripartite theory. Western cultural ideals do seem to influence the expression of orthorexic tendencies; thus, caution should be exercised by women when interacting with appearance-related TikTok content.

Unlike explicit pro-ana content, which is open to condemnation, the moral and health-related discourses underpinning much body-related content, in which thinness and health are espoused as goodness, reflect a new trend of diet culture masquerading as wellness culture [ 20 , 21 ]. Questions are raised about the ethics of social media algorithms when the technologically fostered link between recovery-focused content and disordered content on TikTok is laid bare, particularly considering that extant research has found individuals with experience of eating disorders often seek out support, safety, and connection online [ 49 ], and in doing so on a platform like TikTok may be exposed to more disordered eating content than the average user. Given that visual social media platforms are associated with higher levels of dysfunction in relation to body image [ 4 ], the policy and ethics of such platforms warrant scrutiny from a variety of stakeholders in management, marketing, and technology regulation, with psychology playing an important role in how these platforms are marketed. As traditional journalistic platforms have been subjected to scrutiny and reform, so too must a climate of accountability be established within the social media nexus.

The widespread growth of social media may warrant greater concern than traditional forms of mass media, not only because of its full-time accessibility and diverse range of platforms, but also due to the prevalence of peer-to-peer interactions. Social comparison theory has traditionally considered more removed, higher-status influences (e.g., celebrities, actors, supermodels) to be a greater source of pressure than those in an individual’s natural environment (e.g., family and peers). Re-examination of this theoretical perspective is warranted considering the contemporary challenges of social media and the perpetuation of body image messages from alike peers. Furthermore, a diverse range of “content” may trigger disordered body- and eating-related attitudes, including #fitspiration and #GymTok, which poses challenges for social media platforms in regulating content. The inclusion of orthorexia in this milieu highlights the disordered nature of seemingly benign health practices and social media content.

That TikTok content containing explicit and implicit pro-ana themes may readily remain on the app uncensored exemplifies the importance of protective strategies to build resilience at the individual level. One such protective strategy is shifting focus from body appearance to functionality. Alleva and colleagues [ 71 ] investigated the Expand Your Horizon programme, designed to improve body image by training women to focus on body functionality. They report that women who engaged with the programme experienced greater satisfaction with body image and functionality, greater body appreciation, and reduced self-objectification compared to women who did not engage with the programme. Health professionals involved in the care of women with eating disorders and other mental health issues should also be educated to ensure they are knowledgeable about the social media content their clients may be exposed to, equipping them with skills to engage in conversations about the potential detrimental impacts of viewing pro-ana and other harmful TikTok content [ 53 ].

The administration of such programmes in schools, universities, community groups, and clinical settings could prove effective in preventing the development of disordered eating and body image disturbance and may reduce symptom severity of an established disorder. Such programmes must be developed with great care, however, given the propensity for even anti-anorexia content to have a negative effect on those consuming it [ 18 ]. The development of self-compassion may also build resilience in women, with research confirming that self-compassion can be effectively taught [ 75 ]. Subsequently, programmes such as Compassion Focused Therapy (CFT) have been developed in which clients are trained to develop more compassionate self-talk during negative thought processes and to foster more constructive thought patterns [ 76 ]. The value of CFT has been established in the literature with both clinical and non-clinical samples, with promising outcomes particularly for those high in self-criticism [ 77 ].

Young women should be provided with media literacy tools that can assist in advancing critical evaluations of the online world. Digital manipulation of advertising and celebrity images is well known to many people; however, this awareness may be lacking regarding social media images, as they are generally disseminated within one’s peer network rather than outside of it [ 33 ]. Media literacy interventions may educate women about how social media perpetuates appearance ideals that are often unrealistic and unattainable [ 53 ]. As an example, Posavac et al. [ 78 ] revealed that a single media literacy intervention resulted in a reduction in women’s social comparison to body ideals portrayed in the media.

Such interventions might be extended to female-identifying TikTok users to educate them on the manipulation of videos to produce idealised portrayals of the self. Media literacy education should commence at an early age, teaching children, adolescents, and adults to understand the influence of implicit messages conveyed through social media and to create media content that is responsible and psychologically safe for others [ 79 ]. Increased understanding of the messages portrayed by social media content may prevent thin-ideal endorsement and internet misuse. Notably, however, the most effective approach would be to address the problem at its source and increase the regulation of social media companies, rather than upskilling users in how to respond to harmful online environments, which creates further labour for the individual while allowing organisations to continue to produce harmful but easily monetisable content.

Limitations and future directions

To meet the requirements for running multivariate analyses, the continuous body image and internalisation scores were dichotomised using a median split to create ‘low’ and ‘high’ groups for each variable. Although dichotomisation was necessary to perform appropriate analyses, and power analyses deemed the sample size adequate following the median split, dichotomising these variables may have contributed to a loss of statistical power to detect true effects.
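A median split of this kind can be sketched as follows; the data frame and column names are invented for illustration and are not the study's data.

```python
# Minimal sketch of a median split: scores at or below the median become
# 'low', scores above it become 'high'. Column names are illustrative.
import pandas as pd

df = pd.DataFrame({"body_image": [3.1, 4.5, 2.8, 5.0, 4.1, 3.6]})
median = df["body_image"].median()
df["body_image_group"] = (df["body_image"] > median).map({True: "high", False: "low"})
print(df)
```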

Limitations also arise from the use of the ORTO-15 in the present study. The ORTO-15 does not account for different lifestyle factors that may alter a participant’s responses, such as dietary restrictions, food intolerances, or medical dietary guidelines. The discrepancies in the literature surrounding the psychometric properties of the ORTO-15 may be attributable to the lack of established diagnostic criteria for orthorexia nervosa, cultural differences in expressions of eating disorders, and difficulty comparing research results in determining orthorexia nervosa diagnoses due to inconsistencies in test questions and cut-off values [ 61 ]. Due to unacceptable reliability in the present study, a factor analysis was performed, which identified a factor relating to health food preoccupation. This factor was used as the ORTO-15 measure; data from these five items were used in analyses and are referred to throughout the present study as ‘healthy’ disordered eating. Using the five items related to ‘healthy’ disordered eating rather than the complete 15-item scale may not have accurately assessed participants’ degree of orthorexic tendencies. Despite these limitations, the ORTO-15 is the only accepted measure of orthorexic tendencies available [ 63 ]. Additionally, more limitations would likely have been encountered by using the full 15-item measure, which lacked reliability, than by utilising the 5-item factor with acceptable reliability.
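Because the decision to retain the five-item factor hinged on internal consistency, a minimal sketch of Cronbach's alpha for an item subset is shown below, using invented responses; the function is generic and not the authors' code.

```python
# Cronbach's alpha for a set of items:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_participants, n_items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented responses to a five-item subscale (rows = participants)
responses = np.array([
    [1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 3, 4],
])
print(round(cronbach_alpha(responses), 2))
```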

Future studies of TikTok and disordered eating behaviour should incorporate a measure of social comparison to verify whether social comparison is the vehicle through which women experience decreased body image satisfaction after viewing TikTok content. Future research should also examine the influence of TikTok content creation on body image, internalisation of thinness, and disordered eating behaviour and explore the association between what individuals consume on TikTok and the social media content that they produce. This research should be conducted using more diverse samples of women, including transgender women, to determine whether the findings of the present study are relevant for this population given the unique challenges regarding body image and societal beauty standards that they may experience.

Longitudinal studies are also warranted to examine the effect of exposure to pro-ana TikTok content over time, and to assess the effects of pro-ana TikTok content on body image satisfaction and eating disorder symptomatology over time. Further research on orthorexia nervosa is needed to establish a more reliable measure of orthorexic tendencies; this would enable future investigation of the impact of pro-ana TikTok content on the development of orthorexia nervosa, as well as of individual differences as predisposing factors in the development of orthorexic tendencies. Finally, future research should examine the efficacy of media literacy and self-compassion intervention programmes as protective factors specifically in the TikTok context, where disordered eating messages are more explicit in nature than in traditional media and other social media platforms.

The findings of the current study support the notion that pro-ana TikTok content decreases body image satisfaction and increases internalisation of societal beauty standards in young women. This research is timely given the reliance on social media for social interaction, particularly among young adults. Our findings indicate that female-identifying TikTok users may experience psychological harm even when explicit pro-ana content is not sought out and even when their TikTok use is time-limited. The findings of this study suggest that cultural and organisational change is needed. There is a need for more stringent controls and regulations from TikTok in relation to pro-ana content, as well as more subtle forms of disordered eating- and body-related content. Prohibiting or restricting access to pro-ana content on TikTok may reduce the development of disordered eating and the longevity and severity of established eating disorder symptomatology among young women in the TikTok community. Steps are currently being taken to delete dangerous content, including blocking searches such as “#anorexia”; however, users circumvent these controls in various ways, and further regulation is required. Unless effective controls are implemented within the platform to prevent the circulation of pro-ana content, female-identifying TikTok users may continue to experience immediate detrimental consequences for body image satisfaction and thin-ideal internalisation, and may experience an increased risk of developing disordered eating behaviours.

  • 1. Burger MR. Correlation Between Social Media Use and Eating Disorder Symptoms: A Literature Review. California Polytechnic State University. 2022. https://digitalcommons.calpoly.edu/kinesp/20/ .
  • 2. Pierce S. Alimentary Politics and Algorithms: The Spread of Information about “Healthy” Eating and Diet on TikTok. Washington University. 2022. https://openscholarship.wustl.edu/undergrad_etd/40\ .
  • 3. Asano E. How Much Time Do People Spend on Social Media? 2017 Jan 1 [cited 15 December 2022]. In: Social Media Today [Internet]. Industry Dive 2023. https://www.socialmediatoday.com/marketing/how-much-time-do-people-spend-social-media-infographic .
  • 10. Green S. An Experimental Study on the Effects of Pro-Anorexia Content on Eating Disorder Development. M.A. Thesis, Western Kentucky University. 2019. https://digitalcommons.wku.edu/theses/3138 .
  • 13. Shepherd J. 20 Essential TikTok statistics you need to know in 2022. 2022 Oct 1 [cited 1 December 2022]. In: The Social Shephard [Internet]. 2023. https://thesocialshepherd.com/blog/tiktok-statistics .
  • 14. Kemp S. Digital 2020: Global Digital Overview. Datareportal. 2020 Jan 30 [Cited 2022 November 29]. https://datareportal.com/reports/digital-2020-global-digital-overview .
  • 15. TikTok. Privacy Policy. Updated 2023 Aug 4 [cited 5 May 2021]. https://www.tiktok.com/legal/page/row/privacy-policy/en .
  • 17. Muno DA. Orthorexia nervosa as a distinct eating disorder category: Similarities in alexithymia, attachment, perfectionism, body dissatisfaction, & eating attitudes. Psy.D. Dissertation, Alliant International University. 2020. https://www.proquest.com/openview/e8979ba30dd9eeefd80214e6556c1960/1?pq-origsite=gscholar&cbl=18750&diss=y .
  • 23. Cash T. Cognitive-behavioural perspectives on body image. In: Cash T, Pruzinsky T, editors. Body Image: A handbook of theory, research, and clinical practice. New York: Guilford Press; 2002. pp. 38–46.
  • 37. DeMuynck JP. The Femininity Diet: A Rhetorical Analysis of the Discursive Formation of Femininity and Weight Loss in Contemporary Social Media Promotions. M.A. Thesis, Texas State University. 2020. https://digital.library.txst.edu/items/a15097cd-b076-4ccd-8f80-2b5d373550eb .
  • 48. Thompson JK, Heinberg LJ, Altabe M, Tantleff-Dunn S. Exacting beauty: Theory, assessment, and treatment of body image disturbance. American Psychological Association; 1999. https://psycnet.apa.org/record/1999-02140-000 .
  • 64. Pallant J. SPSS survival manual: A step by step guide to data analysis using IBM SPSS. 6th ed. McGraw-Hill Education (UK); 2016.
  • 68. Cohen JW. Statistical power analysis for the behavioural sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
  • 70. Gibson KS. Thinspiration and Fatspiration on Body Dissatisfaction: The Roles of Social Comparisons and Anti-Fat Attitudes. M.A. Dissertation, Texas State University. 2021.
  • 72. Hülsing GM. #Triggerwarning: Body Image: A qualitative study on the influences of TikTok consumption on the Body Image of adolescents. BSc Thesis, University of Twente. 2021.
  • 74. McGuigan S. Body checking is dangerous and it’s all over TikTok. 2022 June 14 [cited 1 December 2022]. In Refinery29 [Internet]. 2022. https://www.refinery29.com/en-au/body-checking-tiktok-trend .
  • 76. Gilbert P. Compassion: Conceptualisations, research, and use in psychotherapy. London, UK: Routledge; 2005. https://books.google.com.au/books?id=-I6NAgAAQBAJ&dq=Gilbert,+P.+(2005).+Compassion:+Conceptualisations,+research,+and+use+in+psychotherapy&lr=&source=gbs_navlinks_s .


