Experimental Research: What it is + Types of designs

Experimental Research Design

Any research conducted under scientifically acceptable conditions uses experimental methods. The success of an experimental study hinges on the researcher confirming that the change in the dependent variable is caused solely by the manipulation of the independent variable. The research should establish a notable cause-and-effect relationship.

What is Experimental Research?

Experimental research is a study conducted with a scientific approach using two sets of variables. The first set acts as a constant, against which you measure the differences in the second set. Experimental research is a quantitative research method.

If you don’t have enough data to support your decisions, you must first determine the facts. This research gathers the data necessary to help you make better decisions.

You can conduct experimental research in the following situations:

  • Time is a vital factor in establishing a relationship between cause and effect.
  • The behavior between cause and effect is invariable.
  • You wish to understand the importance of a cause-and-effect relationship.

Experimental Research Design Types

The classic experimental design definition is: “The methods used to collect data in experimental studies.”

There are three primary types of experimental design:

  • Pre-experimental research design
  • True experimental research design
  • Quasi-experimental research design

The way you classify research subjects based on conditions or groups determines the type of research design you should use.

1. Pre-Experimental Design

A group, or various groups, are kept under observation after implementing cause and effect factors. You’ll conduct this research to understand whether further investigation is necessary for these particular groups.

You can break down pre-experimental research further into three types:

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Design

True experimental design relies on statistical analysis to prove or disprove a hypothesis, making it the most accurate form of research. Of the types of experimental design, only true design can establish a cause-and-effect relationship within a group. In a true experiment, three factors must be satisfied:

  • There is a control group, which will not be subject to changes, and an experimental group, which will experience the changed variables.
  • There is a variable that can be manipulated by the researcher.
  • Participants are randomly distributed between the groups.

This experimental research method commonly occurs in the physical sciences.

3. Quasi-Experimental Design

The word “quasi” indicates similarity. A quasi-experimental design is similar to a true experimental design, but it is not the same. The difference between the two lies in how participants are assigned to groups: in this research, an independent variable is manipulated, but the participants of a group are not randomly assigned. Quasi-experimental research is used in field settings where random assignment is either irrelevant or not required.

Importance of Experimental Design

Experimental research is a powerful tool for understanding cause-and-effect relationships. It allows us to manipulate variables and observe the effects, which is crucial for understanding how different factors influence the outcome of a study.

But the importance of experimental research goes beyond that. It’s a critical method for many scientific and academic studies. It allows us to test theories, develop new products, and make groundbreaking discoveries.

For example, this research is essential for developing new drugs and medical treatments. Researchers can understand how a new drug works by manipulating dosage and administration variables and identifying potential side effects.

Similarly, experimental research is used in the field of psychology to test theories and understand human behavior. By manipulating variables such as stimuli, researchers can gain insights into how the brain works and identify new treatment options for mental health disorders.

It is also widely used in the field of education. It allows educators to test new teaching methods and identify what works best. By manipulating variables such as class size, teaching style, and curriculum, researchers can understand how students learn and identify new ways to improve educational outcomes.

In addition, experimental research is a powerful tool for businesses and organizations. By manipulating variables such as marketing strategies, product design, and customer service, companies can understand what works best and identify new opportunities for growth.

Advantages of Experimental Research

Experimental research runs throughout human life. Babies do their own rudimentary experiments (such as putting objects in their mouths) to learn about the world around them, while older children and teens do experiments at school to learn more about science.

Early scientists used this research to prove that their hypotheses were correct. For example, Galileo Galilei and Antoine Lavoisier conducted various experiments to discover key concepts in physics and chemistry. The same is true of modern experts, who use this scientific method to see if new drugs are effective, discover treatments for diseases, and create new electronic devices (among others).

It’s vital to test new ideas or theories. Why put time, effort, and funding into something that may not work?

This research allows you to test your idea in a controlled environment before marketing. It also provides the best method to test your theory thanks to the following advantages:


  • Researchers have greater control over variables, making it easier to obtain the desired results.
  • The subject or industry does not impact the effectiveness of experimental research; any industry can implement it for research purposes.
  • The results are specific.
  • After analyzing the results, you can apply your findings to similar ideas or situations.
  • You can identify the cause and effect of a hypothesis. Researchers can further analyze this relationship to determine more in-depth ideas.
  • Experimental research makes an ideal starting point. The data you collect is a foundation for building more ideas and conducting more action research.

Whether you want to know how the public will react to a new product or if a certain food increases the chance of disease, experimental research is the best place to start. Begin your research by finding subjects using QuestionPro Audience and other tools today.


How the Experimental Method Works in Psychology



The experimental method is a type of research procedure that involves manipulating variables to determine if there is a cause-and-effect relationship. The results obtained through the experimental method are useful but do not prove with 100% certainty that a singular cause always creates a specific effect. Instead, they show the probability that a cause will or will not lead to a particular effect.

At a Glance

While there are many different research techniques available, the experimental method allows researchers to look at cause-and-effect relationships. Using the experimental method, researchers randomly assign participants to a control or experimental group and manipulate levels of an independent variable. If changes in the independent variable lead to changes in the dependent variable, it indicates there is likely a causal relationship between them.

What Is the Experimental Method in Psychology?

The experimental method involves manipulating one variable to determine if this causes changes in another variable. This method relies on controlled research methods and random assignment of study subjects to test a hypothesis.

For example, researchers may want to learn how different visual patterns may impact our perception. Or they might wonder whether certain actions can improve memory. Experiments are conducted on many behavioral topics, including perception and memory.

The scientific method forms the basis of the experimental method. This is a process used to determine the relationship between two variables—in this case, to explain human behavior.

Positivism is also important in the experimental method. It refers to factual knowledge that is obtained through observation, which is considered to be trustworthy.

When using the experimental method, researchers first identify and define key variables. Then they formulate a hypothesis, manipulate the variables, and collect data on the results. Unrelated or irrelevant variables are carefully controlled to minimize the potential impact on the experiment outcome.

History of the Experimental Method

The idea of using experiments to better understand human psychology began toward the end of the nineteenth century. Wilhelm Wundt established the first formal psychology laboratory in 1879.

Wundt is often called the father of experimental psychology. He believed that experiments could help explain how psychology works, and used this approach to study consciousness.

Wundt coined the term "physiological psychology." This is a hybrid of physiology and psychology, or how the body affects the brain.

Other early contributors to the development and evolution of experimental psychology as we know it today include:

  • Gustav Fechner (1801-1887), who helped develop procedures for measuring sensations according to the size of the stimulus
  • Hermann von Helmholtz (1821-1894), who analyzed philosophical assumptions through research in an attempt to arrive at scientific conclusions
  • Franz Brentano (1838-1917), who called for a combination of first-person and third-person research methods when studying psychology
  • Georg Elias Müller (1850-1934), who performed an early experiment on attitude which involved the sensory discrimination of weights and revealed how anticipation can affect this discrimination

Key Terms to Know

To understand how the experimental method works, it is important to know some key terms.

Dependent Variable

The dependent variable is the effect that the experimenter is measuring. If a researcher was investigating how sleep influences test scores, for example, the test scores would be the dependent variable.

Independent Variable

The independent variable is the variable that the experimenter manipulates. In the previous example, the amount of sleep an individual gets would be the independent variable.

Hypothesis

A hypothesis is a tentative statement or a guess about the possible relationship between two or more variables. In looking at how sleep influences test scores, the researcher might hypothesize that people who get more sleep will perform better on a math test the following day. The purpose of the experiment, then, is to either support or reject this hypothesis.

Operational Definitions

Operational definitions are necessary when performing an experiment. When we say that something is an independent or dependent variable, we must have a very clear and specific definition of the meaning and scope of that variable.

Extraneous Variables

Extraneous variables are other variables that may also affect the outcome of an experiment. Types of extraneous variables include participant variables, situational variables, demand characteristics, and experimenter effects. In some cases, researchers can take steps to control for extraneous variables.

Demand Characteristics

Demand characteristics are subtle hints that indicate what an experimenter is hoping to find in a psychology experiment. This can sometimes cause participants to alter their behavior, which can affect the results of the experiment.

Intervening Variables

Intervening variables are factors that can affect the relationship between two other variables. 

Confounding Variables

Confounding variables are variables that can affect the dependent variable, but that experimenters cannot control for. Confounding variables can make it difficult to determine if the effect was due to changes in the independent variable or if the confounding variable may have played a role.

The Experimental Process

Psychologists, like other scientists, use the scientific method when conducting an experiment. The scientific method is a set of procedures and principles that guide how scientists develop research questions, collect data, and come to conclusions.

The five basic steps of the experimental process are:

  • Identifying a problem to study
  • Devising the research protocol
  • Conducting the experiment
  • Analyzing the data collected
  • Sharing the findings (usually in writing or via presentation)

Most psychology students are expected to use the experimental method at some point in their academic careers. Learning how to conduct an experiment is important to understanding how psychologists prove and disprove theories in this field.

Types of Experiments

There are a few different types of experiments that researchers might use when studying psychology. Each has pros and cons depending on the participants being studied, the hypothesis, and the resources available to conduct the research.

Lab Experiments

Lab experiments are common in psychology because they allow experimenters more control over the variables. These experiments can also be easier for other researchers to replicate. The drawback of this research type is that what takes place in a lab is not always what takes place in the real world.

Field Experiments

Sometimes researchers opt to conduct their experiments in the field. For example, a social psychologist interested in researching prosocial behavior might have a person pretend to faint and observe how long it takes onlookers to respond.

This type of experiment can be a great way to see behavioral responses in realistic settings. But it is more difficult for researchers to control the many variables existing in these settings that could potentially influence the experiment's results.

Quasi-Experiments

While lab experiments are known as true experiments, researchers can also utilize a quasi-experiment. Quasi-experiments are often referred to as natural experiments because the researchers do not have true control over the independent variable.

A researcher looking at personality differences and birth order, for example, is not able to manipulate the independent variable in the situation (personality traits). Participants also cannot be randomly assigned because they naturally fall into pre-existing groups based on their birth order.

So why would a researcher use a quasi-experiment? This is a good choice in situations where scientists are interested in studying phenomena in natural, real-world settings. It's also beneficial if there are limits on research funds or time.

Field experiments can be either quasi-experiments or true experiments.

Examples of the Experimental Method in Use

The experimental method can provide insight into human thoughts and behaviors. Researchers use experiments to study many aspects of psychology.

A 2019 study investigated whether splitting attention between electronic devices and classroom lectures had an effect on college students' learning abilities. It found that dividing attention between these two mediums did not affect lecture comprehension. However, it did impact long-term retention of the lecture information, which affected students' exam performance.

An experiment used participants' eye movements and electroencephalogram (EEG) data to better understand cognitive processing differences between experts and novices. It found that experts had higher power in their theta brain waves than novices, suggesting that they also had a higher cognitive load.

A study looked at whether chatting online with a computer via a chatbot changed the positive effects of emotional disclosure often received when talking with an actual human. It found that the effects were the same in both cases.

One experimental study evaluated whether exercise timing impacts information recall. It found that engaging in exercise prior to performing a memory task helped improve participants' short-term memory abilities.

Sometimes researchers use the experimental method to get a bigger-picture view of psychological behaviors and impacts. For example, one 2018 study examined several lab experiments to learn more about the impact of various environmental factors on building occupant perceptions.

A 2020 study set out to determine the role that sensation-seeking plays in political violence. This research found that sensation-seeking individuals have a higher propensity for engaging in political violence. It also found that providing access to a more peaceful, yet still exciting political group helps reduce this effect.

Potential Pitfalls of the Experimental Method

While the experimental method can be a valuable tool for learning more about psychology and its impacts, it also comes with a few pitfalls.

Experiments may produce artificial results, which are difficult to apply to real-world situations. Similarly, researcher bias can impact the data collected. Results may not be able to be reproduced, meaning the results have low reliability.

Since humans are unpredictable and their behavior can be subjective, it can be hard to measure responses in an experiment. In addition, political pressure may alter the results. The subjects may not be a good representation of the population, or groups used may not be comparable.

And finally, since researchers are human too, results may be degraded due to human error.

What This Means For You

Every psychological research method has its pros and cons. The experimental method can help establish cause and effect, and it's also beneficial when research funds are limited or time is of the essence.

At the same time, it's essential to be aware of this method's pitfalls, such as how biases can affect the results or the potential for low reliability. Keeping these in mind can help you review and assess research studies more accurately, giving you a better idea of whether the results can be trusted or have limitations.

Colorado State University. Experimental and quasi-experimental research.

American Psychological Association. Experimental psychology studies humans and animals.

Mayrhofer R, Kuhbandner C, Lindner C. The practice of experimental psychology: An inevitably postmodern endeavor. Front Psychol. 2021;11:612805. doi:10.3389/fpsyg.2020.612805

Mandler G. A History of Modern Experimental Psychology.

Stanford University. Wilhelm Maximilian Wundt. Stanford Encyclopedia of Philosophy.

Britannica. Gustav Fechner.

Britannica. Hermann von Helmholtz.

Meyer A, Hackert B, Weger U. Franz Brentano and the beginning of experimental psychology: implications for the study of psychological phenomena today. Psychol Res. 2018;82:245-254. doi:10.1007/s00426-016-0825-7

Britannica. Georg Elias Müller.

McCambridge J, de Bruin M, Witton J. The effects of demand characteristics on research participant behaviours in non-laboratory settings: A systematic review. PLoS ONE. 2012;7(6):e39116. doi:10.1371/journal.pone.0039116

Laboratory experiments. In: Allen M, ed. The Sage Encyclopedia of Communication Research Methods. SAGE Publications, Inc. doi:10.4135/9781483381411.n287

Schweizer M, Braun B, Milstone A. Research methods in healthcare epidemiology and antimicrobial stewardship—quasi-experimental designs. Infect Control Hosp Epidemiol. 2016;37(10):1135-1140. doi:10.1017/ice.2016.117

Glass A, Kang M. Dividing attention in the classroom reduces exam performance. Educ Psychol. 2019;39(3):395-408. doi:10.1080/01443410.2018.1489046

Keskin M, Ooms K, Dogru AO, De Maeyer P. Exploring the cognitive load of expert and novice map users using EEG and eye tracking. ISPRS Int J Geo-Inf. 2020;9(7):429. doi:10.3390/ijgi9070429

Ho A, Hancock J, Miner A. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. J Commun. 2018;68(4):712-733. doi:10.1093/joc/jqy026

Haynes IV J, Frith E, Sng E, Loprinzi P. Experimental effects of acute exercise on episodic memory function: Considerations for the timing of exercise. Psychol Rep. 2018;122(5):1744-1754. doi:10.1177/0033294118786688

Torresin S, Pernigotto G, Cappelletti F, Gasparella A. Combined effects of environmental factors on human perception and objective performance: A review of experimental laboratory works. Indoor Air. 2018;28(4):525-538. doi:10.1111/ina.12457

Schumpe BM, Belanger JJ, Moyano M, Nisa CF. The role of sensation seeking in political violence: An extension of the significance quest theory. J Personal Social Psychol. 2020;118(4):743-761. doi:10.1037/pspp0000223

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."



Experimental Design – Types, Methods, Guide


Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.
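For illustration, here is a minimal sketch of completely randomized assignment in Python (the participant IDs are hypothetical):

    import random

    participants = [f"P{i:02d}" for i in range(1, 11)]  # ten hypothetical subjects

    random.shuffle(participants)                 # randomize the order
    midpoint = len(participants) // 2
    treatment_group = participants[:midpoint]    # first half receives the treatment
    control_group = participants[midpoint:]      # second half is the control

    print("Treatment:", treatment_group)
    print("Control:  ", control_group)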

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.
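A minimal sketch of the blocked version, assuming age band is the blocking characteristic (all names and bands are hypothetical):

    import random
    from collections import defaultdict

    # Hypothetical participants, each with a blocking characteristic (age band).
    participants = [("P01", "18-30"), ("P02", "18-30"), ("P03", "31-50"),
                    ("P04", "31-50"), ("P05", "51+"), ("P06", "51+")]

    # Group participants into blocks by the blocking characteristic.
    blocks = defaultdict(list)
    for pid, band in participants:
        blocks[band].append(pid)

    # Within each block, shuffle and alternate assignment so that both
    # treatment arms are represented in every block.
    assignment = {}
    for band, members in blocks.items():
        random.shuffle(members)
        for i, pid in enumerate(members):
            assignment[pid] = "treatment" if i % 2 == 0 else "control"

    print(assignment)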

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.
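A minimal sketch of full counterbalancing in Python, rotating three hypothetical conditions across all possible presentation orders:

    from itertools import permutations

    conditions = ["A", "B", "C"]  # three hypothetical treatments

    # All possible presentation orders; cycling through them balances how
    # often each condition appears in each serial position.
    orders = list(permutations(conditions))

    participants = [f"P{i:02d}" for i in range(1, 7)]
    schedule = {pid: orders[i % len(orders)] for i, pid in enumerate(participants)}

    for pid, order in schedule.items():
        print(pid, "->", " then ".join(order))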

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.
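As a toy illustration only (real reaction-time tasks use specialized software with precise stimulus timing), here is a console-based sketch in Python:

    import random
    import time

    # Wait a random interval, then time how quickly the participant
    # presses Enter after the prompt appears.
    time.sleep(random.uniform(1.0, 3.0))
    start = time.perf_counter()
    input("Press Enter as fast as you can!")
    reaction_time = time.perf_counter() - start
    print(f"Reaction time: {reaction_time:.3f} seconds")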

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
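A minimal sketch using Python's built-in statistics module on invented scores:

    import statistics

    scores = [72, 85, 85, 90, 64, 78, 85, 69]  # hypothetical test scores

    print("mean:  ", statistics.mean(scores))
    print("median:", statistics.median(scores))
    print("mode:  ", statistics.mode(scores))
    print("range: ", max(scores) - min(scores))
    print("stdev: ", statistics.stdev(scores))  # sample standard deviation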

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
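A minimal one-way ANOVA sketch on invented scores, assuming SciPy is installed:

    from scipy import stats  # assumed to be installed

    # Hypothetical posttest scores for three treatment groups.
    group_a = [82, 79, 88, 91, 76]
    group_b = [74, 71, 69, 78, 73]
    group_c = [85, 89, 92, 80, 87]

    f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    # A small p-value suggests at least one group mean differs.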

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
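A minimal simple linear regression sketch on invented data, assuming SciPy is installed (it echoes the earlier sleep-and-test-scores example):

    from scipy import stats  # assumed to be installed

    hours_slept = [4, 5, 6, 6, 7, 8, 8, 9]          # hypothetical predictor
    test_scores = [61, 68, 70, 75, 74, 83, 86, 88]  # hypothetical outcome

    result = stats.linregress(hours_slept, test_scores)
    print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
    print(f"r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")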

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.
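A minimal random-intercept sketch, assuming pandas and statsmodels are installed; the data are invented and far smaller than a real multilevel study would require:

    import pandas as pd
    import statsmodels.formula.api as smf  # pandas and statsmodels assumed installed

    # Invented toy data: students nested within schools (real studies
    # need many more groups for stable estimates).
    df = pd.DataFrame({
        "school":  ["S1"] * 4 + ["S2"] * 4 + ["S3"] * 4,
        "treated": [0, 0, 1, 1] * 3,
        "score":   [70, 72, 78, 80, 65, 66, 71, 74, 75, 77, 84, 86],
    })

    # Random intercept per school; fixed effect of the treatment.
    model = smf.mixedlm("score ~ treated", df, groups=df["school"]).fit()
    print(model.summary())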

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture: Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology: Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering: Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education: Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing: Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research: A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct experimental research (a minimal end-to-end sketch in code follows the list):

  • Identify a Research Question: Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment: Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment: Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, it is accepted; if not, it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.
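The following sketch simulates these steps end to end in Python (the effect size, scores, and sample size are invented for illustration; SciPy is assumed to be installed):

    import random
    from scipy import stats  # assumed to be installed

    random.seed(42)  # reproducible illustration

    # Steps 1-2 (hypothetical): does a tutoring program raise test scores?
    # Steps 3-5: between-subjects design, 20 participants randomly assigned.
    participants = list(range(20))
    random.shuffle(participants)
    treatment, control = participants[:10], participants[10:]

    # Step 6: "conduct" the experiment by simulating scores with a true +8 effect.
    treatment_scores = [random.gauss(83, 5) for _ in treatment]
    control_scores = [random.gauss(75, 5) for _ in control]

    # Steps 7-8: analyze with a two-sample t-test and draw a conclusion.
    t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    print("Hypothesis supported" if p_value < 0.05 else "Hypothesis not supported")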

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication: Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision: Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability: If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality: Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias: Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time: Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias: Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility: Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



10 Experimental research

Experimental research—often considered to be the ‘gold standard’ in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality), due to its ability to link cause and effect through treatment manipulation while controlling for the spurious effects of extraneous variables.

Experimental research is best suited for explanatory research—rather than for descriptive or exploratory research—where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments, conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments are conducted in field settings such as in a real organisation, and are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli, called a treatment (the treatment group), while other subjects are not given such a stimulus (the control group). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case, there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a certain medical condition like dementia, a sample of dementia patients may be randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (the control group). Here, the first two groups are experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than that of the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine if the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures, while those conducted after the treatment are posttest measures.

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.

Instrumentation threat, which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat—also called regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.

Two-group experimental designs


Pretest-posttest control group design. In this design, subjects are randomly assigned to treatment and control groups, subjected to an initial (pretest) measurement of the dependent variables of interest, the treatment group is administered a treatment (representing the independent variable of interest), and the dependent variables measured again (posttest). The notation of this design is shown in Figure 10.1.

Figure 10.1: Pretest-posttest control group design

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.

Posttest-only control group design. This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.

Figure 10.2: Posttest-only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.

Covariance design. In this design, a pretest measure is taken, but of a covariate (an extraneous variable that may influence the dependent variable) rather than of the dependent variable itself.

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups:

\[E = (O_{1} - O_{2})\,.\]

Due to the presence of covariates, the right statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to the controlling of covariates. Covariance designs can also be extended to pretest-posttest control group designs.
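As an illustration, here is a minimal ANCOVA sketch, assuming pandas and statsmodels are installed (the scores are invented):

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf  # pandas and statsmodels assumed installed

    # Invented pretest (covariate) and posttest scores for two groups.
    df = pd.DataFrame({
        "group": ["treatment"] * 5 + ["control"] * 5,
        "pre":   [52, 48, 55, 60, 50, 51, 49, 57, 58, 53],
        "post":  [68, 63, 70, 75, 66, 58, 55, 63, 64, 60],
    })

    # ANCOVA: model the posttest from group membership while adjusting
    # for the pretest covariate.
    model = smf.ols("post ~ C(group) + pre", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))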

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need a design with four or more groups. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor, and each subdivision of a factor is called a level. Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

The most basic factorial design is a 2 × 2 factorial design, with two factors of two levels each. For instance, you may want to compare learning outcomes across instructional type (e.g., in-class versus online instruction) and instructional time (one and a half versus three hours per week), giving four treatment groups in total.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for three hours/week of instructional time than for one and a half hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that when interaction effects are present, they dominate and make main effects irrelevant; it is not meaningful to interpret main effects if interaction effects are significant.
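A minimal numeric sketch of how main and interaction effects are computed from the four cell means of this 2 × 2 example (the means are invented):

    # Invented mean learning outcomes for the 2 x 2 example:
    # instructional type (in-class / online) x instructional time (1.5 / 3 hrs/week).
    cell_means = {
        ("in-class", 1.5): 70.0, ("in-class", 3.0): 82.0,
        ("online", 1.5): 68.0, ("online", 3.0): 71.0,
    }

    # Simple effects of instructional time within each instructional type.
    effect_inclass = cell_means[("in-class", 3.0)] - cell_means[("in-class", 1.5)]
    effect_online = cell_means[("online", 3.0)] - cell_means[("online", 1.5)]

    # Main effect of time: the simple effects averaged across types.
    print("main effect of instructional time:", (effect_inclass + effect_online) / 2)

    # Interaction: the simple effects differ across types (difference of differences).
    print("interaction effect:", effect_inclass - effect_online)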

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised blocks design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

Randomised blocks design

Solomon four-group design. In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs, but not in posttest-only designs. The design notation is shown in Figure 10.6.

Solomon four-group design

Switched replication design. This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Switched replication design

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having a better teacher in a previous semester, which introduces the possibility of selection bias. Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats, such as the selection-maturation threat (the treatment and control groups maturing at different rates), the selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), the selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), the selection-instrumentation threat (the treatment and control groups responding differently to the measurement), the selection-testing threat (the treatment and control groups responding differently to the pretest), and the selection-mortality threat (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Non-equivalent groups design (NEGD). Many true experimental designs have quasi-experimental counterparts created by omitting random assignment. For instance, the quasi-experimental version of the pretest-posttest control group design, in which intact groups (such as existing class sections) serve as the treatment and control groups, is called the non-equivalent groups design (NEGD), because the two groups cannot be assumed to be equivalent.

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design. This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol and those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.

RD design

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
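
One common way to analyse an RD design (not spelled out in the text above, so treat this as a sketch under assumptions) is to regress the posttest score on a treatment indicator plus the assignment score centred at the cut-off; the coefficient on the indicator estimates the jump, or discontinuity, at the cut-off. The data, cut-off, and effect size below are all invented for illustration.

```python
# A minimal sketch of a common regression-discontinuity analysis
# on simulated data; the cut-off and effect size are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n, cutoff = 200, 40
pre = rng.uniform(0, 100, n)               # preprogram assignment score
treat = (pre < cutoff).astype(int)         # below cut-off -> remedial program
post = 0.9 * pre + 10 * treat + rng.normal(0, 5, n)

df = pd.DataFrame({"post": post, "treat": treat, "centred": pre - cutoff})
model = smf.ols("post ~ treat + centred", data=df).fit()
# The coefficient on `treat` estimates the discontinuity at the cut-off.
print(model.params["treat"])
```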

Proxy pretest design. This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Proxy pretest design

Separate pretest-posttest samples design. This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine changes in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data are not available from the same subjects.

Separate pretest-posttest samples design

An interesting variation of the non-equivalent dependent variable (NEDV) design is a pattern-matching NEDV design, which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique—based on the degree of correspondence between theoretical and observed patterns—is a powerful way of alleviating internal validity concerns in the original NEDV design.

NEDV design

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’etre of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to check for the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


6.2 Experimental Design

Learning Objectives

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it.
  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment, each participant is tested in only one condition. For example, a researcher with a sample of 100 college students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assign participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called random assignment, which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
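
A minimal sketch of these two strict procedures, with the coin flip and the random integer implemented in Python; the participant counts are arbitrary.

```python
# A minimal sketch of strict random assignment: each participant's
# condition is decided independently with equal probability.
import random

def assign_two_conditions(n_participants):
    # "Coin flip": heads -> Condition A, tails -> Condition B.
    return [random.choice(["A", "B"]) for _ in range(n_participants)]

def assign_three_conditions(n_participants):
    # Random integer 1-3 maps to Conditions A, B, C.
    labels = {1: "A", 2: "B", 3: "C"}
    return [labels[random.randint(1, 3)] for _ in range(n_participants)]

print(assign_two_conditions(10))
print(assign_three_conditions(10))
```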

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization. In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. Table 6.2 “Block Randomization Sequence for Assigning Nine Participants to Three Conditions” shows such a sequence for assigning nine participants to three conditions; a code sketch of the procedure follows the table. The Research Randomizer website (http://www.randomizer.org) will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.

Table 6.2 Block Randomization Sequence for Assigning Nine Participants to Three Conditions

| Participant | Condition |
|---|---|
| 4 | B |
| 5 | C |
| 6 | A |
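
The following is a minimal sketch of how such a block randomization sequence might be generated; the condition labels and participant count mirror Table 6.2, and the output order will differ on each run.

```python
# A minimal sketch of block randomization: each block is a random
# permutation of all conditions, so group sizes stay balanced while
# order within blocks remains random.
import random

def block_randomization(conditions, n_participants):
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions)
        random.shuffle(block)        # random order within each block
        sequence.extend(block)
    return sequence[:n_participants]

# Nine participants, three conditions (cf. Table 6.2).
print(block_randomization(["A", "B", "C"], 9))
```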

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population takes the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Treatment and Control Conditions

Between-subjects experiments are often used to determine whether a treatment works. In psychological research, a treatment is any intervention meant to change people’s behavior for the better. This includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a treatment condition, in which they receive the treatment, or a control condition, in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial.

There are different types of control conditions. In a no-treatment control condition, participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks any active ingredient or element that should make it effective, and a placebo effect is a positive effect of such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or placing soap under the bedsheets to stop nighttime leg cramps—are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people’s expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008).

Placebo effects are interesting in their own right (see Note 6.28 “The Powerful Placebo”), but they also pose a serious problem for researchers who want to determine whether a treatment works. Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions”) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

Figure 6.2 Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions

Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions

Fortunately, there are several solutions to this problem. One is to include a placebo control condition, in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This is what is shown by a comparison of the two outer bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions”.

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a waitlist control condition, in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999). There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002). The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).


Research has shown that patients with osteoarthritis of the knee who receive a “sham surgery” experience reductions in pain and improvement in knee function similar to those of patients who receive a real surgery.

Army Medicine – Surgery – CC BY 2.0.

Within-Subjects Experiments

In a within-subjects experiment, each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive and an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in carryover effects. A carryover effect is an effect of being tested in one condition on participants’ behavior in later conditions. One type of carryover effect is a practice effect, where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect, where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This is called a context effect. For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing, which means testing different participants in different orders. For example, some participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and others would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of randomly assigning participants to conditions, they are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
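
A minimal sketch of counterbalancing along these lines: enumerate every possible order of the conditions, then randomly assign participants to orders while keeping the orders equally represented. The condition labels follow the defendant example; the participant count is arbitrary.

```python
# A minimal sketch of counterbalancing: every order of the conditions
# is used, and participants are randomly assigned to orders.
import itertools
import random

conditions = ["attractive", "unattractive"]
orders = list(itertools.permutations(conditions))  # 2 orders; 3 conditions give 6

participants = [f"P{i}" for i in range(1, 13)]
random.shuffle(participants)                       # random assignment to orders
assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

for p, order in assignment.items():
    print(p, "->", " then ".join(order))
```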

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this, he asked one group of participants to rate how large the number 9 was on a 1-to-10 rating scale and another group to rate how large the number 221 was on the same 1-to-10 rating scale (Birnbaum, 1999). Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. There are many ways to determine the order in which the stimuli are presented, but one common way is to generate a different random order for each participant.
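
A minimal sketch of this simultaneous approach, generating a different random order of the 20 mixed stimuli for each participant; the defendant labels are illustrative.

```python
# A minimal sketch of a simultaneous within-subjects presentation:
# both stimulus types are mixed into one sequence, shuffled per participant.
import random

attractive = [f"attractive_{i}" for i in range(1, 11)]
unattractive = [f"unattractive_{i}" for i in range(1, 11)]

def sequence_for(participant_id):
    stimuli = attractive + unattractive   # all 20 defendants
    random.shuffle(stimuli)               # new random order for this participant
    return stimuli

print(sequence_for("P1")[:5])   # first five trials for one participant
```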

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often do exactly this.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or to orders of conditions in within-subjects experiments is a fundamental element of experimental research. Its purpose is to control extraneous variables so that they do not become confounding variables.
  • Experimental research on the effectiveness of a treatment requires both a treatment condition and a control condition, which can be a no-treatment control condition, a placebo control condition, or a waitlist control condition. Experimental treatments can also be compared with the best available alternative.

Discussion: For each of the following topics, list the pros and cons of a between-subjects and within-subjects design and decide which would be better.

  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g., dog ) are recalled better than abstract nouns (e.g., truth ).
Discussion: Imagine that an experiment shows that participants who receive psychodynamic therapy for a dog phobia improve more than participants in a no-treatment control group. Explain a fundamental problem with this research design and at least two ways that it might be corrected.

Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4 , 243–249.

Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347 , 81–88.

Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59 , 565–590.

Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician . Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans . Revised on 5 December 2022.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Frequently asked questions about experimental design

Step 1: Define your variables

You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

| Research question | Independent variable | Dependent variable |
|---|---|---|
| Phone use and sleep | Minutes of phone use before sleep | Hours of sleep per night |
| Temperature and soil respiration | Air temperature just above the soil surface | CO2 respired from soil |

Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.

| Research question | Extraneous variable | How to control it |
|---|---|---|
| Phone use and sleep | Natural variation in sleep patterns among individuals | Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group |
| Temperature and soil respiration | Soil moisture also affects respiration, and moisture can decrease with increasing temperature | Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots |

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

For the temperature and soil respiration example, we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Step 2: Write your hypothesis

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

| Research question | Null hypothesis (H0) | Alternate hypothesis (Ha) |
|---|---|---|
| Phone use and sleep | Phone use before sleep does not correlate with the amount of sleep a person gets. | Increasing phone use before sleep leads to a decrease in sleep. |
| Temperature and soil respiration | Air temperature does not correlate with soil respiration. | Increased air temperature leads to increased soil respiration. |

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

Step 3: Design your experimental treatments

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. In the soil-warming example, you could increase air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. In the phone-use example, you could treat phone use as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

Step 4: Assign your subjects to treatment groups

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.
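
As one illustration of how study size relates to statistical power, the sketch below uses the statsmodels power module to solve for the per-group sample size of a two-group comparison; the effect size, alpha level, and target power are illustrative assumptions, not recommendations from the text.

```python
# A minimal sketch of a power-based sample size calculation;
# the effect size, alpha, and target power are assumptions.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # medium effect (Cohen's d)
                                   alpha=0.05,
                                   power=0.8)
print(f"Participants needed per group: {math.ceil(n_per_group)}")  # round up
```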

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design.
  • A between-subjects design vs a within-subjects design.

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design, every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups (see the sketch after the table below).
| Research question | Completely randomised design | Randomised block design |
|---|---|---|
| Phone use and sleep | Subjects are all randomly assigned a level of phone use using a random number generator. | Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups. |
| Temperature and soil respiration | Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. | Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups. |

Sometimes randomisation isn’t practical or ethical, so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

| Research question | Between-subjects (independent measures) design | Within-subjects (repeated measures) design |
|---|---|---|
| Phone use and sleep | Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. | Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised. |
| Temperature and soil respiration | Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. | Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised. |

Step 5: Measure your dependent variable

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations. For example, to measure how many hours participants sleep in the phone-use study, you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Frequently asked questions about experimental design

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.


Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 3 September 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/


Experimentation in Scientific Research: Variables and controls in practice

by Anthony Carpi, Ph.D., Anne E. Egger, Ph.D.


Did you know that experimental design was developed more than a thousand years ago by a Middle Eastern scientist who studied light? All of us use a form of experimental research in our day to day lives when we try to find the spot with the best cell phone reception, try out new cooking recipes, and more. Scientific experiments are built on similar principles.

Experimentation is a research method in which one or more variables are consciously manipulated and the outcome or effect of that manipulation on other variables is observed.

Experimental designs often make use of controls that provide a measure of variability within a system and a check for sources of error.

Experimental methods are commonly applied to determine causal relationships or to quantify the magnitude of response of a variable.

Anyone who has used a cellular phone knows that certain situations require a bit of research: If you suddenly find yourself in an area with poor phone reception, you might move a bit to the left or right, walk a few steps forward or back, or even hold the phone over your head to get a better signal. While the actions of a cell phone user might seem obvious, the person seeking cell phone reception is actually performing a scientific experiment: consciously manipulating one component (the location of the cell phone) and observing the effect of that action on another component (the phone's reception). Scientific experiments are obviously a bit more complicated, and generally involve more rigorous use of controls, but they draw on the same type of reasoning that we use in many everyday situations. In fact, the earliest documented scientific experiments were devised to answer a very common everyday question: how vision works.

A brief history of experimental methods

Figure 1: Alhazen (965-ca.1039) as pictured on an Iraqi 10,000-dinar note

One of the first ideas regarding how human vision works came from the Greek philosopher Empedocles around 450 BCE. Empedocles reasoned that the Greek goddess Aphrodite had lit a fire in the human eye, and vision was possible because light rays from this fire emanated from the eye, illuminating objects around us. While a number of people challenged this proposal, the idea that light radiated from the human eye proved surprisingly persistent until around 1,000 CE, when a Middle Eastern scientist advanced our knowledge of the nature of light and, in so doing, developed a new and more rigorous approach to scientific research. Abū 'Alī al-Hasan ibn al-Hasan ibn al-Haytham, also known as Alhazen, was born in 965 CE in the Arabian city of Basra in what is present-day Iraq. He began his scientific studies in physics, mathematics, and other sciences after reading the works of several Greek philosophers. One of Alhazen's most significant contributions was a seven-volume work on optics titled Kitab al-Manazir (later translated to Latin as Opticae Thesaurus Alhazeni – Alhazen's Book of Optics). Beyond the contributions this book made to the field of optics, it was a remarkable work in that it based conclusions on experimental evidence rather than abstract reasoning – the first major publication to do so. Alhazen's contributions have proved so significant that his likeness was immortalized on the 2003 10,000-dinar note issued by Iraq (Figure 1).

Alhazen invested significant time studying light, color, shadows, rainbows, and other optical phenomena. Among this work was a study in which he stood in a darkened room with a small hole in one wall. Outside of the room, he hung two lanterns at different heights. Alhazen observed that the light from each lantern illuminated a different spot in the room, and each lighted spot formed a direct line with the hole and one of the lanterns outside the room. He also found that covering a lantern caused the spot it illuminated to darken, and exposing the lantern caused the spot to reappear. Thus, Alhazen provided some of the first experimental evidence that light does not emanate from the human eye but rather is emitted by certain objects (like lanterns) and travels from these objects in straight lines. Alhazen's experiment may seem simplistic today, but his methodology was groundbreaking: He developed a hypothesis based on observations of physical relationships (that light comes from objects), and then designed an experiment to test that hypothesis. Despite the simplicity of the method, Alhazen's experiment was a critical step in refuting the long-standing theory that light emanated from the human eye, and it was a major event in the development of modern scientific research methodology.


Experimentation as a scientific research method

Experimentation is one scientific research method, perhaps the most recognizable, in a spectrum of methods that also includes description, comparison, and modeling (see our Description, Comparison, and Modeling modules). While all of these methods share in common a scientific approach, experimentation is unique in that it involves the conscious manipulation of certain aspects of a real system and the observation of the effects of that manipulation. You could solve a cell phone reception problem by walking around a neighborhood until you see a cell phone tower, observing other cell phone users to see where those people who get the best reception are standing, or looking on the web for a map of cell phone signal coverage. All of these methods could also provide answers, but by moving around and testing reception yourself, you are experimenting.

Variables: Independent and dependent

In the experimental method, a condition or a parameter, generally referred to as a variable, is consciously manipulated (often referred to as a treatment) and the outcome or effect of that manipulation is observed on other variables. Variables are given different names depending on whether they are the ones manipulated or the ones observed:

  • Independent variable refers to a condition within an experiment that is manipulated by the scientist.
  • Dependent variable refers to an event or outcome of an experiment that might be affected by the manipulation of the independent variable.

Scientific experimentation helps to determine the nature of the relationship between independent and dependent variables. While it is often difficult, or sometimes impossible, to manipulate a single variable in an experiment, scientists often work to minimize the number of variables being manipulated. For example, as we move from one location to another to get better cell reception, we likely change the orientation of our body, perhaps from south-facing to east-facing, or we hold the cell phone at a different angle. Which variable affected reception: location, orientation, or angle of the phone? It is critical that scientists understand which aspects of their experiment they are manipulating so that they can accurately determine the impacts of that manipulation. In order to constrain the possible outcomes of an experimental procedure, most scientific experiments use a system of controls.

  • Controls: Negative, positive, and placebos

In a controlled study, a scientist essentially runs two (or more) parallel and simultaneous experiments: a treatment group, in which the effect of an experimental manipulation is observed on a dependent variable, and a control group, which uses all of the same conditions as the first with the exception of the actual treatment. Controls can fall into one of two groups: negative controls and positive controls.

In a negative control, the control group is exposed to all of the experimental conditions except for the actual treatment. The need to match all experimental conditions exactly is so great that, for example, in a trial for a new drug, the negative control group will be given a pill or liquid that looks exactly like the drug, except that it will not contain the drug itself, a control often referred to as a placebo. Negative controls allow scientists to measure the natural variability of the dependent variable(s), provide a means of measuring error in the experiment, and also provide a baseline to measure against the experimental treatment.

Some experimental designs also make use of positive controls. A positive control is run as a parallel experiment and generally involves the use of an alternative treatment that the researcher knows will have an effect on the dependent variable. For example, when testing the effectiveness of a new drug for pain relief, a scientist might administer a placebo treatment to one group of patients as a negative control, and a known treatment like aspirin to a separate group of individuals as a positive control, since the pain-relieving effects of aspirin are well documented. In both cases, the controls allow scientists to quantify background variability and reject alternative hypotheses that might otherwise explain the effect of the treatment on the dependent variable.
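
To make this logic concrete, here is a minimal sketch (not from the original text; all group sizes and scores are invented) of comparing a hypothetical new drug against both a placebo (negative control) and aspirin (positive control) using a two-sample t-test from scipy:

```python
# All group sizes and scores below are invented for illustration;
# scipy's two-sample t-test is used as the comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated pain-relief scores (higher = more relief) for three groups.
placebo  = rng.normal(loc=10, scale=3, size=30)  # negative control
aspirin  = rng.normal(loc=18, scale=3, size=30)  # positive control (known drug)
new_drug = rng.normal(loc=20, scale=3, size=30)  # experimental treatment

# Negative control: does the new drug beat the placebo baseline?
_, p_neg = stats.ttest_ind(new_drug, placebo)

# Positive control: if aspirin does not beat placebo here, something is
# wrong with the experimental setup itself.
_, p_pos = stats.ttest_ind(aspirin, placebo)

print(f"new drug vs placebo: p = {p_neg:.4f}")
print(f"aspirin  vs placebo: p = {p_pos:.4f}")
```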

  • Experimentation in practice: The case of Louis Pasteur

Well-controlled experiments generally provide strong evidence of causality, demonstrating whether the manipulation of one variable causes a response in another variable. For example, as early as the 6th century BCE, Anaximander, a Greek philosopher, speculated that life could be formed from a mixture of sea water, mud, and sunlight. The idea probably stemmed from the observation of worms, mosquitoes, and other insects "magically" appearing in mudflats and other shallow areas. While the suggestion was challenged on a number of occasions, the idea that living microorganisms could be spontaneously generated from air persisted until the middle of the 19th century.

In the 1750s, John Needham, an English clergyman and naturalist, claimed to have proved that spontaneous generation does occur when he showed that microorganisms flourished in certain foods, such as soup broth, even after they had been briefly boiled and covered. Several years later, the Italian abbot and biologist Lazzaro Spallanzani boiled soup broth for over an hour and then placed bowls of this soup in different conditions, sealing some and leaving others exposed to air. Spallanzani found that microorganisms grew in the soup exposed to air but were absent from the sealed soup. He therefore challenged Needham's conclusions, hypothesized that microorganisms suspended in air settled onto the exposed soup but not the sealed soup, and rejected the idea of spontaneous generation.

Needham countered, arguing that the growth of bacteria in the soup was not due to microbes settling onto the soup from the air, but rather because spontaneous generation required contact with an intangible "life force" in the air itself. He proposed that Spallanzani's extensive boiling destroyed the "life force" present in the soup, preventing spontaneous generation in the sealed bowls but allowing air to replenish the life force in the open bowls. For several decades, scientists continued to debate the spontaneous generation theory of life, with support for the theory coming from several notable scientists, including Félix Pouchet and Henry Bastian. Pouchet, Director of the Rouen Museum of Natural History in France, and Bastian, a well-known British bacteriologist, argued that living organisms could spontaneously arise from chemical processes such as fermentation and putrefaction. The debate became so heated that in 1860 the French Academy of Sciences established the Alhumbert Prize of 2,500 francs, to be awarded to the first person who could conclusively resolve the conflict. In 1864, Louis Pasteur achieved that result with a series of well-controlled experiments, and in doing so claimed the Alhumbert Prize.

Pasteur prepared for his experiments by studying the work of others who came before him. In fact, in April 1861 Pasteur wrote to Pouchet to obtain a research description that Pouchet had published. In this letter, Pasteur writes:

Paris, April 3, 1861

Dear Colleague, The difference of our opinions on the famous question of spontaneous generation does not prevent me from esteeming highly your labor and praiseworthy efforts... The sincerity of these sentiments...permits me to have recourse to your obligingness in full confidence. I read with great care everything that you write on the subject that occupies both of us. Now, I cannot obtain a brochure that I understand you have just published.... I would be happy to have a copy of it because I am at present editing the totality of my observations, where naturally I criticize your assertions.

L. Pasteur (Porter, 1961)

Pasteur received the brochure from Pouchet several days later and went on to conduct his own experiments. In these, he repeated Spallanzani's method of boiling soup broth, but he divided the broth into portions and exposed these portions to different controlled conditions. Some broth was placed in flasks that had straight necks that were open to the air, some broth was placed in sealed flasks that were not open to the air, and some broth was placed into a specially designed set of swan-necked flasks, in which the broth would be open to the air but the air would have to travel a curved path before reaching the broth, thus preventing anything that might be present in the air from simply settling onto the soup (Figure 2). Pasteur then observed the response of the dependent variable (the growth of microorganisms) in response to the independent variable (the design of the flask). Pasteur's experiments contained both positive controls (samples in the straight-necked flasks that he knew would become contaminated with microorganisms) and negative controls (samples in the sealed flasks that he knew would remain sterile). If spontaneous generation did indeed occur upon exposure to air, Pasteur hypothesized, microorganisms would be found in both the swan-necked flasks and the straight-necked flasks, but not in the sealed flasks. Instead, Pasteur found that microorganisms appeared in the straight-necked flasks, but not in the sealed flasks or the swan-necked flasks.

Figure 2: Pasteur's drawings of the flasks he used (Pasteur, 1861). Fig. 25 D, C, and B (top) show various sealed flasks (negative controls); Fig. 26 (bottom right) illustrates a straight-necked flask directly open to the atmosphere (positive control); and Fig. 25 A (bottom left) illustrates the specially designed swan-necked flask (treatment group).

By using controls and replicating his experiment (he used more than one of each type of flask), Pasteur was able to answer many of the questions that still surrounded the issue of spontaneous generation. Pasteur said of his experimental design, "I affirm with the most perfect sincerity that I have never had a single experiment, arranged as I have just explained, which gave me a doubtful result" (Porter, 1961). Pasteur's work helped refute the theory of spontaneous generation – his experiments showed that air alone was not the cause of bacterial growth in the flask, and his research supported the hypothesis that live microorganisms suspended in air could settle onto the broth in open-necked flasks via gravity.

  • Experimentation across disciplines

Experiments are used across all scientific disciplines to investigate a multitude of questions. In some cases, scientific experiments are used for exploratory purposes, in which the scientist does not know what the dependent variable is. In this type of experiment, the scientist will manipulate an independent variable and observe what the effect of the manipulation is in order to identify a dependent variable (or variables). Exploratory experiments are sometimes used in nutritional biology, when scientists probe the function and purpose of dietary nutrients. In one approach, a scientist will expose one group of animals to a normal diet, and a second group to a similar diet except that it is lacking a specific vitamin or nutrient. The researcher will then observe the two groups to see what specific physiological changes or medical problems arise in the group lacking the nutrient being studied.

Scientific experiments are also commonly used to quantify the magnitude of a relationship between two or more variables. For example, in the fields of pharmacology and toxicology, scientific experiments are used to determine the dose-response relationship of a new drug or chemical. In these approaches, researchers perform a series of experiments in which a population of organisms, such as laboratory mice, is separated into groups and each group is exposed to a different amount of the drug or chemical of interest. The analysis of the data that result from these experiments (see our Data Analysis and Interpretation module) involves comparing the degree of the organism's response to the dose of the substance administered.
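
As a minimal sketch of this kind of analysis (all doses and response fractions below are invented), a sigmoidal dose-response curve can be fit with scipy to estimate the dose that produces a response in half the subjects (the ED50):

```python
# Invented doses and response fractions; scipy's curve_fit estimates the
# parameters of a Hill-type sigmoid from grouped dose-response data.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, ed50, slope):
    """Fraction of subjects responding at a given dose."""
    return dose**slope / (ed50**slope + dose**slope)

doses = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)        # mg/kg (invented)
response = np.array([0.02, 0.05, 0.20, 0.45, 0.70, 0.90, 0.97])  # fraction responding

(ed50, slope), _ = curve_fit(hill, doses, response, p0=[10, 1])
print(f"estimated ED50 = {ed50:.1f} mg/kg, Hill slope = {slope:.2f}")
```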

In this context, experiments can provide additional evidence to complement other research methods. For example, in the 1950s a great debate ensued over whether or not the chemicals in cigarette smoke cause cancer. Several researchers had conducted comparative studies (see our Comparison in Scientific Research module) that indicated that patients who smoked had a higher probability of developing lung cancer when compared to nonsmokers. Comparative studies differ slightly from experimental methods in that you do not consciously manipulate a variable; rather you observe differences between two or more groups depending on whether or not they fall into a treatment or control group. Cigarette companies and lobbyists criticized these studies, suggesting that the relationship between smoking and lung cancer was coincidental. Several researchers noted the need for a clear dose-response study; however, the difficulties in getting cigarette smoke into the lungs of laboratory animals prevented this research. In the mid-1950s, Ernest Wynder and colleagues had an ingenious idea: They condensed the chemicals from cigarette smoke into a liquid and applied this in various doses to the skin of groups of mice. The researchers published data from a dose-response experiment of the effect of tobacco smoke condensate on mice (Wynder et al., 1957).

As seen in Figure 3, the researchers found a positive relationship between the amount of condensate applied to the skin of mice and the number of cancers that developed. The graph shows the results of a study in which different groups of mice were exposed to increasing amounts of cigarette tar. The black dots indicate the percentage of each sample group of mice that developed cancer for a given amount of cigarette smoke "condensate" applied to their skin. The vertical lines are error bars, showing the amount of uncertainty. The graph shows generally increasing cancer rates with greater exposure. This study was one of the first pieces of experimental evidence in the cigarette smoking debate, and it helped strengthen the case for cigarette smoke as the causative agent in lung cancer in smokers.

Figure 3: Percentage of mice with cancer versus the amount of cigarette smoke "condensate" applied to their skin (source: Wynder et al., 1957).

Sometimes experimental approaches and other research methods are not clearly distinct, or scientists may even use multiple research approaches in combination. For example, at 1:52 a.m. EDT on July 4, 2005, scientists with the National Aeronautics and Space Administration (NASA) conducted a study in which a 370 kg spacecraft named Deep Impact was purposely slammed into passing comet Tempel 1. A nearby spacecraft observed the impact and radioed data back to Earth. The research was partially descriptive in that it documented the chemical composition of the comet, but it was also partly experimental in that the effect of slamming the Deep Impact probe into the comet on the volatilization of previously undetected compounds, such as water, was assessed (A'Hearn et al., 2005). It is particularly common that experimentation and description overlap: Another example is Jane Goodall's research on the behavior of chimpanzees, which can be read in our Description in Scientific Research module.

  • Limitations of experimental methods

Figure 4: An image of comet Tempel 1 67 seconds after collision with the Deep Impact impactor. Image credit: NASA/JPL-Caltech/UMD http://deepimpact.umd.edu/gallery/HRI_937_1.html

While scientific experiments provide invaluable data regarding causal relationships, they do have limitations. One criticism of experiments is that they do not necessarily represent real-world situations. In order to clearly identify the relationship between an independent variable and a dependent variable, experiments are designed so that many other contributing variables are fixed or eliminated. For example, in an experiment designed to quantify the effect of vitamin A dose on the metabolism of beta-carotene in humans, Shawna Lemke and colleagues had to precisely control the diet of their human volunteers (Lemke et al., 2003). They asked their participants to limit their intake of foods rich in vitamin A and further asked that they maintain a precise log of all foods eaten for 1 week prior to their study. At the time of their study, they controlled their participants' diet by feeding them all the same meals, described in the methods section of their research article in this way:

Meals were controlled for time and content on the dose administration day. Lunch was served at 5.5 h postdosing and consisted of a frozen dinner (Enchiladas, Amy's Kitchen, Petaluma, CA), a blueberry bagel with jelly, 1 apple and 1 banana, and a large chocolate chunk cookie (Pepperidge Farm). Dinner was served 10.5 h post dose and consisted of a frozen dinner (Chinese Stir Fry, Amy's Kitchen) plus the bagel and fruit taken for lunch.

While this is an important aspect of making an experiment manageable and informative, it is often not representative of the real world, in which many variables may change at once, including the foods you eat. Still, experimental research is an excellent way of determining relationships between variables that can be later validated in real-world settings through descriptive or comparative studies.

Design is critical to the success or failure of an experiment. Slight variations in the experimental set-up could strongly affect the outcome being measured. For example, during the 1950s, a number of experiments were conducted to evaluate the toxicity in mammals of the metal molybdenum, using rats as experimental subjects. Unexpectedly, these experiments seemed to indicate that the type of cage the rats were housed in affected the toxicity of molybdenum. In response, G. Brinkman and Russell Miller set up an experiment to investigate this observation (Brinkman & Miller, 1961). Brinkman and Miller fed two groups of rats a normal diet that was supplemented with 200 parts per million (ppm) of molybdenum. One group of rats was housed in galvanized steel (steel coated with zinc to reduce corrosion) cages and the second group was housed in stainless steel cages. Rats housed in the galvanized steel cages suffered more from molybdenum toxicity than the other group: They had higher concentrations of molybdenum in their livers and lower blood hemoglobin levels. It was then shown that when the rats chewed on their cages, those housed in the galvanized metal cages absorbed zinc plated onto the metal bars, and zinc is now known to affect the toxicity of molybdenum. In order to control for zinc exposure, then, stainless steel cages needed to be used for all rats.

Scientists also have an obligation to adhere to ethical limits in designing and conducting experiments. During World War II, doctors working in Nazi Germany conducted many heinous experiments using human subjects. Among them was an experiment meant to identify effective treatments for hypothermia in humans, in which concentration camp prisoners were forced to sit in ice water or left naked outdoors in freezing temperatures and then re-warmed by various means. Many of the exposed victims froze to death or suffered permanent injuries. As a result of the Nazi experiments and other unethical research, strict scientific ethical standards have been adopted by the United States and other governments, and by the scientific community at large. Among other things, ethical standards (see our Scientific Ethics module) require that the benefits of research outweigh the risks to human subjects, and that those who participate do so voluntarily and only after they have been made fully aware of all the risks posed by the research. These guidelines have far-reaching effects: While the clearest indication of causation in the cigarette smoke and lung cancer debate would have been to design an experiment in which one group of people was asked to take up smoking and another group was asked to refrain from smoking, it would be highly unethical for a scientist to purposefully expose a group of healthy people to a suspected cancer-causing agent. As an alternative, comparative studies (see our Comparison in Scientific Research module) were initiated in humans, and experimental studies focused on animal subjects. The combination of these and other studies provided even stronger evidence of the link between smoking and lung cancer than either one method alone would have.

  • Experimentation in modern practice

Like all scientific research, the results of experiments are shared with the scientific community, are built upon, and inspire additional experiments and research. For example, once Alhazen established that light given off by objects enters the human eye, the natural question that was asked was "What is the nature of light that enters the human eye?" Two common theories about the nature of light were debated for many years. Sir Isaac Newton was among the principal proponents of a theory suggesting that light was made of small particles. The English naturalist Robert Hooke (who held the interesting title of Curator of Experiments at the Royal Society of London) supported a different theory stating that light was a type of wave, like sound waves. In 1801, Thomas Young conducted a now classic scientific experiment that helped resolve this controversy. Young, like Alhazen, worked in a darkened room and allowed light to enter only through a small hole in a window shade (Figure 5). Young refocused the beam of light with mirrors and split the beam with a paper-thin card. The split light beams were then projected onto a screen, and formed an alternating light and dark banding pattern – that was a sign that light was indeed a wave (see our Light I: Particle or Wave? module).

Figure 5: Young's split-light beam experiment helped clarify the wave nature of light.

Approximately 100 years later, in 1905, new experiments led Albert Einstein to conclude that light exhibits properties of both waves and particles. Einstein's dual wave-particle theory is now generally accepted by scientists.

Experiments continue to help refine our understanding of light even today. In addition to his wave-particle theory, Einstein also proposed that the speed of light was unchanging and absolute. Yet in 1998 a group of scientists led by Lene Hau showed that light could be slowed from its normal speed of 3 x 10^8 meters per second to a mere 17 meters per second with a special experimental apparatus (Hau et al., 1999). The series of experiments that began with Alhazen's work 1000 years ago has led to a progressively deeper understanding of the nature of light. Although the tools with which scientists conduct experiments may have become more complex, the principles behind controlled experiments are remarkably similar to those used by Pasteur and Alhazen hundreds of years ago.


Experimental Research

First Online: 25 February 2021

C. George Thomas, Kerala Agricultural University, Thrissur, Kerala, India

Experiments are part of the scientific method and help to decide the fate of two or more competing hypotheses or explanations of a phenomenon. The term 'experiment' derives from the Latin experiri, which means 'to try'. The knowledge that accrues from experiments differs from other types of knowledge in that it is always shaped by observation or experience. In other words, experiments generate empirical knowledge. In fact, the emphasis on experimentation in the sixteenth and seventeenth centuries for establishing causal relationships for various phenomena happening in nature heralded the resurgence of modern science from its roots in ancient philosophy, spearheaded by great Greek philosophers such as Aristotle.

The strongest arguments prove nothing so long as the conclusions are not verified by experience. Experimental science is the queen of sciences and the goal of all speculation. – Roger Bacon (1214–1294)




About this chapter

Thomas, C.G. (2021). Experimental Research. In: Research Methodology and Scientific Writing. Springer, Cham. https://doi.org/10.1007/978-3-030-64865-7_5



Experimental Research

Experimental research is commonly used in the sciences, including sociology, psychology, physics, chemistry, biology, and medicine.


It is a collection of research designs which use manipulation and controlled testing to understand causal processes. Generally, one or more variables are manipulated to determine their effect on a dependent variable.

The experimental method is a systematic and scientific approach to research in which the researcher manipulates one or more variables, and controls and measures any change in other variables.

Experimental Research is often used where:

  • There is time priority in a causal relationship (cause precedes effect)
  • There is consistency in a causal relationship (a cause will always lead to the same effect)
  • The magnitude of the correlation is great.

(Reference: en.wikipedia.org)

The term experimental research has a range of definitions. In the strict sense, experimental research is what we call a true experiment.

This is an experiment where the researcher manipulates one variable and controls/randomizes the rest of the variables. It has a control group, the subjects are randomly assigned between the groups, and the researcher only tests one effect at a time. It is also important to know what variable(s) you want to test and measure.

A very wide definition of experimental research, or a quasi-experiment, is research where the scientist actively influences something to observe the consequences. Most experiments tend to fall in between the strict and the wide definition.

A rule of thumb is that physical sciences, such as physics, chemistry and geology tend to define experiments more narrowly than social sciences, such as sociology and psychology, which conduct experiments closer to the wider definition.

Aims of Experimental Research

Experiments are conducted to be able to predict phenomena. Typically, an experiment is constructed to be able to explain some kind of causation. Experimental research is important to society: it helps us to improve our everyday lives.

Identifying the Research Problem

After deciding the topic of interest, the researcher tries to define the research problem. This helps the researcher to focus on a narrower research area and to study it appropriately. Defining the research problem also helps you to formulate a research hypothesis, which is tested against the null hypothesis.

The research problem is often operationalized to define how to measure it. The results will depend on the exact measurements that the researcher chooses, and the problem may be operationalized differently in another study to test the main conclusions of the study.

An ad hoc analysis is a hypothesis invented after testing is done to try to explain contrary evidence. A poor ad hoc analysis may be seen as the researcher's inability to accept that his/her hypothesis is wrong, while a great ad hoc analysis may lead to more testing and possibly a significant discovery.

Constructing the Experiment

There are various aspects to remember when constructing an experiment. Planning ahead ensures that the experiment is carried out properly and that the results reflect the real world in the best possible way.

Sampling Groups to Study

Sampling groups correctly is especially important when we have more than one condition in the experiment. One sample group often serves as a control group, whilst others are tested under the experimental conditions.

Deciding the sample groups can be done using many different sampling techniques. Population samples may be chosen by a number of methods, such as randomization, "quasi-randomization", and pairing.

Reducing sampling errors is vital for getting valid results from experiments. Researchers often adjust the sample size to minimize the chance of random errors.

Here are some common sampling techniques (a minimal sketch of a few of them follows the list):

  • probability sampling
  • non-probability sampling
  • simple random sampling
  • convenience sampling
  • stratified sampling
  • systematic sampling
  • cluster sampling
  • sequential sampling
  • disproportional sampling
  • judgmental sampling
  • snowball sampling
  • quota sampling
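
As promised above, here is a minimal sketch (hypothetical population and group labels) of two of these techniques, simple random sampling and stratified sampling, in plain Python:

```python
# A sketch of two sampling techniques from the list above, using only the
# Python standard library. The population (600 "male", 400 "female"
# members) is invented for illustration.
import random

random.seed(1)
population = [("male", i) for i in range(600)] + [("female", i) for i in range(400)]

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, k=100)

# Stratified sampling: sample each subgroup in proportion to its size,
# so the sample mirrors the population's 60/40 split.
stratified_sample = []
for group in ("male", "female"):
    stratum = [p for p in population if p[0] == group]
    quota = round(100 * len(stratum) / len(population))  # 60 and 40
    stratified_sample.extend(random.sample(stratum, k=quota))

print(len(simple_sample), len(stratified_sample))  # 100 100
```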

Creating the Design

The research design is chosen based on a range of factors. Important factors when choosing the design are feasibility, time, cost, ethics, measurement problems and what you would like to test. The design of the experiment is critical for the validity of the results.

Typical Designs and Features in Experimental Design

  • Pretest-Posttest Design: Check whether the groups are different before the manipulation starts, and measure the effect of the manipulation. Pretests sometimes influence the effect.
  • Control Group: Control groups are designed to measure research bias and measurement effects, such as the Hawthorne Effect or the Placebo Effect. A control group is a group not receiving the same manipulation as the experimental group. Experiments frequently have 2 conditions, but rarely more than 3 conditions at the same time.
  • Randomized Controlled Trials: Randomized sampling, comparison between an Experimental Group and a Control Group, and strict control/randomization of all other variables.
  • Solomon Four-Group Design: With two control groups and two experimental groups. Half the groups have a pretest and half do not. This tests both the effect itself and the effect of the pretest.
  • Between Subjects Design: Grouping participants into different conditions.
  • Within Subject Design: Participants take part in the different conditions. See also: Repeated Measures Design.
  • Counterbalanced Measures Design: Testing the effect of the order of treatments when no control group is available/ethical.
  • Matched Subjects Design: Matching participants to create similar experimental and control groups.
  • Double-Blind Experiment: Neither the researcher nor the participants know which is the control group. The results can be affected if the researcher or participants know this.
  • Bayesian Probability: Using Bayesian probability to "interact" with participants is a more "advanced" experimental design. It can be used for settings where there are many variables which are hard to isolate. The researcher starts with a set of initial beliefs and tries to adjust them to how participants have responded.

Pilot Study

It may be wise to first conduct a pilot-study or two before you do the real experiment. This ensures that the experiment measures what it should, and that everything is set up right.

Minor errors, which could potentially destroy the experiment, are often found during this process. With a pilot study, you can get information about errors and problems, and improve the design, before putting a lot of effort into the real experiment.

If the experiments involve humans, a common strategy is to first have a pilot study with someone involved in the research, but not too closely, and then arrange a pilot with a person who resembles the subject(s). Those two different pilots are likely to give the researcher good information about any problems in the experiment.

Conducting the Experiment

An experiment is typically carried out by manipulating a variable, called the independent variable, affecting the experimental group. The effect that the researcher is interested in, the dependent variable(s), is measured.

Identifying and controlling non-experimental factors that the researcher does not want to influence the effects is crucial to drawing a valid conclusion. This is often done by controlling variables, if possible, or randomizing variables to minimize effects that can be traced back to third variables. Researchers only want to measure the effect of the independent variable(s) when conducting an experiment, allowing them to conclude that this was the reason for the effect.
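
A minimal sketch of the random-allocation step (hypothetical participant IDs; standard library only) might look like this:

```python
# Shuffling before splitting spreads participant variables (age, ability,
# etc.) across the two conditions on average.
import random

random.seed(7)
participants = list(range(1, 41))  # 40 hypothetical participant IDs
random.shuffle(participants)

control_group = participants[:20]
experimental_group = participants[20:]

print(sorted(control_group))
print(sorted(experimental_group))
```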

Analysis and Conclusions

In quantitative research, the amount of data measured can be enormous. Data not prepared to be analyzed is called "raw data". The raw data is often summarized as something called "output data", which typically consists of one line per subject (or item). A cell of the output data is, for example, an average of an effect in many trials for a subject. The output data is used for statistical analysis, e.g. significance tests, to see if there really is an effect.
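
A minimal sketch of this raw-to-output pipeline (invented reaction-time data; numpy and scipy assumed) could look like the following:

```python
# Raw data: 50 trials per subject in each of two conditions. Output data:
# one mean per subject, which is what the significance test receives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
raw_a = rng.normal(500, 50, size=(12, 50))  # 12 subjects, condition A
raw_b = rng.normal(470, 50, size=(12, 50))  # 12 subjects, condition B

out_a = raw_a.mean(axis=1)  # one summary value per subject
out_b = raw_b.mean(axis=1)

t, p = stats.ttest_ind(out_a, out_b)
print(f"t = {t:.2f}, p = {p:.4f}")
```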

The aim of an analysis is to draw a conclusion, together with other observations. The researcher might generalize the results to a wider phenomenon if there is no indication of confounding variables "polluting" the results.

If the researcher suspects that the effect stems from a different variable than the independent variable, further investigation is needed to gauge the validity of the results. An experiment is often conducted because the scientist wants to know if the independent variable is having any effect upon the dependent variable. Correlation between variables is not proof of causation.

Experiments are more often quantitative than qualitative in nature, although qualitative experiments do occur.

Examples of Experiments

This website contains many examples of experiments. Some are not true experiments, but involve some kind of manipulation to investigate a phenomenon. Others fulfill most or all criteria of true experiments.

Here are some examples of scientific experiments:

Social Psychology

  • Stanley Milgram Experiment - Will people obey orders, even if clearly dangerous?
  • Asch Experiment - Will people conform to group behavior?
  • Stanford Prison Experiment - How do people react to roles? Will you behave differently?
  • Good Samaritan Experiment - Would You Help a Stranger? - Explaining Helping Behavior
  • Law Of Segregation - The Mendel Pea Plant Experiment
  • Transforming Principle - Griffith's Experiment about Genetics
  • Ben Franklin Kite Experiment - Struck by Lightning
  • J J Thomson Cathode Ray Experiment

Oskar Blakstad (Jul 10, 2008). Experimental Research. Retrieved Sep 03, 2024 from Explorable.com: https://explorable.com/experimental-research


Research Methods In Psychology

Saul McLeod, PhD


Olivia Guy-Evans, MSc


Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.

Hypotheses are statements predicting the results of an investigation, which can be verified or disproved by that investigation.

There are four types of hypotheses:
  • Null Hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written 'There will be no difference…'
  • Alternative Hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written 'There will be a difference…'

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other.

So, if a difference is found, the psychologist would accept the alternative hypothesis and reject the null. The opposite applies if no difference is found.
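
As a minimal sketch (invented scores; assumes a recent version of scipy, which accepts an `alternative` argument), the same data can be tested against a two-tailed or a one-tailed alternative hypothesis:

```python
# Invented scores; scipy >= 1.6 supports the `alternative` argument
# for directional (one-tailed) hypotheses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group_a = rng.normal(52, 10, 40)  # condition A scores (invented)
group_b = rng.normal(48, 10, 40)  # condition B scores (invented)

# Two-tailed: "there will be a difference" (direction unspecified).
_, p_two = stats.ttest_ind(group_a, group_b)

# One-tailed: "group A will score higher than group B".
_, p_one = stats.ttest_ind(group_a, group_b, alternative="greater")

print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```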

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.

A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which their findings can be applied to the larger population of which their sample was a part.

  • Volunteer sample: where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling: also known as convenience sampling, uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling: when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling: when a system is used to select participants, picking every Nth person from all possible participants, where N = the number of people in the research population / the number of people needed for the sample (see the sketch after this list).
  • Stratified sampling: when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling: when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling: when researchers are told to ensure the sample fits certain quotas; for example, they might be told to find 90 participants, with 30 of them being unemployed.
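
Here is the sketch referred to above: a minimal, illustrative implementation of systematic sampling using the N = population size / sample size rule (names and sizes are invented):

```python
# Illustrative names and sizes. N = population size / sample size needed;
# every Nth person is then selected.
population = [f"participant_{i}" for i in range(1, 101)]  # 100 people
needed = 20

n = len(population) // needed           # N = 100 / 20 = 5
sample = population[n - 1::n][:needed]  # every 5th person

print(len(sample), sample[:3])
```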

Experiments always have an independent and dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.

Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that arises when participants work out the aims of the research study and begin to behave in a certain way as a result.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants (see the sketch below).
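
A minimal sketch of counterbalancing (hypothetical participants and condition names) simply alternates the two possible orders across participants:

```python
# Two conditions, A and B: alternate the order (A then B, B then A)
# across participants so each order is used equally often, preventing
# order effects from favoring one condition.
participants = [f"p{i}" for i in range(1, 9)]
orders = [("A", "B"), ("B", "A")]

schedule = {p: orders[i % 2] for i, p in enumerate(participants)}
for p, order in schedule.items():
    print(p, "->", " then ".join(order))
```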

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

In a laboratory experiment, the researcher decides where the experiment will take place, at what time, with which participants, and in what circumstances, using a standardized procedure.

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as from the person concerned and also from their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.

Figure: Scatter plots illustrating positive, negative, and no correlation.

  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation.
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation.
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient. This is a value between -1 and +1, and the closer the score is to +1 or -1, the stronger the relationship between the variables. The value can be positive, e.g. 0.63, or negative, e.g. -0.63.
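
As a minimal sketch (invented paired measures; scipy assumed), Spearman's rho can be computed directly:

```python
# A noisy positive relationship between hours studied and exam score,
# assessed with Spearman's rho from scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
hours_studied = rng.uniform(0, 10, 30)
exam_score = 40 + 5 * hours_studied + rng.normal(0, 8, 30)

rho, p = stats.spearmanr(hours_studied, exam_score)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```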

Figure: Examples of strong, weak, and perfect positive and negative correlations, and of no correlation.

A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.

Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; the participant can raise whatever topics he/she feels are relevant and discuss them in their own way, and the interviewer poses follow-up questions in response to the participant's answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

The questionnaire's other practical advantages are that it is cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods:
  • Covert observation is where the researcher doesn't tell the participants they are being observed until after the study is complete. This method can raise ethical problems around deception and consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled: behavior is observed under controlled laboratory conditions (e.g., Bandura's Bobo doll study).
  • Natural: here, spontaneous behavior is recorded in a natural setting.
  • Participant: here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.
  • Non-participant (aka "fly on the wall"): the researcher does not have direct contact with the people being observed. The observation of participants' behavior is from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities (i.e. unusual things) or confusion in the information given to participants or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, then it is described as being reliable.

  • Test-retest reliability: assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability: the extent to which there is an agreement between two or more observers.
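
A minimal sketch of both checks (invented data; scipy and scikit-learn assumed, the latter providing Cohen's kappa as one common agreement measure):

```python
# Test-retest reliability as a correlation between two occasions, and
# inter-observer reliability as agreement between two observers' codes.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(9)

# Test-retest: the same 20 people measured on two occasions.
occasion_1 = rng.normal(100, 15, 20)
occasion_2 = occasion_1 + rng.normal(0, 5, 20)  # similar, but not identical
r, _ = stats.pearsonr(occasion_1, occasion_2)

# Inter-observer: two observers coding the same 12 behaviors.
observer_a = ["play", "rest", "play", "feed", "rest", "play",
              "feed", "rest", "play", "play", "rest", "feed"]
observer_b = ["play", "rest", "play", "feed", "play", "play",
              "feed", "rest", "play", "rest", "rest", "feed"]
kappa = cohen_kappa_score(observer_a, observer_b)

print(f"test-retest r = {r:.2f}, inter-observer kappa = {kappa:.2f}")
```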

Meta-Analysis

Meta-analysis is a statistical procedure used to combine and synthesize findings from multiple independent studies to estimate the average effect size for a particular research question.

Meta-analysis goes beyond traditional narrative reviews by using statistical methods to integrate the results of several studies, leading to a more objective appraisal of the evidence.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

  • Strengths: increases the conclusions' validity, as they are based on a wider range of studies.
  • Weaknesses: research designs in the included studies can vary, so they are not truly comparable.
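
The core meta-analytic step can be sketched minimally (invented per-study effect sizes; a simple fixed-effect model that weights each study by the inverse of its sampling variance):

```python
# Invented effect sizes and variances; the pooled estimate is an
# inverse-variance weighted average of the study effects.
import numpy as np

effect_sizes = np.array([0.30, 0.45, 0.25, 0.50])  # e.g. Cohen's d per study
variances = np.array([0.02, 0.05, 0.01, 0.08])     # sampling variance per study

weights = 1.0 / variances
pooled = np.sum(weights * effect_sizes) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.2f} (SE = {se:.2f})")
```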

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewer determines whether the article is accepted. The article may be: Accepted as it is, accepted with revisions, sent back to the author to revise and re-submit or rejected without the possibility of submission.

The editor makes the final decision on whether to accept or reject the research report, based on the reviewers’ comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal; in practice there are many problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing their work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that far more research and academic comment is being published without official peer review than before, though systems are evolving online in which everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g., reaction time or number of mistakes. It represents how much, how long, or how many of something there are. Tallies of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature; it can take the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : whether the test appears, ‘on the face of it’, to measure what it is supposed to measure. This is assessed by ‘eyeballing’ the measuring instrument or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In Psychology, we use p < 0.05 (as it strikes a balance between the risks of Type I and Type II errors), but p < 0.01 is used in tests where an error could cause harm, such as when introducing a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
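As a minimal illustration of this decision rule (invented scores; scipy assumed):

```python
# Hedged sketch: compare two invented groups with an independent t-test
# and apply the p < 0.05 decision rule described above.
from scipy import stats

control = [12, 14, 11, 15, 13, 12, 14, 13]
treatment = [16, 15, 17, 14, 18, 16, 15, 17]

t_stat, p_value = stats.ttest_ind(treatment, control)
alpha = 0.05  # p < 0.01 would be the stricter level for high-stakes tests

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: retain the null hypothesis")
```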

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. However, revealing the study’s aims may lead participants to guess them and change their behavior.
  • To deal with this, researchers can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that participants will fully understand what they are agreeing to.
  • Deception should only be used when approved by an ethics committee, as it involves deliberately misleading participants or withholding information. Participants should be fully debriefed after the study, but debriefing cannot turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • The right to withdraw can introduce bias, as those who stay tend to be more obedient, and some may not withdraw because they were given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • All participants should have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and should stop the study if any harm is suspected. However, harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record names but use numbers or false names, though this may not always be sufficient, as it is sometimes possible to work out who the participants were.


5.1 Experiment Basics

Learning Objectives

  • Explain what an experiment is and recognize examples of studies that are experiments and studies that are not experiments.
  • Distinguish between the manipulation of the independent variable and control of extraneous variables and explain the importance of each.
  • Recognize examples of confounding variables and explain how they affect the internal validity of a study.

What Is an Experiment?

As we saw earlier in the book, an experiment is a type of study designed specifically to answer the question of whether there is a causal relationship between two variables. In other words, whether changes in an independent variable cause a change in a dependent variable. Experiments have two fundamental features. The first is that the researchers manipulate, or systematically vary, the level of the independent variable. The different levels of the independent variable are called conditions. For example, in Darley and Latané’s experiment, the independent variable was the number of witnesses that participants believed to be present. The researchers manipulated this independent variable by telling participants that there were either one, two, or five other students involved in the discussion, thereby creating three conditions. A new researcher may easily confuse these terms by believing there are three independent variables in this situation (one, two, or five students involved in the discussion), but there is actually only one independent variable (number of witnesses) with three different levels or conditions (one, two, or five students).

The second fundamental feature of an experiment is that the researcher controls, or minimizes the variability in, variables other than the independent and dependent variable. These other variables are called extraneous variables. Darley and Latané tested all their participants in the same room, exposed them to the same emergency situation, and so on. They also randomly assigned their participants to conditions so that the three groups would be similar to each other to begin with. Notice that although the words manipulation and control have similar meanings in everyday language, researchers make a clear distinction between them: they manipulate the independent variable by systematically changing its levels and control other variables by holding them constant.

Manipulation of the Independent Variable

Again, to  manipulate  an independent variable means to change its level systematically so that different groups of participants are exposed to different levels of that variable, or the same group of participants is exposed to different levels at different times. For example, to see whether expressive writing affects people’s health, a researcher might instruct some participants to write about traumatic experiences and others to write about neutral experiences. As discussed earlier in this chapter, the different levels of the independent variable are referred to as  conditions , and researchers often give the conditions short descriptive names to make it easy to talk and write about them. In this case, the conditions might be called the “traumatic condition” and the “neutral condition.”

Notice that the manipulation of an independent variable must involve the active intervention of the researcher. Comparing groups of people who differ on the independent variable before the study begins is not the same as manipulating that variable. For example, a researcher who compares the health of people who already keep a journal with the health of people who do not keep a journal has not manipulated this variable and therefore has not conducted an experiment. This distinction  is important because groups that already differ in one way at the beginning of a study are likely to differ in other ways too. For example, people who choose to keep journals might also be more conscientious, more introverted, or less stressed than people who do not. Therefore, any observed difference between the two groups in terms of their health might have been caused by whether or not they keep a journal, or it might have been caused by any of the other differences between people who do and do not keep journals. Thus the active manipulation of the independent variable is crucial for eliminating potential alternative explanations for the results.

Of course, there are many situations in which the independent variable cannot be manipulated for practical or ethical reasons and therefore an experiment is not possible. For example, whether or not people have a significant early illness experience cannot be manipulated, making it impossible to conduct an experiment on the effect of early illness experiences on the development of hypochondriasis. This caveat does not mean it is impossible to study the relationship between early illness experiences and hypochondriasis—only that it must be done using nonexperimental approaches. We will discuss this type of methodology in detail later in the book.

Independent variables can be manipulated to create two conditions, and experiments involving a single independent variable with two conditions are often referred to as a single-factor two-level design. However, sometimes greater insights can be gained by adding more conditions to an experiment. When an experiment has one independent variable that is manipulated to produce more than two conditions, it is referred to as a single-factor multi-level design. So rather than comparing a condition in which there was one witness to a condition in which there were five witnesses (which would represent a single-factor two-level design), Darley and Latané used a single-factor multi-level design by manipulating the independent variable to produce three conditions (a one-witness, a two-witness, and a five-witness condition).

Control of Extraneous Variables

As we have seen previously in the chapter, an  extraneous variable  is anything that varies in the context of a study other than the independent and dependent variables. In an experiment on the effect of expressive writing on health, for example, extraneous variables would include participant variables (individual differences) such as their writing ability, their diet, and their gender. They would also include situational or task variables such as the time of day when participants write, whether they write by hand or on a computer, and the weather. Extraneous variables pose a problem because many of them are likely to have some effect on the dependent variable. For example, participants’ health will be affected by many things other than whether or not they engage in expressive writing. This influencing factor can make it difficult to separate the effect of the independent variable from the effects of the extraneous variables, which is why it is important to  control  extraneous variables by holding them constant.

Extraneous Variables as “Noise”

Extraneous variables make it difficult to detect the effect of the independent variable in two ways. One is by adding variability or “noise” to the data. Imagine a simple experiment on the effect of mood (happy vs. sad) on the number of happy childhood events people are able to recall. Participants are put into a negative or positive mood (by showing them a happy or sad video clip) and then asked to recall as many happy childhood events as they can. The two leftmost columns of  Table 5.1 show what the data might look like if there were no extraneous variables and the number of happy childhood events participants recalled was affected only by their moods. Every participant in the happy mood condition recalled exactly four happy childhood events, and every participant in the sad mood condition recalled exactly three. The effect of mood here is quite obvious. In reality, however, the data would probably look more like those in the two rightmost columns of  Table 5.1 . Even in the happy mood condition, some participants would recall fewer happy memories because they have fewer to draw on, use less effective recall strategies, or are less motivated. And even in the sad mood condition, some participants would recall more happy childhood memories because they have more happy memories to draw on, they use more effective recall strategies, or they are more motivated. Although the mean difference between the two groups is the same as in the idealized data, this difference is much less obvious in the context of the greater variability in the data. Thus one reason researchers try to control extraneous variables is so their data look more like the idealized data in  Table 5.1 , which makes the effect of the independent variable easier to detect (although real data never look quite  that  good).

Table 5.1 Hypothetical noiseless data and realistic noisy data

Idealized data          Realistic data
Happy mood   Sad mood   Happy mood   Sad mood
4            3          3            1
4            3          6            3
4            3          2            4
4            3          4            0
4            3          5            5
4            3          2            7
4            3          3            2
4            3          1            5
4            3          6            1
4            3          8            2
M = 4        M = 3      M = 4        M = 3
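A short simulation makes the point of Table 5.1 concrete. This is an illustrative sketch, not the textbook’s data: the same one-point mean difference is obvious without noise and much harder to see once individual variability is added:

```python
# Hedged sketch: simulate the mood experiment with and without "noise".
import numpy as np

rng = np.random.default_rng(42)
n = 10
true_happy, true_sad = 4.0, 3.0

# Idealized: no extraneous variability at all.
ideal_happy = np.full(n, true_happy)
ideal_sad = np.full(n, true_sad)

# Realistic: individual differences add noise around the same means.
noisy_happy = true_happy + rng.normal(0, 2, n)
noisy_sad = true_sad + rng.normal(0, 2, n)

print("Idealized means:", ideal_happy.mean(), ideal_sad.mean())
print("Noisy means:", noisy_happy.mean().round(1), noisy_sad.mean().round(1))
# The mean difference is the same on average, but the noisy version needs
# a statistical test (and a larger sample) to stand out from the variability.
```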

One way to control extraneous variables is to hold them constant. This technique can mean holding situation or task variables constant by testing all participants in the same location, giving them identical instructions, treating them in the same way, and so on. It can also mean holding participant variables constant. For example, many studies of language limit participants to right-handed people, who generally have their language areas isolated in their left cerebral hemispheres. Left-handed people are more likely to have their language areas isolated in their right cerebral hemispheres or distributed across both hemispheres, which can change the way they process language and thereby add noise to the data.

In principle, researchers can control extraneous variables by limiting participants to one very specific category of person, such as 20-year-old, heterosexual, female, right-handed psychology majors. The obvious downside to this approach is that it would lower the external validity of the study—in particular, the extent to which the results can be generalized beyond the people actually studied. For example, it might be unclear whether results obtained with a sample of younger heterosexual women would apply to older homosexual men. In many situations, the advantages of a diverse sample (increased external validity) outweigh the reduction in noise achieved by a homogeneous one.

Extraneous Variables as Confounding Variables

The second way that extraneous variables can make it difficult to detect the effect of the independent variable is by becoming confounding variables. A confounding variable  is an extraneous variable that differs on average across  levels of the independent variable (i.e., it is an extraneous variable that varies systematically with the independent variable). For example, in almost all experiments, participants’ intelligence quotients (IQs) will be an extraneous variable. But as long as there are participants with lower and higher IQs in each condition so that the average IQ is roughly equal across the conditions, then this variation is probably acceptable (and may even be desirable). What would be bad, however, would be for participants in one condition to have substantially lower IQs on average and participants in another condition to have substantially higher IQs on average. In this case, IQ would be a confounding variable.

To confound means to confuse, and this effect is exactly why confounding variables are undesirable. Because they differ systematically across conditions—just like the independent variable—they provide an alternative explanation for any observed difference in the dependent variable. Figure 5.1 shows the results of a hypothetical study, in which participants in a positive mood condition scored higher on a memory task than participants in a negative mood condition. But if IQ is a confounding variable—with participants in the positive mood condition having higher IQs on average than participants in the negative mood condition—then it is unclear whether it was the positive moods or the higher IQs that caused participants in the first condition to score higher. One way to avoid confounding variables is by holding extraneous variables constant. For example, one could prevent IQ from becoming a confounding variable by limiting participants only to those with IQs of exactly 100. But this approach is not always desirable for reasons we have already discussed. A second and much more general approach—random assignment to conditions—will be discussed in detail shortly.


Figure 5.1 Hypothetical Results From a Study on the Effect of Mood on Memory. Because IQ also differs across conditions, it is a confounding variable.
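Random assignment is easy to express in code. The following minimal sketch (invented IQ values) shows why it defuses the confound described above: after shuffling, mean IQ is roughly equal across conditions, so IQ still varies but no longer varies systematically with the independent variable:

```python
# Hedged sketch: random assignment balances an extraneous variable (IQ).
import numpy as np

rng = np.random.default_rng(7)
iqs = rng.normal(100, 15, size=40)   # 40 participants with varied IQs

order = rng.permutation(40)          # random assignment to two conditions
positive_mood = iqs[order[:20]]
negative_mood = iqs[order[20:]]

# Mean IQ is roughly equal across conditions, so IQ is not a confound.
print(positive_mood.mean().round(1), negative_mood.mean().round(1))
```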

Key Takeaways

  • An experiment is a type of empirical study that features the manipulation of an independent variable, the measurement of a dependent variable, and control of extraneous variables.
  • An extraneous variable is any variable other than the independent and dependent variables. A confound is an extraneous variable that varies systematically with the independent variable.
  • Practice: List five variables that can be manipulated by the researcher in an experiment. List five variables that cannot be manipulated by the researcher in an experiment.
  • Practice: For each of the following topics, decide whether it could be studied using an experimental research design and explain why or why not:
  • Effect of parietal lobe damage on people’s ability to do basic arithmetic.
  • Effect of being clinically depressed on the number of close friendships people have.
  • Effect of group training on the social skills of teenagers with Asperger’s syndrome.
  • Effect of paying people to take an IQ test on their performance on that test.


Experimental Research Design — 6 mistakes you should never make!


Since their school days, students have performed scientific experiments whose results define and prove the laws and theorems of science. These experiments rest on a strong foundation of experimental research designs.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables acts as a constant, used to measure the differences of the second set. The best example of experimental research methods is quantitative research.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design is the foundation on which to build the research study. Moreover, an effective research design helps establish quality decision-making procedures, structures the research so that data analysis is easier, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A pre-experimental research design is used when a group, or multiple groups, are kept under observation after factors of cause and effect have been implemented. This design helps researchers understand whether further investigation is necessary for the groups under observation.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution of the variables

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word “Quasi” means similarity. A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not required.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • Experimental research is not limited to a particular subject area; it can be applied in any field.
  • The results are specific.
  • After the results are analyzed, findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect in a hypothesis and further analyze this relationship to develop more in-depth ideas.
  • Experimental research makes an ideal starting point: the collected data serve as a foundation for building new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis can logically be tested. If your research design does not rest on basic assumptions or postulates, it is fundamentally flawed, and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review, it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or by challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence; therefore, incorrect statistical analysis can compromise the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear, and to achieve that, you must set the framework for developing research questions that address the core problems.

5. Research Limitations

Every study has some type of limitation. You should anticipate those limitations and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you considered them while designing your experiment and drawing the conclusion.

6. Ethical Implications

The most important yet least discussed topic is ethics. Your research design must include ways to minimize any risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms alongside your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. But it is not suited to every research question: it demands substantial resources, time, and money, and it is not easy to conduct unless a foundation of prior research has been built. Yet it is widely used in research institutes and commercial industries because, within the scientific approach, it yields the most conclusive results.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Why is randomization important in experimental research?

Randomization is important in experimental research because it ensures unbiased results. It also helps measure the cause-effect relationship in the particular group of interest.

What is the importance of experimental research design?

Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

How many types of experimental research designs are there?

There are three types of experimental research designs: pre-experimental, true experimental, and quasi-experimental research designs.

What is the difference between a true experimental and a quasi-experimental design?

The differences between an experimental and a quasi-experimental design are: 1. The control group in quasi-experimental research is assigned non-randomly, unlike in a true experimental design, where assignment is random. 2. An experimental design always has a control group, whereas a quasi-experimental design may not.

What is the difference between experimental and descriptive research?

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or topic by defining its variables and answering the questions related to them.




What Is a Controlled Experiment? | Definitions & Examples

Published on April 19, 2021 by Pritha Bhandari. Revised on June 22, 2023.

In experiments, researchers manipulate independent variables to test their effects on dependent variables. In a controlled experiment, all variables other than the independent variable are controlled or held constant so they don’t influence the dependent variable.

Controlling variables can involve:

  • holding variables at a constant or restricted level (e.g., keeping room temperature fixed).
  • measuring variables to statistically control for them in your analyses.
  • balancing variables across your experiment through randomization (e.g., using a random order of tasks).

Why does control matter in experiments?

Control in experiments is critical for internal validity, which allows you to establish a cause-and-effect relationship between variables. Strong validity also helps you avoid research biases, particularly ones related to issues with generalizability (like sampling bias and selection bias).

Example: You are studying whether the color used in advertising affects what people are willing to pay for a fast food meal.

  • Your independent variable is the color used in advertising.
  • Your dependent variable is the price that participants are willing to pay for a standard fast food meal.

Extraneous variables are factors that you’re not interested in studying, but that can still influence the dependent variable. For strong internal validity, you need to remove their effects from your experiment.

In this example, extraneous variables include:

  • Design and description of the meal
  • Study environment (e.g., temperature or lighting)
  • Participant’s frequency of buying fast food
  • Participant’s familiarity with the specific fast food brand
  • Participant’s socioeconomic status


Methods of control

You can control some variables by standardizing your data collection procedures. All participants should be tested in the same environment with identical materials. Only the independent variable (e.g., ad color) should be systematically changed between groups.

Other extraneous variables can be controlled through your sampling procedures. Ideally, you’ll select a sample that’s representative of your target population by using relevant inclusion and exclusion criteria (e.g., including participants from a specific income bracket, and not including participants with color blindness).

By measuring extraneous participant variables (e.g., age or gender) that may affect your experimental results, you can also include them in later analyses.
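As a minimal sketch of this measure-and-statistically-control approach (simulated data; statsmodels assumed), the measured extraneous variable is simply entered alongside the independent variable in a regression, so the treatment effect is estimated net of it:

```python
# Hedged sketch: statistically controlling for a measured extraneous
# variable (age) by including it as a covariate in a regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
ad_color = rng.integers(0, 2, n)   # 0 = red ad, 1 = green ad
age = rng.normal(35, 10, n)        # measured extraneous variable
# Simulated willingness to pay depends on both ad color and age.
price = 5 + 0.8 * ad_color + 0.05 * age + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([ad_color, age]))
model = sm.OLS(price, X).fit()
print(model.params)  # second value: ad-color effect with age controlled
```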

After gathering your participants, you’ll need to place them into groups to test different independent variable treatments. The types of groups and method of assigning participants to groups will help you implement control in your experiment.

Control groups

Controlled experiments require control groups. Control groups allow you to test a comparable treatment, no treatment, or a fake treatment (e.g., a placebo to control for a placebo effect), and compare the outcome with your experimental treatment.

You can assess whether it’s your treatment specifically that caused the outcomes, or whether time or any other treatment might have resulted in the same effects.

To test the effect of colors in advertising, each participant is placed in one of two groups:

  • A control group that’s presented with red advertisements for a fast food meal.
  • An experimental group that’s presented with green advertisements for the same fast food meal.

Random assignment

To avoid systematic differences and selection bias between the participants in your control and treatment groups, you should use random assignment.

This helps ensure that any extraneous participant variables are evenly distributed, allowing for a valid comparison between groups.

Random assignment is a hallmark of a “true experiment”—it differentiates true experiments from quasi-experiments.

Masking (blinding)

Masking in experiments means hiding condition assignment from participants or researchers—or, in a double-blind study, from both. It’s often used in clinical studies that test new treatments or drugs and is critical for avoiding several types of research bias.

Sometimes, researchers may unintentionally encourage participants to behave in ways that support their hypotheses, leading to observer bias. In other cases, cues in the study environment may signal the goal of the experiment to participants and influence their responses. These are called demand characteristics. If participants behave a particular way due to awareness of being observed (called a Hawthorne effect), your results could be invalidated.

Using masking means that participants don’t know whether they’re in the control group or the experimental group. This helps you control biases from participants or researchers that could influence your study results.

You use an online survey form to present the advertisements to participants, and you leave the room while each participant completes the survey on the computer so that you can’t tell which condition each participant was in.

Problems with controlled experiments

Although controlled experiments are the strongest way to test causal relationships, they also involve some challenges.

Difficult to control all variables

Especially in research with human participants, it’s impossible to hold all extraneous variables constant, because every individual has different experiences that may influence their perception, attitudes, or behaviors.

But measuring or restricting extraneous variables allows you to limit their influence or statistically control for them in your study.

Risk of low external validity

Controlled experiments have disadvantages when it comes to external validity: the extent to which your results can be generalized to broad populations and settings.

The more controlled your experiment is, the less it resembles real world contexts. That makes it harder to apply your findings outside of a controlled setting.

There’s always a tradeoff between internal and external validity. It’s important to consider your research aims when deciding whether to prioritize control or generalizability in your experiment.


Frequently asked questions about controlled experiments

In a controlled experiment, all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Cite this Scribbr article


Bhandari, P. (2023, June 22). What Is a Controlled Experiment? | Definitions & Examples. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/controlled-experiment/


Demonstration Experiment of a Communication Service Fault Diagnosis System Using Quantum Machine Learning


  • 01. Introduction
  • 02. Network Operations and Communication Service Fault Diagnosis System
  • 03. Quantum Kernel Learning Algorithm and Proposed Method
  • 04. Error Suppression in Quantum Computers
  • 05. Simulation Experiment for Proposed Method Using Quantum Computers
  • 06. System Demonstration Experiment Using IBM Quantum Computer Hardware
  • 07. Conclusion

1. Introduction

SoftBank Corp. (hereinafter "SoftBank"), The University of Electro-Communications (hereinafter "UEC"), and Keio University (hereinafter "Keio") have successfully conducted a demonstration experiment of a communication service fault diagnosis system using quantum machine learning.

In recent years, with the advancement of corporate DX (Digital Transformation) and the widespread adoption of remote work, along with the expectation that Beyond 5G/6G will realize a data-driven society through the combination of ultra-high-speed, large-capacity transmission technology and edge computing technology, the quality of communication networks supporting these developments has become a crucial indicator. As the demand for such networks expands, the configuration of communication equipment for service providers has become large-scale and complex. The automation of network operations using advanced computing technologies powered by artificial intelligence (AI), known as MLOps, is under consideration [1]–[4]. However, realizing MLOps with classical computers and their algorithms presents challenges in terms of energy consumption, computational complexity, and vulnerabilities. Therefore, there are high expectations for quantum computers to enhance and speed up these processes.

Currently, most quantum computers available are NISQ (Noisy Intermediate-Scale Quantum) devices [5], which are mid-sized systems with around 100 qubits. To achieve practical performance with these devices, it is crucial to improve algorithms and develop error suppression and correction technologies. Research on quantum computer algorithms is primarily advancing in the fields of quantum chemistry, mathematical optimization, and machine learning, with performance validation using hardware like IBM's superconducting gate-based quantum computers becoming increasingly active [6][7][8]. In light of these developments, SoftBank, UEC, and Keio have conducted a demonstration experiment using IBM's superconducting gate-based quantum computer and Q-CTRL's error suppression system to implement a quantum machine learning-based communication service fault diagnosis system.

2. Network Operations and Communication Service Fault Diagnosis System

The commercial service network that forms the foundation for telecommunication services provided to customers consists of a core network connecting major cities nationwide like arteries, an area network configured at the regional level, and an access network connecting customer sites and mobile base stations. At the top level, these connect to international communication networks such as submarine cables via IX (Internet Exchange) points.


Figure 1. Image of the Commercial Service Network

Network operations by telecommunications providers primarily focus on maintaining uninterrupted communication services for customers. This involves 24/7 service monitoring, fault isolation, and recovery, as well as maintenance work. Operators use operational systems to perform these tasks (Figure 2). Fault isolation, in particular, involves identifying the equipment hosting the affected service and determining the cause of the fault using a vast array of equipment commands (Figure 3).


Figure 2. Troubleshooting workflow


Figure 3. Fault diagnosis using device commands

In this study, we conducted a proof-of-concept experiment for a communication service fault diagnosis system using quantum machine learning. We used a dataset extracted from logs of systems operating in commercial networks. The dataset (Figure 4) has command types on the horizontal axis and fault patterns on the vertical axis. The plots correspond to 1 if the command execution result is abnormal, and 0 if normal. The colors of each plot correspond to seven types of fault causes. We use the 0-1 sequence on the horizontal axis as a feature vector. Machine learning is performed using the fault cause for each feature vector as training data, and the constructed model is used for fault diagnosis.


Figure 4. Dataset of Communication Service Fault Diagnosis

The communication service fault diagnosis system (Figure 5) operates by performing offline training using the dataset; during online processing, it estimates the cause of faults for unknown feature vectors. The system is divided into parts processed by classical computers and parts processed by quantum computers. In the offline training process, the dataset is first dimensionally reduced to match the number of qubits used in the calculations and then normalized. For cross-validation, the dataset is split into learning and test sets, which are used to evaluate the machine learning model's performance; in this study, a 50% split ratio was used. Then, the data is parameterized for qubits, and a quantum circuit is generated. Up to this point, all processing is done on a classical computer. Following this, a quantum computer is used to generate a Gram matrix by performing exhaustive inner product calculations over the fault patterns in the training data. This Gram matrix is used as a kernel in a Support Vector Machine (SVM) machine learning model to estimate the fault causes.


Figure 5. Communication service fault diagnosis system
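The classical preprocessing stages described above can be sketched with standard tools. The specific choices below (scikit-learn, PCA for dimensionality reduction, min-max scaling to rotation angles) are illustrative assumptions; the article does not specify SoftBank's actual implementation:

```python
# Hedged sketch: reduce 120-dimensional 0-1 command vectors to the qubit
# count, scale to rotation angles, and split 50/50 for cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 120)).astype(float)  # toy command results
y = rng.integers(0, 7, size=200)                       # 7 fault-cause classes

n_qubits = 30
X_reduced = PCA(n_components=n_qubits).fit_transform(X)
X_angles = MinMaxScaler(feature_range=(0, np.pi)).fit_transform(X_reduced)

X_train, X_test, y_train, y_test = train_test_split(
    X_angles, y, test_size=0.5, random_state=0)        # 50% split ratio
```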

Currently, for each fault diagnosis, all commands corresponding to the dimensions of the feature vector are executed. However, in our previous research [1], we developed a technology using deep reinforcement learning, which assigns a reward based on the confidence level of class convergence, to explore and execute only the commands necessary for identifying the fault cause.

3. Quantum Kernel Learning Algorithm and Proposed Method

Quantum kernel learning is expected to provide superior analytical performance compared to classical computers due to the rich expressiveness of qubits and the complexity in ultra-high-dimensional spaces of quantum entanglement.

Generally, kernel methods (Figure 6) refer to techniques that make data that cannot be linearly separated linearly separable by mapping it to a higher-dimensional feature space (kernel space). Quantum kernel learning, however, uses the quantum state space of quantum computers as the feature space. Figure 7 illustrates how classical data, mapped to the quantum state space, is separated based on its amplitude direction within the phase range of 0 to 2π.


Figure 6. Kernel method


Figure 7. Quantum kernel

In this study, we devised a proprietary quantum entanglement control circuit (patent pending) for kernel generation in quantum kernel learning, successfully enhancing the performance of quantum computers to accommodate more generalized data. By parameterizing the feature vectors of the input data into each qubit and adjusting the entanglement strength between adjacent qubits, we were able to control the mapping into quantum states that correspond to the characteristics of the input data. This enabled the implementation of a computational method that efficiently utilizes the quantum state space across the entire quantum circuit.

For kernel generation, the quantum computer performs exhaustive inner product calculations for combinations of the input data feature vectors x_l, x_m (Equation 1). For each Φ(x) in the gate operator U_Φ(x), the elements of each vector are mapped into the unitary space of n qubits, and a quantum circuit is generated (Equation 2). In the standard quantum kernel form:

K(x_l, x_m) = |⟨Φ(x_l) | Φ(x_m)⟩|²    (1)

|Φ(x)⟩ = U_Φ(x) |0⟩^⊗n    (2)

We defined the conventional method using a quantum circuit where the feature vector to be computed is simply parameterized into X rotation gates, as shown in Figure 8. We then compared this with our proposed method, which is explained below.


Figure 8. Conventional method

In our proposed method, we control the mapping to quantum states according to input data characteristics by adjusting the entanglement strength between adjacent qubits (Figure 9). This realizes a computational method that efficiently utilizes the quantum state space across the entire quantum circuit (Figure 10).


Figure 9. Quantum Entanglement Control Circuit


Figure 10. Parametrized Energy-Efficient Quantum Kernel Learning Circuit

In the quantum entanglement generation circuit of the gate operator U_Φ(x) in the proposed method (Equation 3), the phase parameters Φ_{p,q}(x) of the Z-rotation gate (Equation 4) are influenced by parameterized values that act on adjacent qubits with respect to the quantum entanglement strength. By defining a coefficient α for this, we made it possible to adjust the quantum entanglement effect efficiently.

The kernel can be obtained by calculating the Gram matrix K using a quantum computer for each parameterized quantum circuit (Equation 5). In the standard form, each entry is the measured state overlap:

K_{l,m} = |⟨0^⊗n| U†_Φ(x_l) U_Φ(x_m) |0^⊗n⟩|²    (5)

This study, as explained above, investigates how the proposed method of quantum entanglement control affects the classification performance in quantum kernel learning.
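The overall flow (encode feature vectors into a parameterized circuit, compute the Gram matrix of state overlaps, and hand it to an SVM as a precomputed kernel) can be sketched with off-the-shelf tools. In this minimal, illustrative example, Qiskit's generic ZZFeatureMap stands in for the proprietary entanglement-control circuit, and the alpha factor merely rescales the encoded angles as a crude stand-in for an entanglement-strength knob:

```python
# Hedged sketch of quantum kernel learning (Qiskit and scikit-learn assumed).
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit.quantum_info import Statevector
from sklearn.svm import SVC

n_qubits = 4
feature_map = ZZFeatureMap(feature_dimension=n_qubits, reps=2)

def quantum_kernel(X1, X2, alpha=1.0):
    # Gram matrix K[l, m] = |<phi(x_l)|phi(x_m)>|^2, as in Equation 5.
    s1 = [Statevector(feature_map.assign_parameters(alpha * x)) for x in X1]
    s2 = [Statevector(feature_map.assign_parameters(alpha * x)) for x in X2]
    return np.array([[abs(a.inner(b)) ** 2 for b in s2] for a in s1])

# Toy stand-ins for angle-encoded command features and fault-cause labels.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, np.pi, size=(20, n_qubits))
y_train = rng.integers(0, 3, size=20)
X_test = rng.uniform(0, np.pi, size=(8, n_qubits))

svm = SVC(kernel="precomputed")  # SVM on the precomputed quantum kernel
svm.fit(quantum_kernel(X_train, X_train), y_train)
predicted_fault_causes = svm.predict(quantum_kernel(X_test, X_train))
```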

4. Error Suppression in Quantum Computers

Errors in quantum computers are caused by various factors such as decoherence, gate errors, readout errors, crosstalk, quantum phase errors, and thermal noise. While error suppression is relatively manageable when the physical model is well understood, it becomes increasingly difficult as the system scales up.

In this study, we significantly reduced quantum noise in NISQ machines by using Q-CTRL's error suppression system, “Fire Opal”. Fire Opal is a software package designed to achieve AI-based error suppression and improve quantum algorithm performance on quantum hardware. It takes a deterministic approach to error reduction without requiring additional execution overhead such as sampling or randomization.

Using deep reinforcement learning, the hardware model is learned effectively, encapsulating environmental information from the quantum computer as reward information in terms of fidelity. The agent operates by mapping this reward to actions related to control pulses and other environmental changes. For piecewise-constant control waveforms, the optimal Hamiltonian is explored by repeating episodes over state-observation cycles, with learning and calibration continuing until the optimal value is reached. This use of deep reinforcement learning enables effective error suppression in quantum computers without prior knowledge of the physical model of the errors [11][12].


Figure 11. Optimization Through Deep Reinforcement Learning in Q-CTRL's Error Suppression System

5. Simulation Experiment for Proposed Method Using Quantum Computers

We evaluated the proposed method using a tensor network simulator. For the 120-dimensional command sequences in the dataset, we performed dimensionality reduction to 10–50 dimensions to match the number of qubits being evaluated. We then assessed the distribution of classification accuracy using 50/50 train-test splits repeated over 100 split patterns. The proposed method outperformed both SVM with conventional quantum kernel learning and SVM with classical kernel learning: averaged across all qubit numbers, conventional quantum kernel learning achieved 77%, our proposed quantum kernel learning 81%, and classical kernel learning 78%.
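A sketch of this evaluation protocol, under stated assumptions, is given below: gram_matrix refers to the sketch shown earlier, PCA stands in for the unspecified dimensionality-reduction step, and preprocessing details such as feature scaling are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.svm import SVC

def evaluate(X: np.ndarray, y: np.ndarray, n_qubits: int = 10,
             alpha: float = 1.0, n_splits: int = 100, seed: int = 0):
    """Repeated 50/50-split evaluation of an SVM with a precomputed quantum
    kernel, mirroring the protocol described above. The dataset itself is
    not reproduced here; gram_matrix is the earlier sketch."""
    Xr = PCA(n_components=n_qubits, random_state=seed).fit_transform(X)
    K = gram_matrix(Xr, alpha)                    # full Gram matrix, computed once
    splits = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.5,
                                    random_state=seed)
    scores = []
    for train, test in splits.split(Xr, y):
        clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
        scores.append(clf.score(K[np.ix_(test, train)], y[test]))
    return np.mean(scores), np.std(scores)
```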

Figure 12. Evaluation of Classification Performance Using Tensor Network Simulation

The following evaluation used a single split of the sample data for each number of qubits. The classification accuracy of each classical method ranged from 85% to 89%. Figure 13 evaluates the relationship between the α parameter, which controls the quantum entanglement strength, and the estimation accuracy of the proposed method. It shows that, for a given training dataset, there are optimal settings for the quantum entanglement strength and the number of qubits.

Figure 13. α Parameter Dependence Characteristics of Estimation Accuracy

In this study, we used a common α parameter for all qubit pairs. However, we anticipate that setting independent parameters for each qubit pair could yield learning performance that's more sensitive to detailed data features. For learning, we conducted parameter searches using pre-evaluation through simulation. While current state vector simulators can only handle calculations up to about 30 qubits, tensor network simulators can simulate larger qubit numbers, albeit with some approximation errors. As we move to even larger qubit numbers that simulations can't handle, evaluating ideal values becomes impossible. In such cases, we can consider models that operate while directly tuning parameters on quantum computer hardware.

6. System Demonstration Experiment Using IBM Quantum Computer Hardware

We evaluated the proposed method on IBM's gate-based quantum computer (IBM Quantum System One: IBM-Kawasaki, 127 qubits), using the optimal settings confirmed through simulation. The results of the comparative evaluation of fault-cause inference performance in the communication service fault diagnosis system are shown in Figure 14. We compared a state-vector simulator, a tensor-network simulator, IBM's gate-based quantum computer alone, and IBM's gate-based quantum computer with error suppression applied. By applying error suppression, we achieved an inference accuracy of 82% for fault causes using 30 qubits. This is currently the largest number of qubits used for quantum kernel learning on a real quantum device. Estimation accuracy peaked at 30 qubits, beyond which a degradation trend was observed, likely due to quantum computer noise and the limited number of data samples.

Figure 14. Fault Cause Inference Performance Using Quantum Computers

Regarding the kernel obtained from the quantum computer, a comparison of the relative values of all elements of the Gram matrix at 30 qubits against the ideal values showed that error suppression resulted in uniform performance improvement.

Figure 15. Comparison of Relative Values Against Ideal Values for the Gram Matrix at 30 Qubits

7. Conclusion

In this study, we successfully demonstrated the practical performance of quantum computers by improving the quantum kernel learning algorithm and applying error suppression, using data from systems operating in SoftBank's commercial services. This achievement significantly contributes to the advancement of computational technology using quantum computers and their implementation in society. Moving forward, we will promote research aimed at expanding the application scope and advantages of quantum algorithms, improving computational performance through enhanced quantum hardware capabilities, and achieving scalability and integration across network architectures. Through these efforts, we aim to contribute to the early practical application and social implementation of quantum computing technology. This research outcome has been accepted as a paper for the Technical Session (QML) at the "IEEE International Conference on Quantum Computing and Engineering (QCE24)" to be held from September 15-20, 2024, where it is scheduled for presentation.

Physical Review Research

Enhancing quantum state tomography via resource-efficient attention-based neural networks

Adriano Macarone Palmieri, Guillem Müller-Rigat, Anubhav Kumar Srivastava, Maciej Lewenstein, Grzegorz Rajchel-Mieldzioć, and Marcin Płodzień, Phys. Rev. Research 6, 033248 – Published 4 September 2024

In this paper, we propose a method for denoising experimental density matrices that combines standard quantum state tomography with an attention-based neural network architecture. The algorithm learns the noise from the data itself, without a priori knowledge of its sources. Firstly, we show how the proposed protocol can improve the averaged fidelity of reconstruction over linear inversion and maximum likelihood estimation in the finite-statistics regime, reducing the amount of necessary training data by at least an order of magnitude. Next, we demonstrate its use for out-of-distribution data in realistic scenarios. In particular, we consider squeezed states of a few spins in the presence of depolarizing noise and measurement/calibration errors and certify their metrologically useful entanglement content. The protocol introduced here targets experiments involving few degrees of freedom and afflicted by a significant amount of unspecified noise. These include NISQ devices and platforms such as trapped ions or photonic qudits.
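As a minimal illustration of the standard tomography baseline that the network postprocesses, the toy sketch below performs single-qubit linear inversion and shows how finite statistics can push the estimate outside the physical state set; the paper itself works with multi-qubit states and SIC-POVM or Pauli data.

```python
import numpy as np

# Single-qubit Pauli basis.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def linear_inversion(ex: float, ey: float, ez: float) -> np.ndarray:
    """One-qubit linear inversion: rho = (I + <X>X + <Y>Y + <Z>Z) / 2.
    With finite statistics the estimate can leave the physical state set
    (negative eigenvalues) -- exactly the kind of statistical noise the
    paper's neural network learns to remove."""
    return 0.5 * (I2 + ex * X + ey * Y + ez * Z)

# Finite-sample expectation values for the |+> state, 100 shots per basis.
rng = np.random.default_rng(1)
shots = 100
ex = 1.0                                        # <X> = +1 exactly for |+>
ey = 2 * rng.binomial(shots, 0.5) / shots - 1   # true <Y> = 0
ez = 2 * rng.binomial(shots, 0.5) / shots - 1   # true <Z> = 0
rho = linear_inversion(ex, ey, ez)
print(np.linalg.eigvalsh(rho))  # typically shows a slightly negative eigenvalue
```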

  • Received 3 October 2023
  • Accepted 14 August 2024

DOI: https://doi.org/10.1103/PhysRevResearch.6.033248

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Authors & Affiliations

  • 1 ICFO - Institut de Ciències Fotòniques , The Barcelona Institute of Science and Technology , 08860 Castelldefels, Barcelona, Spain
  • 2 ICREA , Pg. Lluis Companys 23, 08010 Barcelona, Spain
  • 3 NASK National Research Institute , ul. Kolska 12, 01-045 Warszawa, Poland
  • * These authors contributed equally to this work.
  • † Contact author: [email protected]
  • ‡ Contact author: [email protected]
  • § Contact author: [email protected]

Vol. 6, Iss. 3 — September - November 2024

Subject Areas

  • Atomic and Molecular Physics
  • Quantum Physics
  • Quantum Information

Schematic representation of the data pipeline of our QST hybrid protocol. Panel (a) shows data acquisition from a generic experimental setup, during which the frequencies f are collected. Next, panel (b) presents standard density matrix reconstruction; in our paper, we test the computationally cheap LI method together with the expensive MLE, to better analyze the network's reconstruction behavior and ability. Panel (c) depicts the matrix-to-matrix deep-learning strategy for Cholesky matrix reconstruction. The architecture considered here combines convolutional layers for input and output with a transformer model in between. Finally, we compare the reconstructed state ρ̂ with the target τ̂.

Evaluation of the QST reconstruction quality, measured by the mean squared Hilbert-Schmidt distance D̄²_HS between the target and the reconstructed state for different QST protocols, averaged over 1000 target states. In both panels, the best-performing setups are those that are as far right (better quality) and bottom (less costly) as possible. Panel (a) uses the number of measurements N_trial to compare four QST protocols: linear inversion (LI, green dots), neural network enhanced MLE (MLE-NN, orange crosses), neural network enhanced LI (NN-LI, blue diamonds), and maximum likelihood estimation (MLE, red squares). We add an inset focusing on the undersampled regime, N_trial ≤ 5 × 10³. Panel (b) shows the quality of reconstruction as a function of the product N_trial × N_train for the latter two protocols and the network model proposed in Ref. [53] (violet triangles). Both panels depict resource costs on the horizontal axes in different scenarios: in (a), the cost is the number of performed measurements, while in (b), the training phase is additionally counted as a cost. Our proposed protocol achieves competitive averaged HS reconstruction with training data an order of magnitude smaller than the method proposed in Ref. [53]. During model training, we used N_train = 2000 random pure states for the MLE-NN protocol and N_train = 5000 for the LI-NN. Lines are to guide the eye; shaded areas represent one standard deviation.

Two different simulations for out-of-distribution (OOD) inference. In each panel, we evaluate the normalized quantum Fisher information (QFI) for 100 four-qubit states as the validation metric. The target, noiseless states are evolved according to the OAT dynamics given in (9) and depicted by the purple dotted line. For these OOD tests, the neural network was trained exclusively to learn statistical sampling noise. During inference, test data are additionally affected by depolarization and measurement (calibration) errors. The green line represents the normalized QFI derived from reconstructions via the linear inversion (LI) algorithm; the red line illustrates the enhancement provided by the network when supplemented with LI reconstructions, underscoring the robustness of our protocol in mitigating noise effects.

Time evolution of the normalized QFI during the OAT protocol for an L = 4 qubit system. Solid blue lines represent the QFI calculated for target quantum states. The mean values of the QFI calculated from tomographically reconstructed density matrices are denoted by green dashed lines (reconstruction via LI) and red dotted lines (reconstruction via neural network postprocessing of LI outputs). Shaded areas mark one standard deviation after averaging over 10 reconstructions. Panels (a) and (b) correspond to the LI protocol with SIC-POVM data, whereas (c) and (d) denote LI reconstruction inferred from Pauli measurements. In the upper row, the left (right) column corresponds to N_trial = 10³ (10⁴) trials; in the lower row, the left (right) column reproduces an LI initial fidelity reconstruction of ~74% (~86%). The red lines represent the whole setup with neural network postprocessing of the data from the corresponding green lines, indicating improvement over the LI method. The neural network advantage over the bare LI method can be characterized by entanglement depth certification, as shown by the horizontal lines denoting the entanglement depth bounds ranging from the separable limit (bottom line, bold) to the genuine L-body limit (top line). In particular, the presence of entanglement, k ≥ 2, is witnessed by QFI > L, as shown by the violation of the separable bound (bold horizontal line).

Comparison of the efficiency of QST reconstruction schemes evaluated using the mean squared Hilbert-Schmidt distance D̄²_HS for transformer-based, 2-layer, and 4-layer attention-free CNN models, averaged over 1000 mixed states. All models share an equivalent number of training parameters. (a) Average reconstruction values for the 10 different LI-preprocessed test datasets. Similarly to Fig. 2, we vary the number of trials N_trials to analyze the reconstruction efficiency, and also use states of dimension 9 for a direct comparison. (b) The same analysis applied to the models trained on the MLE-preprocessed data. To summarize, only for the MLE-preprocessed data can the 4-layer CNN model outperform the transformer-based one, for N_trials = 10⁶, 10⁵, while for the LI-preprocessed data our network shows better outcomes.

Time evolution of the normalized QFI during the OAT protocol for four qubits. The dotted dark grey line represents the QFI calculated for the target quantum state, and the light grey dashed line is the QFI upon LI reconstruction (our minimal threshold). Panels (a) and (b) correspond to the QFI obtained for the states reconstructed by the 2-layer and 4-layer CNNs, respectively. We observe that, firstly, the transformer-based model outperforms the CNN models at all times, with reconstruction ability very close to the OAT target states. Secondly, the CNN models perform equivalently irrespective of the number of layers in the architecture, as shown in panels (a) and (b), when considering the QFI as our reconstruction metric.

Averaged HS distance of the reconstructed MLE from the HS ensemble with d = 9, mixed according to Eq. (I1) for different values of p (coloured lines). We highlight the limiting cases, namely p = 0 (solid line), the average with respect to I/d, and p = 1 (dashed), the MLE result. The envelope of this family of lines is marked with a dotted line. Such a bound can be realized with an optimal p*, which depends on the number of trials via the reconstructed {ρ̂_MLE}.

Geometric interpretation of the optimal depolarization of the MLE state, so as to incorporate the statistical noise stemming from a finite number of experimental runs.

Average reconstruction distance as a function of the mixing parameter p for a given set of trial numbers. We verify the parabolic curves of Eq. (I2) and the nontrivial minima.

Action of the neural network as a conditional debiaser. (a) Inference of the state τ̂ from many finite-size realizations {ρ̂_f}, each not necessarily a proper state (i.e., it might lie outside S). (b) Disregarding the nonphysical realizations results in a skewed conditional distribution whose mean is displaced from the true state. The action of the neural network is then to shift the mean back to the target state by drifting the distribution.

Reuse & Permissions

It is not necessary to obtain permission to reuse this article or its components as it is available under the terms of the Creative Commons Attribution 4.0 International license. This license permits unrestricted use, distribution, and reproduction in any medium, provided attribution to the author(s) and the published article's title, journal citation, and DOI are maintained. Please note that some figures may have been included with permission from other third parties. It is your responsibility to obtain the proper permission from the rights holder directly for these figures.

  • Open access
  • Published: 03 September 2024

Enhanced continuous atmospheric water harvesting with scalable hygroscopic gel driven by natural sunlight and wind

  • Xinge Yang 1 ,
  • Zhihui Chen 1 ,
  • Chengjie Xiang   ORCID: orcid.org/0000-0001-9069-2052 1 ,
  • He Shan   ORCID: orcid.org/0000-0002-9105-3006 1 &
  • Ruzhu Wang   ORCID: orcid.org/0000-0003-3586-5728 1  

Nature Communications, volume 15, Article number: 7678 (2024)

  • Engineering
  • Materials for devices
  • Renewable energy

Sorption-based atmospheric water harvesting (SAWH) has received unprecedented attention as a future water and energy platform. However, the water productivity of SAWH systems is still constrained by the slow sorption kinetics at material and component levels and inefficient condensation. Here, we report a facile method to prepare hygroscopic interconnected porous gel (HIPG) with fast sorption-desorption kinetics, high scalability and stability, and strong adhesion property for highly efficient SAWH. We further design a solar-wind coupling driven SAWH device with collaborative heat and mass enhancement achieving continuous water production. Concentrated sunlight contributes to enhancing the desorption and condensation synergistically, and natural wind is introduced to drive the device operation and improve the sorption kinetics. The device demonstrated record high working performance of 14.9 L water m −2 day −1 and thermal efficiency of 25.7% in indoor experiments and 3.5–8.9 L water m −2 day −1 in outdoor experiments by solar concentration without any other energy consumption. This work provides an up-and-coming pathway to realize highly efficient and sustainable clean water supply for off-grid and arid regions.

Introduction

Freshwater scarcity is a global challenge threatening the sustainable development of human society. It is estimated that two-thirds of the global population will live under water-stressed conditions by 2025 1 . Luckily, the atmosphere contains 12,900 trillion liters of water in the form of water vapor and droplets, equivalent to ~10% of all fresh water in lakes on earth 2 . Harvesting water from ubiquitous atmospheric water has been a promising technology to solve water shortage crisis 3 . Furthermore, the arid regions generally receive solar irradiation higher than the average, endowing the solar-driven SAWH systems with the potential to realize off-grid water supply 4 , 5 .

One of the most important factors affecting the working performance of sorption-based atmospheric water harvesters is the water sorption performance of sorbents. Researchers have made tremendous efforts to develop state-of-the-art sorbents such as metal-organic frameworks (MOFs) 6 , 7 , 8 , 9 , hydrogels 10 , 11 , 12 , liquid sorbents 13 , 14 and composite sorbents 15 , 16 , 17 , 18 . Among them, salt-based composite sorbents composed of hygroscopic salt and porous matrix have attracted much attention due to their high water sorption capacity in a wide range of relative humidity (RH) 19 . Hygroscopic salts possess high water uptake, but often deliquesce at a certain RH, where the agglomeration of salt crystals causes the formation of the passivation layer, leading to sluggish sorption kinetics. The subsequent solution leakage results in a weak cycling stability 20 . The primary strategy to address this challenge is adopting porous matrixes including MOFs 21 , 22 , hollow spheres 23 , 24 , fibrous membranes 25 , 26 , 3D skeletons 15 , 27 , 28 and hydrogels 29 , 30 , 31 , 32 to disperse and confine the hygroscopic salts, wherein hydrogels are favored for their high tunability and strong water retention ability due to the swelling characteristic 33 . However, the internal structure of hydrogels is generally not conducive to the water vapor transport, resulting in slow sorption kinetics of hygroscopic gels 34 .

Although various state-of-the-art sorbents have been developed for SAWH, fully utilizing the performance of sorbents to serve practical applications of SAWH systems is a grand challenge. Due to the relatively low power density of solar energy, the temperature that the sorbent can reach during desorption is not high, leading to a low dew point temperature of the humid air in the desorption chamber and thus inefficient condensation. Most of the previously reported studies addressed this issue by adopting active cooling such as forced air cooling 15 , vapor compression refrigeration 35 , 36 and thermoelectric refrigeration 37 , or employing electric heating to increase the desorption temperature 38 , 39 , both of which rely on electricity. However, most remote arid areas may not have well-developed power infrastructure and the conversion efficiency of only around 20% for photovoltaic panels limits the energy efficiency improvement of SAWH 40 , so it is urgent to find an effective strategy to realize efficient condensation without consuming electricity 41 . Besides, although multicyclic sorption-desorption and continuous operation modes with simultaneous sorption and desorption have been proposed to solve the mismatch between the sorption and desorption rates for enhancing the daily water yield 42 , 43 , most of them are achieved through manual operation or electric drive 15 , 24 , 44 . The former is not an ideal operation mode with a lot of inconvenience, while the latter is not suitable for the off-grid and distributed scenarios. Therefore, realizing highly efficient SAWH requires comprehensive consideration of sorbents, heat and mass transfer, components of the device and operation strategies of the system.

Herein, we developed a super hygroscopic interconnected porous gel (HIPG) with fast sorption and desorption kinetics, high scalability, reliable water retention ability, and strong adhesion property appropriate for continuous atmospheric water harvesting. The HIPG consisting of hydroxypropyl methylcellulose (HPMC) and sodium polyacrylate (PAAS) matrix, lithium chloride (LiCl) and photothermal component titanium nitride (TiN) nanoparticles was prepared by foaming-drying method, which was time-saving and suitable for large-scale production. The generated interconnected porous structure with high pore volume and hierarchical pores reduced the water vapor diffusion resistance within the HIPG, accelerating the water vapor transport and thus leading to fast sorption and desorption kinetics. As a result, the HIPG showed high water uptake of 1.01, 2.03, 6.83 g g −1 under 30%, 60%, 90% RH, and could reach 93.2%, 80.5%, and 76.4% of the equilibrium sorption capacity within 30 minutes under 25 °C and 30%, 45%, 60% RH. For desorption, the HIPG also demonstrated rapid kinetics, which could release 87.7% of the equilibrium water sorption capacity within 30 minutes under 1 sun irradiation. To realize high water productivity with an ideal operation mode of SAWH, we designed a solar-wind coupling driven continuous SAWH device with enhanced thermal and mass transfer design. An efficient and cost-effective strategy was proposed to realize the synergetic enhancement of desorption and condensation through solar concentration, accelerating the SAWH cycle and improving water productivity. The wind energy was subtly introduced to drive the continuous operation of the device, speeding up the sorption kinetics simultaneously. Consequently, the SAWH device delivered extraordinary working performance of 4050 mL water  kg sorbent −1  day −1 , 14.9 L water  m −2  day −1 and thermal efficiency as high as 25.7% in indoor experiments (~57% RH) and 3.5–8.9 L water  m −2  day −1 in outdoor experiments by solar concentration without any other energy consumption, superior to previous SAWH research. This work demonstrated a HIPG-based solar-wind coupling driven continuous SAWH device with high thermal efficiency and high water productivity, providing a promising pathway to realize highly efficient and sustainable clean water supply for off-grid and arid regions.

Synthesis and characterization of HIPG

Salt-based hygroscopic gels perform well in water sorption capacity, but suffer from slow sorption kinetics caused by their high internal water diffusion resistance, especially for the massive sorbents with large packing thickness 45 . To accelerate the vapor transport for highly efficient SAWH, we developed an interconnected porous gel through the foaming-drying method. HPMC, a non-ionic surfactant, was added into the mixed solution of LiCl and TiN nanoparticles, enabling it to be fully foamed via mechanical agitation. The foam structure was relatively stable due to the reduced surface tension and enhanced solution viscosity. Benefiting from the highly tailorable properties of the gel, PAAS was introduced as an anionic surfactant and thickener, further decreasing the surface tension and improving the viscosity to stabilize the foam gel. Moreover, PAAS could prevent salt solution leakage due to the swelling behavior and provide numerous hydrophilic functional groups to enhance the adhesive force between the HIPG and the substrate for continuous water harvesting. The gelation of the HPMC-PAAS gel was achieved by association through hydrogen bonding (Supplementary Fig.  5 ). During the drying process of the foam gel, the HPMC-PAAS formed a matrix, and the water escaping channels became the water transfer channels (Fig.  1A ). This foaming-drying method was time-saving and didn’t require the low-temperature vacuum environment, which was suitable for large-scale production.

Figure 1

A The schematic of the structure of HIPG and the water vapor transport within the HIPG. B , C The SEM images of HIPG from the top view ( B ) and high magnification ( C ). D , E The SEM images of HIPG from the cross-sectional view ( D ) and high magnification ( E ). F The pore size distribution of HIPG. G The concentration changes of PM 2.5 and PM 10 detected by the detectors located in the receiving cavity for different porous matrixes over time. H The FTIR patterns of HIPG at ~23 °C, ~60% RH. I The XRD patterns of each component and HIPG at different temperatures.

The top view scanning electron microscopy (SEM) image showed the interconnected porous structure with micron-sized pores of the HIPG (Fig. 1B). A relatively uniform pore size distribution was observed in the partially enlarged figure of the top view SEM image (Fig. 1C). Besides, the sectional view SEM image also exhibited the interconnected porous structure with high pore density, ascribed to the excellent bubble stability of the foam gel (Fig. 1D). The SEM images with higher magnification indicated the presence of pores ranging from hundreds to thousands of nanometers on the pore walls, which could also serve as water vapor transport channels (Fig. 1E, Supplementary Fig. 6). To obtain the specific pore size distribution of HIPG, a 3D X-ray microscope (also known as micro-CT) was adopted to scan a HIPG sample (Supplementary Fig. 7). Through the statistical analysis of micro-CT images, the main pore sizes of HIPG were concentrated between 150 and 300 μm, as shown in Fig. 1F. To further characterize the interconnected porous structure of HIPG and its effect on internal diffusion resistance, we conducted a particulate diffusion test (Supplementary Fig. 8). The results in Fig. 1G indicated that particles with diameters less than 10 μm could continuously pass through the HIPG, serving as direct evidence for its interconnected porous structure. Furthermore, the particulate matter 2.5 (PM 2.5) and particulate matter 10 (PM 10) concentration signals were detected much earlier, and the time to reach the upper detection limit was much shorter, for HIPG than for commonly used porous matrixes such as melamine foam (MF) and activated carbon fiber felt (ACFF), indicating a smaller internal diffusion resistance for HIPG. The high degree of pore interconnectivity enabled the water vapor transport to occur in a nearly straight line, which also meant a low tortuosity. Together with high porosity, the internal structure of HIPG lowered the water vapor diffusion resistance significantly (Supplementary Note 1).

The energy dispersive spectrometer (EDS) results illustrated that TiN nanoparticles and LiCl were uniformly distributed in the HIPG (Supplementary Fig.  9 ), which could cut down the heat losses by localized heating and improve the sorption kinetics of HIPG. The Fourier transform infrared (FTIR) spectra for HPMC showed a peak at 3450 cm −1 due to the hydroxyl group (–OH) stretching. In HIPG, the peak resulting from –OH stretching moved to a lower wavenumber, indicating the enhancement of the hydrogen bonding network, which was attributed to the effect of LiCl on the interactions between the polymer network and water molecules (Fig.  1H ). The X-ray diffraction (XRD) pattern shown in Fig.  1I indicated that the chemical desorption of LiCl·H 2 O to LiCl occurred when the HIPG was heated to 90 °C, that is, the HIPG completely desorbed.

Water sorption-desorption performance assessment

Figure  2A presents the water sorption isotherms of PAAS and HIPG. The three-stage water sorption process of LiCl, namely the chemisorption of LiCl, the deliquescence of LiCl·H 2 O, and the absorption of LiCl solution contributed to the high water uptake of HIPG over a wide RH range. In the initial phase of the sorption process, water vapor is adsorbed on the salt crystal’s surface through the hydration effect. Then the salt at the surface gradually dissolves in the captured water. The water vapor continues to be adsorbed, driven by the water vapor pressure difference between the surrounding air and the liquid film at the surface, until the salt solution is completely formed. In the salt solution, each cation and anion is surrounded by a spherical hydration shell through the coordination effect or electrostatic interaction with water molecules. The coordinated water in the hydration shell can be further connected via intermolecular hydrogen bonding to form a dynamic network. The water vapor sorption capacity of HIPG below 60% RH mainly came from the LiCl. The water sorption of PAAS mainly consisted of the physisorption of hydrophilic functional groups through hydrogen bonds and subsequent multilayer adsorption, which generally occurred after the RH reached a certain value, leading to the sudden increase in water sorption capacity of PAAS at 70% RH 46 . The PAAS showed the water uptake of 1.80 and 3.60 g g −1 under RH of 80% and 90% at 25 °C, enlarging the water sorption capacity of the HIPG under high RH conditions such as nighttime or humid weather. As a result, the HIPG delivered excellent water uptake of 0.64, 1.01, 1.39, 2.03, 3.27, 6.83 g g −1 under RH of 15%, 30%, 45%, 60%, 75%, 90% at 25 °C, indicating that HIPG has wide climate adaptability for SAWH.

Figure 2

A The water sorption isotherms of HIPG and PAAS at 25 °C. B The dynamic water sorption processes of bulk HIPG at the same temperature of 25 °C and different RHs of 30%, 45%, 60%, 75%, and 90%. C The experimental and simulated results of dynamic sorption processes for the single-sided sorption and quasi-double-sided sorption under 25 °C and 60% RH. The water uptake was normalized by dividing the equilibrium sorption capacity. D The UV–vis–NIR absorption spectrum of HIPG. E The solar-driven desorption processes of HIPG with sorption equilibrium at 25 °C, 60% RH under the same ambient temperature and RH and different solar irradiation intensities. F The water desorption isobars of HIPG at water vapor pressure of 1.90 and 3.17 kPa. G Thirty water sorption–desorption cycling tests of HIPG at 25 °C, 60% RH (1.90 kPa) for sorption and 90 °C, 4.2% RH (3.17 kPa) for desorption. H The comparison of water sorption performances of bulk HIPG and other state-of-the-art salt-based composite sorbents 20 , 24 , 29 , 31 , 44 , 48 , 49 . I Psychrometric chart showing the water desorption-condensation processes for the continuous SAWH device with and without solar concentration.

The dynamic water sorption processes of the HIPG are shown in Fig. 2B. We employed bulk HIPG samples with a scaled-up dimension of 10 × 5 × 0.3 cm³ to conduct the water vapor sorption tests; thus, the heat and mass transfer were closer to practical application scenarios compared with the milligram-scale sorbent samples in synchronous thermal analyzer (STA) tests (Supplementary Table 1). Benefiting from the reduced internal diffusion resistance brought by the interconnected porous structure, the bulk HIPG showed ultrafast water sorption kinetics, delivering initial sorption rates of 0.03, 0.08, 0.09, 0.14, 0.18, 0.20 g g −1 min −1 under RH of 15%, 30%, 45%, 60%, 75%, 90%, respectively (Supplementary Fig. 10), and almost reaching the sorption equilibrium within 60, 40, 60, 100, 150 min for RH of 15%, 30%, 45%, 60%, 75%, respectively. Besides, 78.1%, 93.2%, 80.5%, and 76.4% of the equilibrium sorption capacity of the bulk HIPG could be achieved within 30 minutes under 25 °C and RH of 15%, 30%, 45%, and 60%, respectively, which helps accelerate the water harvesting cycle and improve the water productivity.

In addition to improving the sorption kinetics of the HIPG through pore engineering, we also optimized the structural arrangement of the sorption bed to enhance the sorption rate of the HIPG. The sorption bed was built by attaching the dust-free paper loaded with HIPG to a polytetrafluoroethylene (PTFE) mesh conveyor belt. The breathability of the dust-free paper and mesh conveyor belt allowed the HIPG to capture water vapor from both its front and back, expanding the contact area between the HIPG and water vapor. The sorption kinetic curves for both single-sided sorption and quasi-double-sided sorption are recorded in Fig. 2C. The quasi-double-sided sorption case delivered an ultrahigh water uptake of 1.55 and 1.79 g g −1 under 25 °C and 60% RH within 30 and 60 minutes, 35.2% and 22.1% higher than the single-sided one, respectively. Therefore, the quasi-double-sided sorption design enhanced the sorption kinetics of the sorption bed, benefiting the daily water yield of the atmospheric water harvester. Moreover, we simulated the single-sided and quasi-double-sided water sorption behaviors of the HIPG at 25 °C and 60% RH based on the two-concentration model (Supplementary Note 2), which were in satisfactory agreement with the experimental data (Fig. 2C). The simulation results for the quasi-double-sided water sorption process of the HIPG at 25 °C and 30% RH also agreed well with the experimental results, as shown in Supplementary Fig. 12, verifying the accuracy of the model.
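The exact two-concentration model is specified in Supplementary Note 2 of the paper; the sketch below integrates a generic two-resistance stand-in (external film plus internal exchange) merely to illustrate why exposing both faces, modeled crudely as doubling the film coefficient, speeds up early uptake. All rate constants are illustrative assumptions, not fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sorption(t, w, w_eq, k_surf, k_int):
    """Toy two-resistance stand-in for the two-concentration model:
    w[0] is the uptake of the exposed (surface) half of the gel and
    w[1] that of the inner half; vapor crosses the external film
    (k_surf) and is then exchanged internally (k_int)."""
    w_s, w_b = w
    dws = k_surf * (w_eq - w_s) - k_int * (w_s - w_b)
    dwb = k_int * (w_s - w_b)
    return [dws, dwb]

w_eq = 2.03  # equilibrium uptake at 25 degC, 60% RH (g/g, reported above)
# Illustrative rate constants (1/min); double-sided exposure is modeled
# crudely as doubling the external film coefficient.
for label, k_surf in [("single-sided", 0.05), ("quasi-double-sided", 0.10)]:
    sol = solve_ivp(sorption, (0.0, 120.0), [0.0, 0.0],
                    t_eval=[30.0, 60.0], args=(w_eq, k_surf, 0.08))
    uptake = sol.y.mean(axis=0)  # average of the two halves (g/g)
    print(f"{label}: uptake at 30/60 min = {np.round(uptake, 2)} g/g")
```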

Figure 2D shows the ultraviolet–visible–near infrared (UV–vis–NIR) absorption spectrum of the HIPG. Due to the localized surface plasmon resonance effect of the photothermal component TiN nanoparticles 47, the HIPG exhibited extremely high absorbance of up to 98% throughout the solar spectrum and possessed high photothermal conversion efficiency, which benefited fast water desorption. The water desorption profiles under various solar irradiation intensities were investigated, as shown in Fig. 2E. Benefiting from the excellent photothermal performance of the TiN nanoparticles, the HIPG could release 87.7%, 94.1%, 98.5%, and almost 100% of the equilibrium sorption capacity of water within 30, 15, 10, 10 min under 1, 2, 3, 4 sun irradiation, respectively. Especially for the light-concentrated cases, ultrafast desorption kinetics were obtained owing to the enormous water vapor pressure difference between the salt solution and the surrounding air caused by the high temperature of the HIPG. Meanwhile, the high temperature of the HIPG resulted in an extremely low RH of the local air around the HIPG, leading to a lower equilibrium sorption capacity. The water desorption isobars of the HIPG were evaluated at two typical water vapor partial pressures of 1.90 kPa (25 °C, 60% RH) and 3.17 kPa (25 °C, 100% RH), as shown in Fig. 2F. The former corresponded to the open desorption case because the moisture content of the air throughout the day is stable, while the latter matched the closed desorption case, considering that the water vapor partial pressure of the air around the sorbent approximately equals that of the air close to the condensing surface when reaching desorption equilibrium, with an assumed condensation temperature of 25 °C. The water desorption could be divided into three stages: the desorption of water bonded with the polymer matrix together with water evaporation from the LiCl solution, the crystallization of LiCl hydrate, and the chemical desorption of LiCl hydrate.

The cycling stability of sorbents is of great concern for AWH, especially for long-term practical applications. Therefore, we evaluated the cycling stability of the HIPG under two kinds of sorption-desorption conditions by performing sorption and desorption cycles thirty times (Fig.  2G and Supplementary Fig.  13 ). One was at 25 °C, 60% RH for sorption, and 90 °C, 3.17 kPa (corresponding to the condensation temperature of 25 °C) for desorption. The other was at 25 °C, 90% RH for sorption, and 90 °C, 1.90 kPa for desorption. The results indicated that there was almost no attenuation in the equilibrium sorption capacity of the HIPG after experiencing dozens of cycles. Even under RH of up to 90%, the HIPG didn’t show any solution leakage after multiple sorption-desorption cycling tests, which was attributed to the swelling characteristic of HIPG and capillary force caused by the porous structure. Additionally, the sorption-desorption kinetic characteristics were also stable after dozens of cycles, further demonstrating the excellent cycling stability of HIPG (Supplementary Figs.  14 and 15 ).

Figure 2H and Supplementary Fig. 16 present the water sorption performance comparison of bulk HIPG and other state-of-the-art salt-based composite sorbents under 30% and 60% RH 20, 24, 29, 31, 44, 48, 49. Benefiting from the higher salt content, the bulk HIPG exhibited higher water sorption capacity than other state-of-the-art salt-based composite sorbents. Additionally, the normalized sorption kinetic comparisons showed that the bulk HIPG possessed faster water sorption kinetics than other state-of-the-art salt-based composite sorbents (Supplementary Fig. 17), which was ascribed to the lower internal water vapor diffusion resistance resulting from the interconnected porous structure of HIPG.

To bridge the gap in water harvesting performance between the sorbents and SAWH systems, we proposed an efficient and cost-effective strategy to enhance water desorption and condensation synergistically through solar concentration. As shown in the psychrometric chart (Fig. 2I), with a larger solar irradiation flux input, the desorption rate and the temperature of the desorbed water vapor increase, and the moisture content of the humid air in the desorption chamber rises, thereby raising the dew point temperature of the humid air compared to the situation without solar concentration. Meanwhile, an aluminum condenser with fins was used to reduce the temperature lift of the condenser during the condensation process by enlarging its heat capacity and enhancing the convective heat transfer between the condenser and the environment. When the humid air is cooled to its dew point temperature, the temperature difference between the humid air and the condenser surfaces is larger for the solar concentration case, thus accelerating the condensation heat release.
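The dew-point argument can be made quantitative with the standard Magnus approximation for saturation vapor pressure, as in the sketch below; the 19.7 kPa value near the sorbent comes from the simulation results reported later (Fig. 4G), while the no-concentration vapor pressure is an assumed placeholder.

```python
import numpy as np

def dew_point(p_v_kpa: float) -> float:
    """Dew point (degC) from vapor pressure (kPa), inverting the Magnus
    approximation p_sat(T) = 0.61094 * exp(17.625 T / (T + 243.04))."""
    g = np.log(p_v_kpa / 0.61094)
    return 243.04 * g / (17.625 - g)

# The ~19.7 kPa value near the sorbent is from the simulation reported
# below (Fig. 4G); the no-concentration figure is an assumed placeholder.
for label, p_v in [("without concentration (assumed ~5 kPa)", 5.0),
                   ("4-sun concentration (~19.7 kPa)", 19.7)]:
    print(f"{label}: dew point = {dew_point(p_v):.1f} degC")
# With concentration the dew point (~60 degC) sits far above the ~40 degC
# condenser surface, so condensation proceeds with a large driving force.
```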

Indoor SAWH demonstration

According to the fast sorption and desorption kinetics of the HIPG, a solar-wind coupling driven continuous SAWH device was designed, as shown in Fig.  3A . This SAWH device was composed of a desorption chamber, a condenser, a sorption bed, and gearing. The sorbent was heated by solar irradiation and released high-temperature water vapor. Then, the water vapor was transferred into the condenser to be cooled and condensed into liquid water. The sorbent performed cyclic movement under the drive of the gearing to achieve continuous sorption and desorption. Considering the mismatch between the sorption and desorption rates (Supplementary Fig.  18 ), the sorption area was expanded by adopting two rollers spaced at a certain distance to hold the conveyor belt-type sorption bed. Benefiting from the high scalability of the preparation method of HIPG, the HIPG with a length of ~79 cm could be prepared in a single synthesis process and shaped into desired patterns (Supplementary Fig.  19 ).

Figure 3

A The structure schematic of the continuous SAWH device. B The adhesion properties of HIPG on various substrates, including aluminum, stainless steel, silica glass, acrylic, wood, and PTFE (weights: 100 g for the large one, 20 g for the small one). C The heat transfer analysis of the continuous SAWH device during the desorption-condensation processes.

To achieve continuous SAWH in a rotatable form, a key requirement is that the sorbent be firmly attached to a substrate undergoing cyclic motion. The HIPG exhibited strong adhesion to various substrates such as acrylic, silica glass, stainless steel, aluminum, wood, and PTFE (Supplementary Figs. 20 and 21). An acrylic plate with a cross-section of 2 × 1 cm² in contact with the HIPGs attached to the aforementioned substrates was capable of suspending weights of 120 g, demonstrating the strong adhesion of the HIPG to various substrates (Fig. 3B). The strong adhesive force came from the numerous hydrogen bonds formed between the HIPG and non-metallic substrates, the coordination bonds formed between the carboxylate groups of the HIPG and metal ions in metallic substrates 50, and the mechanical interlocking effect with rough surfaces 51, achieving secure attachment of the HIPG with no adhesive.

As shown in Fig. 3C, the mesh conveyor belt made of PTFE, with a thermal conductivity as low as 0.28 W m −1 K −1, could effectively reduce the conduction heat loss from the sorbent to the conveyor belt (Q cond-sub) and ensure that more energy was utilized for water desorption. The rollers in contact with the conveyor belt were made up of a hollow circular tube and acrylic plugs at both ends, and their interiors were pumped to less than 5 Pa to further suppress the conduction heat loss. Moreover, the desorption chamber was made of double-layer, highly transparent acrylic, with a vacuum (<5 Pa) between the two layers, which could cut down the convection and conduction heat transfer from the interlayer to the outer layer (Q cond/conv) and raise the temperature of the interlayer, thus diminishing the heat loss of the sorbent caused by the convective and radiative heat transfer between the sorbent and the transparent cover (Q conv/rad-cov). Different from many previous studies that directly used the sunlight transmission cover as the condensing surface, the device in this work adopted a split design to separate the condenser from the desorption chamber with the help of a PV-driven turbofan, which could prevent the condensing surfaces from being heated by radiative heat emitted by the sorbent and avoid the optical loss (~30%) caused by mist from condensation (Supplementary Fig. 23). The PV-driven turbofan could not only transmit the desorbed water vapor from the desorption chamber to the condenser efficiently but also enhance the condensation heat transfer (Q conv-cond) between the water vapor and the condenser by enlarging the convective heat transfer coefficient through airflow disturbance. A small PV panel was placed above the condenser on a thermally insulating support frame, which could both power the turbofan and prevent the condenser from being heated by sunlight.

To realize convenient SAWH for off-grid and distributed scenarios, natural wind was used to drive the gearing, achieving continuous operation of the device and accelerating the sorption kinetics of the sorption bed, consequently increasing the daily water yield. We simulated the effect of the wind speed on the dynamic water sorption kinetics of the sorption bed under 25 °C and 60% RH by employing the two-concentration model (Supplementary Fig. 24). The results indicated that the sorption rate of the sorption bed improved with increasing wind speed. However, when the wind speed exceeded 2 m s −1, the sorption rate tended to plateau, as the convective transport resistance on the external surface R surf played little role in the sorption kinetics of the sorption bed.

Figure 4A presents a digital photo of the solar-wind coupling driven continuous SAWH device during the indoor tests. The duration of a single water collection experiment was set at 8 h according to the general sunrise and sunset routine. To ensure the comparability of experimental results, experiments with the same cycle time were conducted under similar ambient temperature and RH conditions (Supplementary Fig. 25, 24–27 °C, 55–60% RH). As shown in Fig. 4B, as the solar irradiation intensity increased from 1 to 4 kW m −2, the temperature difference between the sorbent in the desorption region and the condenser outer surface widened from 23 to 55 °C, benefiting from the separate arrangement of the desorption chamber and condenser and leading to a larger driving force for water vapor condensation (Supplementary Fig. 26 and Fig. 4E). Figure 4C presents the water production performance of this continuous SAWH device under various solar irradiation intensities. A total of 60.8 g of water was collected within 8 h under 4 suns (Supplementary Figs. 28 and 29), corresponding to an ultrahigh water production rate of 1.86 L water m −2 h −1.

Figure 4

A The digital photo of the continuous SAWH device. 1-support frame, 2-PV panel, 3-water collector, 4-shading panel, 5-shading cotton. Scale length: 5 cm. B The temperature of the sorbent and condenser outer surface after reaching the thermal equilibrium during the water production tests under different solar intensities. Error bar: standard deviation (SD). C The mass changes of the collected water under different solar irradiation intensities over 8-hour indoor tests. D The mass changes of the collected water with different cycle times under 4 suns over 8-hour indoor tests. E The temperature evolutions of different positions of the continuous SAWH device with a cycle time of 30 minutes under 4 suns during the 8-hour indoor test. F and G The simulation results of the water desorption-condensation processes. The temperature ( F ) and water vapor partial pressure ( G ) distributions in the device. H The water yield and water production rate of the continuous SAWH device under 4 suns over seven-day cycles. I Water collection performance comparison of our work and other solar-driven SAWH devices 6 , 7 , 15 , 24 , 25 , 52 , 53 , 54 .

Besides the solar irradiation intensity, the cycle time is also a key factor affecting the water yield of the continuous SAWH device. We studied the effect of the cycle time on the water yield of the continuous SAWH device under 4-sun irradiation. The results shown in Fig. 4D indicated that when the cycle time was 30 min (i.e., the corresponding rotational speed of the sorption bed was 2 r h −1), the continuous SAWH device delivered the highest water yield of 4.05 g g −1 within 8 h. A shorter cycle time corresponded to a lower desorption temperature and a smaller temperature difference for condensation, thus reducing the desorption and condensation rates. When the cycle time was too long, the driving force of the desorption process gradually decreased and the desorption rate dropped significantly, consequently affecting the water production. Only when the sorption, desorption, and condensation reached a dynamic equilibrium could the water yield of the continuous SAWH device be directly proportional to time. Therefore, each solar irradiation intensity corresponds to an optimal cycle time that maximizes the daily water yield.

Figure 4E records the temperature evolution at various locations of the continuous SAWH device with a cycle time of 30 min under 4 suns. It took approximately one hour for the device to reach thermal equilibrium. A maximum sorbent temperature of 95.5 °C was achieved, attributable to the solar concentration and good thermal insulation of the device. With the advanced thermal design of the condenser, the outer surface of the condenser reached a maximum temperature as low as ~40 °C. Benefiting from the enhanced convection heat transfer caused by the turbofan, the temperature of the air inside the condenser (the location shown in Fig. 3A) rose to only about 51 °C. Additionally, the temperature of the sorbent located in the sorption region, measured by an IR camera, was only 5 K higher than the ambient temperature (Supplementary Fig. 30) due to the shading cotton, causing little impact on the water sorption performance of the sorbent.

To better investigate the heat and mass transfer process inside the device, we employed the COMSOL software to simulate the desorption-condensation process of the device (Supplementary Note 2). The simulated temperature and water vapor distributions inside the device are shown in Fig. 4F, G, respectively, and were in good agreement with the experimental results. Due to the separate arrangement of the desorption chamber and the condenser, a temperature difference of up to 55 °C was maintained between the sorbent and the condenser, leading to higher desorption and condensation rates (Fig. 4F). Moreover, the highest water vapor partial pressure was 19.7 kPa near the sorbent, while the water vapor partial pressure close to the condenser was about 6.5–9.3 kPa, which generated a strong driving force to promote the water vapor transport from above the sorbent to the condenser (Fig. 4G). With the synergetic heat and mass transfer enhancement and optimized cycle time, the device delivered a thermal efficiency as high as 25.7% including the PV panel power for driving the turbofan (i.e., 31.7% without the inclusion of the turbofan power), which is twice that of other solar-driven SAWH devices based on solid sorbents (Supplementary Table 2).
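As a sanity check on this figure, the back-of-envelope sketch below recomputes the efficiency as the latent heat of the collected water divided by the concentrated solar input over the 8-hour test; the desorption area is inferred from the reported per-area rate and the latent heat is an assumed constant, which approximately reproduces the 31.7% quoted without the turbofan power.

```python
# Back-of-envelope check of the indoor thermal efficiency under 4 suns.
# Assumptions: latent heat taken as a constant 2.45 kJ/g; the desorption
# area is inferred from the reported per-area rate rather than measured.
m_water = 60.8                      # g collected in 8 h (reported above)
h_fg = 2.45                         # kJ/g, assumed latent heat of vaporization
rate = 1.86                         # L m^-2 h^-1, reported production rate
area = (m_water / 8.0) / (rate * 1000.0)   # m^2  (~0.0041 m^2)
q_solar = 4.0 * area * 8.0 * 3600.0        # kJ from 4 kW/m^2 over 8 h
eta = m_water * h_fg / q_solar
print(f"thermal efficiency excluding turbofan power: {eta:.1%}")  # ~31.6%
```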

To test the stability of the HIPG and the continuous SAWH device, we conducted indoor experiments under comparable ambient temperature and RH conditions (23–27 °C, 54–61% RH) for 7 consecutive days. As shown in Fig.  4H , the continuous SAWH device delivered a stable water yield ranging from 57.5 to 64.5 g within 8 h, corresponding to the water production rate of 1.76–1.98 L water  m −2  h −1 , demonstrating the excellent cycling stability of the whole device. We also conducted the one-sun water collection experiments to demonstrate the superiority of the structural design further (Supplementary Fig.  32 ). The continuous SAWH device generated a high water production rate of 0.264 L water  m −2  h −1 and high thermal efficiency of 12.1% (18.0% without including the turbofan power) with a cycle time of 1.5 h, superior to the previously reported SAWH devices (Fig.  4I ) 6 , 7 , 15 , 24 , 25 , 52 , 53 , 54 .

Practical outdoor SAWH tests

To further demonstrate the performance of this continuous SAWH device in practical applications, outdoor tests were conducted at Shanghai Jiao Tong University. We adopted the natural wind to drive the gearing to achieve continuous operation of the device. After practical testing, we found that a wind speed larger than 1 m s −1 could drive the conveyor belt-type sorption bed to rotate continuously. Supplementary Fig. 33 presents the setup of the continuous SAWH device in the outdoor tests. We selected the period from 10:00 to 13:00 to conduct outdoor tests due to the relatively stable solar irradiation intensity (Fig. 5A). The ambient temperature was about 27–29 °C, and the environmental RH fluctuated between 25.6% and 33.5%. The solar irradiation intensity received by the sorbent was about 3.5–4 suns with the concentration of the Fresnel lens. As shown in Fig. 5B, it took about 50 min for the condenser temperature to rise from the ambient temperature. Then, the temperatures of the air inside the condenser and the condenser's outer surface fluctuated around 50 and 39 °C, respectively, due to the variable cycle time caused by variable wind speed. As a result, the solar-wind coupling driven continuous SAWH device delivered a water yield of 0.98 g g −1 within 3 h and a record high average water production rate of 1.20 L water m −2 h −1 under ~3.5–4 sun irradiation without any active energy input (Fig. 5C).

Figure 5

A The evolutions of natural solar irradiation intensity (without concentration), wind speed, ambient temperature, and RH over the 3-hour outdoor test. B The temperature evolutions of the air inside condenser, the condenser outer surface, and the environment during the 3-hour outdoor test. C The accumulated specific water yield and water production rate over the 3-hour outdoor test. Error bar: SD. D The comparison of water production rate of our continuous SAWH device and other previously reported state-of-the-art solar-driven continuous SAWH devices 13 , 15 , 24 .

To further confirm the cycling stability of the gel sorbent and the SAWH device in outdoor environments, we conducted outdoor water collection experiments for 15 days in May 2024. The operation time ranged from 5.5 to 7.5 h depending on the practical weather conditions. Supplementary Figs. 34 and 35 record the daily weather data and the temperature curves of the device during the daytime water collection process for a consecutive week, respectively. The daily water yield and daily average water production rate were 3.5–8.9 L water m −2 day −1 and 0.54–1.18 L water m −2 h −1, respectively. The water production rate of the device was stable under fine weather conditions, demonstrating the reliable stability of the gel sorbent and device (Supplementary Fig. 36). We compared the water production performance of this device with other previously reported state-of-the-art continuous SAWH devices based on the daily water yield (Fig. 5D). The results showed that the water production rates in terms of both sorbent weight and device desorption area were much higher than those of previously reported solar-driven continuous SAWH devices without solar concentration (the environmental conditions are shown in Supplementary Table 2). Additionally, this device also delivered higher thermal efficiency than other solar-driven SAWH devices based on solid sorbents, demonstrating the superiority of our material selection, solar-wind coupling driven strategy, and device design. We also detected the concentration of possible ions in the collected water by ion chromatography (IC). The results showed that the quality of the collected water met the drinking water standard set by the World Health Organization (Supplementary Fig. 37).

To further investigate the application potential of this solar-wind coupling driven continuous SAWH device, we plotted the global annual average daily direct normal solar irradiation distribution and the global annual average near-ground wind speed distribution (Supplementary Fig. 38). Regions such as the vast inland areas of Asia, northern and southern Africa, Australia, central North America, and central and southern South America combine abundant solar resources with annual average wind speeds above 1 m s−1, demonstrating the promising potential of solar-wind coupling driven atmospheric water harvesting for freshwater supply worldwide. In addition, abundant wind resources at continental margins and above the sea surface offer ideal geographical conditions for deploying large-scale solar-wind coupling driven continuous SAWH devices.

We reported a facile and scalable strategy to prepare a hygroscopic interconnected porous gel (HIPG) with fast sorption-desorption kinetics, strong adhesion, and reliable water retention, well suited to continuous atmospheric water harvesting. The interconnected porous structure, with its high pore volume and hierarchical pores, effectively reduced the water vapor diffusion resistance within the HIPG, accelerating water vapor transport and yielding fast water capture and release. Consequently, the HIPG showed an ultrahigh water uptake of 6.83 g g−1 at 90% RH, captured 93.2%, 80.5%, and 76.4% of its equilibrium sorption capacity within 30 min at 25 °C and 30%, 45%, and 60% RH, respectively, and released 87.7% of its equilibrium water uptake within 30 min under 1 sun irradiation. We further designed a solar-wind coupling driven continuous SAWH device with enhanced heat and mass transfer. An efficient solar concentration strategy was proposed to realize synergetic water desorption and condensation enhancement, accelerating the AWH cycle and improving water productivity. Wind energy was introduced as the driving force for the device and also sped up the water sorption kinetics of the sorption bed. As a result, the device delivered an extraordinary working performance of 4050 mLwater kgsorbent−1 day−1, 14.9 Lwater m−2 day−1, and a thermal efficiency as high as 25.7% in indoor experiments (~57% RH), and 3.5–8.9 Lwater m−2 day−1 in outdoor experiments with solar concentration and no other energy consumption, superior to previously reported SAWH devices. Our work provides a potential approach to a highly efficient and sustainable clean water supply for off-grid and arid regions.

Chemicals and materials

Hydroxypropyl methylcellulose (HPMC, viscosity: 100,000 mPa s, Macklin), sodium polyacrylate (PAAS, average molecular weight: 5,000,000–7,000,000, 80 mesh, ACMEC), lithium chloride (LiCl, 99%, Aladdin), and titanium nitride (TiN) nanoparticles (99.9% metals basis, 20 nm, Macklin).

Synthesis of HIPG

The proportions of the chemical components of the HIPG were optimized to realize efficient continuous SAWH (details in Supplementary Note 6). In a typical synthesis, 540 mg TiN powder and 6 g LiCl powder were first dispersed in 50 mL deionized (DI) water by ultrasonic treatment for 0.5 h. Then, 1.5 g HPMC was slowly added to the suspension, and the solution was mechanically stirred at 800 rpm for 15 min to foam it. Afterward, 3 g PAAS was slowly added, and the mixture was mechanically stirred at 1500 rpm for 1 h so that the precursor solution was well foamed. The as-prepared foam gel was poured into a mold with dimensions of 100 × 50 × 3 mm and dried at 90 °C for 3 h to obtain the HIPG.
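
Because the water evaporates during the final drying step, the dry-basis composition of the HIPG follows directly from the recipe masses above; a quick sketch of that arithmetic:

```python
# Dry-basis mass fractions of the HIPG recipe (water evaporates during drying).
recipe_g = {"LiCl": 6.0, "PAAS": 3.0, "HPMC": 1.5, "TiN": 0.54}

total = sum(recipe_g.values())
for component, grams in recipe_g.items():
    print(f"{component}: {grams / total:.1%} of dry solids")
# LiCl, the hygroscopic salt, dominates at roughly 54% of the dry mass.
```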

Characterizations

The structures and element distribution of the HIPG were investigated by scanning electron microscopy (SEM; JSM-7800F, JEOL, China). The 3D microstructure of the HIPG was characterized by 3D X-ray microscopy (micro-CT; Xradia 520 Versa, Carl Zeiss, Germany), and the pore size distribution was determined by analyzing the micro-CT images. The interactions between the salt and the polymeric networks were characterized by Fourier transform infrared spectroscopy (FTIR; Nicolet 6700, Thermo Fisher, America). The chemical composition and the salt state of the HIPG were analyzed by X-ray diffraction (XRD; D8 Advance, Bruker, Germany) at a scanning rate of 5° min−1. Water sorption isotherms were measured with a surface area and porosity analyzer (ASAP 2020 PLUS HD88, Micromeritics, America). The absorbance of the HIPG over 250–2500 nm was measured with a UV–vis–NIR spectrophotometer (Lamda 950, China).

Adhesion tests of HIPG

HIPG disks (Φ7 × 0.5 cm) were coated onto 8-cm-diameter discs made of aluminum, stainless steel, silica glass, acrylic, wood, and PTFE, respectively. The discs with fully dried HIPG coatings were then placed in a spin coater and held on the sample chuck by vacuum. The adhesion of the dry HIPG to each substrate was characterized by spinning at different rotational speeds for one minute, and the morphology of the samples before and after rotation was recorded photographically. The samples were then transferred to a humidity chamber (KMF115, BINDER, Germany) to capture water vapor at 25 °C and 60% RH for one hour. The moisture-laden samples were returned to the spin coater and again rotated at different speeds for one minute, with the morphology before and after rotation recorded photographically.
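
As a rough guide to what the spin test probes, the inertial shear load that rotation imposes on a thin coating can be estimated as τ ≈ ρ t ω² r, the coating's own centripetal load per unit bonded area at radius r. The sketch below uses the stated coating geometry but an assumed gel density, which the paper does not report:

```python
import math

# Rough estimate of the inertial shear stress a spin test imposes on a thin
# coating: tau ~ rho * t * omega^2 * r (the coating's centripetal load per
# unit bonded area at radius r). The gel density is an assumed value.

def spin_shear_stress_pa(rpm: float, radius_m: float,
                         density_kg_m3: float, thickness_m: float) -> float:
    omega = rpm * 2.0 * math.pi / 60.0  # angular speed in rad/s
    return density_kg_m3 * thickness_m * omega**2 * radius_m

# Assumed ~1200 kg/m^3 gel density; 0.5 cm coating at the edge of a 7 cm disk.
for rpm in (500, 1000, 2000, 4000):
    tau = spin_shear_stress_pa(rpm, radius_m=0.035,
                               density_kg_m3=1200.0, thickness_m=0.005)
    print(f"{rpm:>5} rpm -> ~{tau:8.0f} Pa")
```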

In addition, discs of the same substrate materials coated with dried HIPG (Φ7 × 0.2 cm) were placed in the humidity chamber at 25 °C and 60% RH to adsorb water vapor for 1 h. Afterward, an acrylic plate with a cross-section of 2 × 1 cm was brought into contact with the HIPGs, and the other end of the plate was loaded with a 120 g counterweight.

Water sorption-desorption tests of HIPG

First, bulk HIPG samples with a scaled-up dimension of 10 × 5 × 0.3 cm were dried in an oven at 90 °C for 4 h. The dried samples were then transferred to the humidity chamber at a constant temperature of 25 °C and RHs of 30%, 45%, 60%, 75%, and 90% for 480 min. In the single-sided sorption test, the HIPG was coated on an acrylic substrate and then tested in the humidity chamber. In the quasi-double-sided sorption test, HIPG samples coated on dust-free paper were placed on a bracket with a regular pattern of hexagonal holes. The weight changes of the bulk HIPGs were recorded with an analytical balance.
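
Sorption curves of this kind are often summarized with a linear-driving-force (LDF) model, m(t)/m_eq = 1 − exp(−kt); the sketch below backs out k from a single fractional-uptake reading, using the 30-min figures quoted earlier in the paper purely as example inputs:

```python
import math

# Linear-driving-force (LDF) sorption model: m(t)/m_eq = 1 - exp(-k t).
# Given one fractional-uptake reading, the rate constant k follows directly.

def ldf_rate_constant(fraction: float, minutes: float) -> float:
    """k (per minute) such that 1 - exp(-k * minutes) == fraction."""
    return -math.log(1.0 - fraction) / minutes

# Example inputs: fractional uptake reached within 30 min at 30%, 45%, and
# 60% RH, as quoted for the HIPG earlier in the paper.
for rh, frac in ((30, 0.932), (45, 0.805), (60, 0.764)):
    k = ldf_rate_constant(frac, 30.0)
    print(f"{rh}% RH: k ~ {k:.3f} min^-1 "
          f"(half-saturation in ~{math.log(2) / k:.1f} min)")
```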

The indoor desorption tests were carried out in a constant climate chamber equipped with a solar simulator. To measure the water desorption performance of the HIPG under various solar intensities, bulk HIPG samples saturated at 25 °C and 60% RH were placed in the chamber to release water vapor at 25 °C and 60% RH under irradiation intensities of 1–4 suns. The analytical balance recorded the mass changes of the samples.

The cycling sorption-desorption tests of the HIPG were carried out in the humidity chamber. In the first type of cycle, the HIPG samples captured water vapor at 25 °C and 60% RH for 180 min and released water at 90 °C and 3.17 kPa (corresponding to a condensation temperature of 25 °C) for 45 min, repeated for 30 cycles. In the second type, the samples captured water vapor at 25 °C and 90% RH for 720 min and released water at 90 °C and 1.90 kPa for 50 min, likewise for 30 cycles.
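
The desorption pressures quoted here are simply the saturation vapor pressures of water at the target condensation temperatures; a sketch using the Arden Buck correlation (one of several standard correlations) reproduces the 3.17 kPa setting for 25 °C:

```python
import math

# Arden Buck correlation for the saturation vapor pressure of water (kPa);
# one of several standard correlations valid over this temperature range.

def p_sat_kpa(t_celsius: float) -> float:
    return 0.61121 * math.exp(
        (18.678 - t_celsius / 234.5) * (t_celsius / (257.14 + t_celsius))
    )

print(f"25.0 C -> {p_sat_kpa(25.0):.2f} kPa")  # ~3.17 kPa: first cycle's setting
print(f"16.7 C -> {p_sat_kpa(16.7):.2f} kPa")  # ~1.90 kPa: second cycle's setting
```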

AWH performance tests of the solar-wind coupling driven continuous SAWH device

The indoor water harvesting tests were conducted with a solar simulator, with the ambient temperature and RH controlled by the heating, ventilation, and air conditioning (HVAC) system. Because there was no wind indoors, an electric motor was connected to the shaft of the upper roller through a conveyor belt to drive the roller and the sorption bed; different cycle times were achieved by adjusting the motor's output speed. Solar concentration was realized with a Fresnel lens. The water production performance of the continuous SAWH device was measured at a fixed cycle time of 30 min under 1–4 sun irradiation. The temperatures at different positions of the device were measured with thermocouples. To determine the temperature of the moving HIPG in the desorption region, we inserted a thermocouple into the sorbent at a fixed position before it entered the desorption region and let it travel through the entire region together with the sorbent, thereby obtaining the sorbent's temperature change over time in the desorption region. The environmental conditions were recorded with a thermo-hygrometer.

For the outdoor water harvesting experiments, the natural solar irradiation and wind speed were recorded by a meteorological station, the environmental temperature and RH were measured with a thermo-hygrometer, and the temperatures at different positions of the device were measured with thermocouples. Fan blades converted wind energy into mechanical energy to drive the gearing. To resolve the mismatch between the high rotation speed of the fan blades and the slow rotation speed required by the device, we adopted a gear reducer: the fan blades were fixed on the input shaft of the reducer, and its output end was matched to the shaft of the upper roller through a coupling. By tuning the reduction ratio, the rotation speed of the device could be brought close to the optimal value. Solar concentration was again realized with a Fresnel lens.
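
The reducer sizing described here is a straightforward speed-matching calculation between the fast fan shaft and the slow belt roller; a sketch with assumed fan speed, belt length, and roller diameter (the paper does not report these figures):

```python
import math

# Sizing the gear reducer that matches the fast fan shaft to the slow belt
# roller. All input numbers are assumed for illustration; the paper reports
# only that the reducer brought the belt speed near its optimal value.

fan_rpm = 120.0            # assumed fan-blade speed at ~1 m/s wind
cycle_time_min = 30.0      # target time for one full belt circuit
belt_length_m = 1.2        # assumed total belt (sorption bed) length
roller_diameter_m = 0.05   # assumed upper-roller diameter

belt_speed_m_per_min = belt_length_m / cycle_time_min
roller_rpm = belt_speed_m_per_min / (math.pi * roller_diameter_m)
reduction_ratio = fan_rpm / roller_rpm

print(f"roller speed: {roller_rpm:.3f} rpm")
print(f"required reduction ratio: ~{reduction_ratio:.0f}:1")
```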

Data availability

All the data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Information. Source data are provided with this paper.

References

1. Mekonnen, M. M. & Hoekstra, A. Y. Four billion people facing severe water scarcity. Sci. Adv. 2, e1500323 (2016).
2. Kalmutzki, M. J., Diercks, C. S. & Yaghi, O. M. Metal-organic frameworks for water harvesting from air. Adv. Mater. 30, e1704304 (2018).
3. Lord, J. et al. Global potential for harvesting drinking water from air using solar energy. Nature 598, 611–617 (2021).
4. Chu, C., Ryberg, E. C., Loeb, S. K., Suh, M.-J. & Kim, J.-H. Water disinfection in rural areas demands unconventional solar technologies. Acc. Chem. Res. 52, 1187–1195 (2019).
5. Jeon, I., Ryberg, E. C., Alvarez, P. J. J. & Kim, J.-H. Technology assessment of solar disinfection for drinking water treatment. Nat. Sustain. 5, 801–808 (2022).
6. Kim, H. et al. Water harvesting from air with metal-organic frameworks powered by natural sunlight. Science 356, 430–434 (2017).
7. Fathieh, F. et al. Practical water production from desert air. Sci. Adv. 4, eaat3198 (2018).
8. Hanikel, N. et al. Evolution of water structures in metal-organic frameworks for improved atmospheric water harvesting. Science 374, 454–459 (2021).
9. Hanikel, N. et al. MOF linker extension strategy for enhanced atmospheric water harvesting. ACS Cent. Sci. 9, 551–557 (2023).
10. Matsumoto, K., Sakikawa, N. & Miyata, T. Thermo-responsive gels that absorb moisture and ooze water. Nat. Commun. 9, 2315 (2018).
11. Nandakumar, D. K. et al. Solar energy triggered clean water harvesting from humid air existing above sea surface enabled by a hydrogel with ultrahigh hygroscopicity. Adv. Mater. 31, 1806730 (2019).
12. Yao, H. et al. Highly efficient clean water production from contaminated air with a wide humidity range. Adv. Mater. 32, 1905875 (2020).
13. Qi, H. et al. An interfacial solar-driven atmospheric water generator based on a liquid sorbent with simultaneous adsorption–desorption. Adv. Mater. 31, 1903378 (2019).
14. Wang, X. et al. An interfacial solar heating assisted liquid sorbent atmospheric water generator. Angew. Chem. Int. Ed. 58, 12054–12058 (2019).
15. Xu, J. et al. Ultrahigh solar-driven atmospheric water production enabled by scalable rapid-cycling water harvester with vertically aligned nanocomposite sorbent. Energy Environ. Sci. 14, 5979–5994 (2021).
16. Zhu, P. et al. 3D printed cellulose nanofiber aerogel scaffold with hierarchical porous structures for fast solar-driven atmospheric water harvesting. Adv. Mater. 36, e2306653 (2023).
17. Lu, H. et al. Tailoring the desorption behavior of hygroscopic gels for atmospheric water harvesting in arid climates. Adv. Mater. 34, 2205344 (2022).
18. Zhao, F. et al. Super moisture-absorbent gels for all-weather atmospheric water harvesting. Adv. Mater. 31, 1806446 (2019).
19. Graeber, G. et al. Extreme water uptake of hygroscopic hydrogels through maximized swelling-induced salt loading. Adv. Mater. 36, e2211783 (2023).
20. Guo, Y. et al. Scalable super hygroscopic polymer films for sustainable moisture harvesting in arid environments. Nat. Commun. 13, 2761 (2022).
21. Xu, J. et al. Efficient solar-driven water harvesting from arid air with metal–organic frameworks modified by hygroscopic salt. Angew. Chem. Int. Ed. 59, 5202–5210 (2020).
22. Garzón-Tovar, L., Pérez-Carvajal, J., Imaz, I. & Maspoch, D. Composite salt in porous metal-organic frameworks for adsorption heat transformation. Adv. Funct. Mater. 27, 1606424 (2017).
23. Yang, K. et al. Hollow spherical SiO2 micro-container encapsulation of LiCl for high-performance simultaneous heat reallocation and seawater desalination. J. Mater. Chem. A 8, 1887–1895 (2020).
24. Li, R., Shi, Y., Wu, M., Hong, S. & Wang, P. Improving atmospheric water production yield: enabling multiple water harvesting cycles with nano sorbent. Nano Energy 67, 104255 (2020).
25. Shan, H. et al. High-yield solar-driven atmospheric water harvesting with ultra-high salt content composites encapsulated in porous membrane. Cell Rep. Phys. Sci. 2, 100664 (2021).
26. Wang, Y. et al. Heterogeneous wettability and radiative cooling for efficient deliquescent sorbents-based atmospheric water harvesting. Cell Rep. Phys. Sci. 3, 100879 (2022).
27. Deng, F., Xiang, C., Wang, C. & Wang, R. Sorption-tree with scalable hygroscopic adsorbent-leaves for water harvesting. J. Mater. Chem. A 10, 6576–6586 (2022).
28. Deng, F., Wang, C., Xiang, C. & Wang, R. Bioinspired topological design of super hygroscopic complex for cost-effective atmospheric water harvesting. Nano Energy 90, 106642 (2021).
29. Aleid, S. et al. Salting-in effect of zwitterionic polymer hydrogel facilitates atmospheric water harvesting. ACS Mater. Lett. 4, 511–520 (2022).
30. Lei, C. et al. Polyzwitterionic hydrogels for efficient atmospheric water harvesting. Angew. Chem. Int. Ed. 61, e202200271 (2022).
31. Shan, H. et al. All-day multicyclic atmospheric water harvesting enabled by polyelectrolyte hydrogel with hybrid desorption mode. Adv. Mater. 35, e2302038 (2023).
32. Guan, W., Lei, C., Guo, Y., Shi, W. & Yu, G. Hygroscopic-microgels-enabled rapid water extraction from arid air. Adv. Mater. e2207786 (2022).
33. Wang, J., Hua, L., Li, C. & Wang, R. Atmospheric water harvesting: critical metrics and challenges. Energy Environ. Sci. 15, 4867–4871 (2022).
34. Díaz-Marín, C. D. et al. Kinetics of sorption in hygroscopic hydrogels. Nano Lett. 22, 1100–1107 (2022).
35. Almassad, H. A., Abaza, R. I., Siwwan, L., Al-Maythalony, B. & Cordova, K. E. Environmentally adaptive MOF-based device enables continuous self-optimizing atmospheric water harvesting. Nat. Commun. 13, 4873 (2022).
36. Hanikel, N. et al. Rapid cycling and exceptional yield in a metal-organic framework water harvester. ACS Cent. Sci. 5, 1699–1706 (2019).
37. Min, X. et al. High-yield atmospheric water harvesting device with integrated heating/cooling enabled by thermally tailored hydrogel sorbent. ACS Energy Lett. 8, 3147–3153 (2023).
38. Wang, W. et al. Air-cooled adsorption-based device for harvesting water from island air. Renew. Sustain. Energy Rev. 141, 110802 (2021).
39. Wang, W. et al. Viability of a practical multicyclic sorption-based water harvester with improved water yield. Water Res. 211, 118029 (2022).
40. Nayak, P. K., Mahesh, S., Snaith, H. J. & Cahen, D. Photovoltaic solar cell technologies: analysing the state of the art. Nat. Rev. Mater. 4, 269–285 (2019).
41. Poredoš, P. & Wang, R. Sustainable cooling with water generation. Science 380, 458–459 (2023).
42. Poredoš, P., Shan, H., Wang, C., Deng, F. & Wang, R. Sustainable water generation: grand challenges in continuous atmospheric water harvesting. Energy Environ. Sci. 15, 3223–3235 (2022).
43. Li, R. & Wang, P. Sorbents, processes and applications beyond water production in sorption-based atmospheric water harvesting. Nat. Water 1, 573–586 (2023).
44. Xia, M. et al. Biomimetic hygroscopic fibrous membrane with hierarchically porous structure for rapid atmospheric water harvesting. Adv. Funct. Mater. 33, 2214813 (2023).
45. Li, T. et al. Scalable and efficient solar-driven atmospheric water harvesting enabled by bidirectionally aligned and hierarchically structured nanocomposites. Nat. Water 1, 971–981 (2023).
46. Xiao, J. et al. S/O-functionalities on modified carbon materials governing adsorption of water vapor. J. Phys. Chem. C 117, 23057–23065 (2013).
47. Guler, U., Shalaev, V. M. & Boltasseva, A. Nanoparticle plasmonics: going practical with transition metal nitrides. Mater. Today 18, 227–237 (2015).
48. Li, R. et al. Hybrid hydrogel with high water vapor harvesting capacity for deployable solar-driven atmospheric water generator. Environ. Sci. Technol. 52, 11367–11377 (2018).
49. Hou, Y., Sheng, Z., Fu, C., Kong, J. & Zhang, X. Hygroscopic holey graphene aerogel fibers enable highly efficient moisture capture, heat allocation and microwave absorption. Nat. Commun. 13, 1227 (2022).
50. Niknahad, M., Moradian, S. & Mirabedini, S. M. The adhesion properties and corrosion performance of differently pretreated epoxy coatings on an aluminium alloy. Corros. Sci. 52, 1948–1957 (2010).
51. Ye, X. et al. The interface designing and reinforced features of wood fiber/polypropylene composites: wood fiber adopting nano-zinc-oxide-coating via ion assembly. Compos. Sci. Technol. 124, 1–9 (2016).
52. Song, W., Zheng, Z., Alawadhi, A. H. & Yaghi, O. M. MOF water harvester produces water from Death Valley desert air in ambient sunlight. Nat. Water 1, 626–634 (2023).
53. LaPotin, A. et al. Dual-stage atmospheric water harvesting device for scalable solar-driven water production. Joule 5, 166–182 (2021).
54. Li, T. et al. Simultaneous atmospheric water production and 24-hour power generation enabled by moisture-induced energy harvesting. Nat. Commun. 13, 6771 (2022).

Acknowledgements

The authors acknowledge the financial support from the National Natural Science Foundation of China (Grant No. 52106101, R.W.), the Fundamental Research Funds for the Central Universities [Shanghai Jiao Tong University (No. 23X010201008, R.W.)], and the China Postdoctoral Science Foundation (Grant Nos. 2022T150402 and 2021M702100, C.X.).

Author information

Authors and affiliations

Institute of Refrigeration and Cryogenics, MOE Engineering Research Center of Solar Power and Refrigeration, Shanghai Jiao Tong University, 200240, Shanghai, China

Xinge Yang, Zhihui Chen, Chengjie Xiang, He Shan & Ruzhu Wang

Contributions

X.Y.: Conceptualization, investigation, methodology, validation, data curation, formal analysis, software, visualization, writing - original draft. Z.C.: Investigation, validation, visualization, writing - review & editing. C.X.: Methodology, formal analysis, writing - review & editing, funding acquisition. H.S.: Investigation, software, writing - review & editing. R.W.: Conceptualization, methodology, supervision, writing - review & editing, funding acquisition.

Corresponding authors

Correspondence to Chengjie Xiang or Ruzhu Wang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Communications thanks Lingxiao Li and the other, anonymous, reviewers for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

Yang, X., Chen, Z., Xiang, C. et al. Enhanced continuous atmospheric water harvesting with scalable hygroscopic gel driven by natural sunlight and wind. Nat. Commun. 15, 7678 (2024). https://doi.org/10.1038/s41467-024-52137-4

Received: 20 March 2024

Accepted: 28 August 2024

Published: 03 September 2024

DOI: https://doi.org/10.1038/s41467-024-52137-4
