8.2 Quasi-experimental and pre-experimental designs
Learning objectives.
- Distinguish true experimental designs from quasi-experimental and pre-experimental designs
- Identify and describe the various types of quasi-experimental and pre-experimental designs
As we discussed in the previous section, time, funding, and ethics may limit a researcher’s ability to conduct a true experiment. For researchers in the medical sciences and social work, conducting a true experiment could require denying needed treatment to clients, which is a clear ethical violation. Even those whose research may not involve the administration of needed medications or treatments may be limited in their ability to conduct a classic experiment. When true experiments are not possible, researchers often use quasi-experimental designs.
Quasi-experimental designs
Quasi-experimental designs are similar to true experiments, but they lack random assignment to experimental and control groups. Quasi-experimental designs have a comparison group that is similar to a control group, except that assignment to the comparison group is not determined by random assignment. The most basic of these quasi-experimental designs is the nonequivalent comparison groups design (Rubin & Babbie, 2017). The nonequivalent comparison group design looks a lot like the classic experimental design, except it does not use random assignment. In many cases, these groups may already exist. For example, a researcher might conduct research at two different agency sites, one of which receives the intervention and the other does not. No one was assigned to treatment or comparison groups; those groupings existed prior to the study. While this method is more convenient for real-world research, the groups are less likely to be comparable than if they had been determined by random assignment. Perhaps the treatment group has a unique characteristic (for example, higher income or a different mix of diagnoses) that makes the treatment appear more effective.
Quasi-experiments are particularly useful in social welfare policy research. Social welfare policy researchers often look for what are termed natural experiments, or situations in which comparable groups are created by differences that already occur in the real world. Natural experiments are a feature of the social world that allows researchers to use the logic of experimental design to investigate the connection between variables. For example, Stratmann and Wille (2016) were interested in the effects of a state healthcare policy called Certificate of Need on the quality of hospitals. They clearly could not randomly assign states to adopt one set of policies or another. Instead, the researchers used hospital referral regions, or the areas from which hospitals draw their patients, that spanned state lines. Because the hospitals were in the same referral region, the researchers could be reasonably confident that patient characteristics were similar. In this way, they could classify patients into experimental and comparison groups without dictating state policy or telling people where to live.
Matching is another approach in quasi-experimental design for assigning people to experimental and comparison groups. It begins with researchers thinking about what variables are important in their study, particularly demographic variables or attributes that might impact their dependent variable. Individual matching involves pairing participants with similar attributes. Then, the matched pair is split, with one participant going to the experimental group and the other to the comparison group. An ex post facto control group, in contrast, is created when a researcher matches individuals after the intervention is administered to some participants. Finally, researchers may engage in aggregate matching, in which the comparison group is chosen to be similar to the experimental group on important variables in the aggregate, rather than by pairing individuals one by one.
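The two matching strategies can be sketched in code. This is a minimal illustration, not drawn from the chapter: the participant attributes, the single matching variable, and the sort-and-pair rule are all assumptions, and real studies would typically match on several variables at once.

```python
# Hypothetical participants with one attribute (age) thought to affect
# the dependent variable. IDs and values are invented.
participants = [
    {"id": 1, "age": 22}, {"id": 2, "age": 51},
    {"id": 3, "age": 35}, {"id": 4, "age": 23},
    {"id": 5, "age": 50}, {"id": 6, "age": 36},
]

def individual_matching(people, key):
    """Sort by the matching attribute, pair adjacent (most similar)
    participants, and send one member of each pair to each group."""
    ordered = sorted(people, key=lambda p: p[key])
    experimental, comparison = [], []
    for a, b in zip(ordered[0::2], ordered[1::2]):
        experimental.append(a)
        comparison.append(b)
    return experimental, comparison

def mean(values):
    values = list(values)
    return sum(values) / len(values)

exp_group, comp_group = individual_matching(participants, "age")

# Aggregate matching, by contrast, only checks that the groups are
# similar on average rather than pairing specific individuals.
print(mean(p["age"] for p in exp_group))
print(mean(p["age"] for p in comp_group))
```

With this pairing rule, the two group means come out close, which is the goal of matching: groups that are comparable on the variable even though no random assignment occurred.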
Time series design
There are many different quasi-experimental designs in addition to the nonequivalent comparison group design described earlier. Describing all of them is beyond the scope of this textbook, but one more design is worth mentioning. The time series design uses multiple observations before and after an intervention. In some cases, experimental and comparison groups are used. In other cases where that is not feasible, a single experimental group is used. By using multiple observations before and after the intervention, the researcher can better understand the true value of the dependent variable in each participant before the intervention starts. Additionally, multiple observations afterwards allow the researcher to see whether the intervention had lasting effects on participants. Time series designs are similar to single-subjects designs, which we will discuss in Chapter 15.
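The logic of a time series design can be sketched with a few numbers. The observation values below are invented for illustration; the point is that multiple pre-intervention observations give a stable baseline, and multiple post-intervention observations show whether a change persisted.

```python
# Invented weekly stress scores for a single group, observed several
# times before and after an intervention (an interrupted time series).
pre_observations = [20, 21, 19, 20, 22]
post_observations = [15, 14, 16, 15, 14]

def mean(xs):
    return sum(xs) / len(xs)

baseline = mean(pre_observations)    # stable pre-intervention level
follow_up = mean(post_observations)  # did any change persist?
change = follow_up - baseline
print(f"baseline={baseline:.1f} follow-up={follow_up:.1f} change={change:+.1f}")
```

Averaging several observations on each side, rather than relying on a single pretest and posttest, is what protects the researcher from mistaking one unusually good or bad week for the true level of the dependent variable.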
Pre-experimental design
When true experiments and quasi-experiments are not possible, researchers may turn to a pre-experimental design (Campbell & Stanley, 1963). Pre-experimental designs are called such because they often happen as a precursor to conducting a true experiment. Researchers want to see if their interventions will have some effect on a small group of people before they seek funding and dedicate time to conduct a true experiment. Pre-experimental designs, thus, are usually conducted as a first step towards establishing the evidence for or against an intervention. However, this type of design comes with some unique disadvantages, which we’ll describe below.
A commonly used type of pre-experiment is the one-group pretest-posttest design. In this design, pre- and posttests are both administered, but there is no comparison group against which to compare the experimental group. Researchers may be able to claim that participants receiving the treatment experienced a change in the dependent variable, but they cannot claim that the change was the result of the treatment without a comparison group. Imagine if the students in your research class completed a questionnaire about their level of stress at the beginning of the semester. Then your professor taught you mindfulness techniques throughout the semester. At the end of the semester, she administers the stress survey again. What if levels of stress went up? Could she conclude that the mindfulness techniques caused the stress? Not without a comparison group! If there were a comparison group, she would be able to recognize that all students, not just those in her research class, experienced higher stress at the end of the semester than at the beginning.
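The professor's stress example reduces to a simple before-after calculation. The scores below are invented; the sketch shows that the design yields only a change score, with nothing to attribute that change to.

```python
# Invented stress scores (0-40 scale) for one class, measured at the
# start and end of the semester. Same students, paired observations.
pretest = [12, 15, 10, 18, 14]
posttest = [20, 22, 17, 25, 21]

changes = [post - pre for pre, post in zip(pretest, posttest)]
mean_change = sum(changes) / len(changes)
print(f"mean change in stress: {mean_change:+.1f}")

# Stress went up, but with no comparison group we cannot say whether the
# mindfulness training caused it or every student simply grew more
# stressed as finals approached.
```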
In cases where the administration of a pretest is cost prohibitive or otherwise not possible, a one-shot case study design might be used. In this instance, no pretest is administered, nor is a comparison group present. If we wished to measure the impact of a natural disaster, such as Hurricane Katrina, we might conduct a pre-experiment by identifying a community that was hit by the hurricane and then measuring the levels of stress in the community. Researchers using this design must be extremely cautious about making claims regarding the effect of the treatment or stimulus. They have no idea what the levels of stress in the community were before the hurricane hit, nor can they compare the stress levels to those of a community that was not affected by the hurricane. Nonetheless, this design can be useful for exploratory studies aimed at testing a measure or the feasibility of further study.
In our example of the study of the impact of Hurricane Katrina, a researcher might choose to examine the effects of the hurricane by identifying a group from a community that experienced the hurricane and a comparison group from a similar community that had not been hit by the hurricane. This study design, called a static group comparison, has the advantage of including a comparison group that did not experience the stimulus (in this case, the hurricane). Unfortunately, the design uses only post-tests, so it is not possible to know whether the groups were comparable before the stimulus or intervention. As you might have guessed from our example, static group comparisons are useful in cases where a researcher cannot control or predict whether, when, or how the stimulus is administered, as in the case of natural disasters.
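The static group comparison likewise reduces to comparing post-test scores across the two communities. The numbers below are invented; note that with no pretest, baseline equivalence cannot be checked.

```python
# Invented post-hurricane stress scores for a community that was hit by
# the hurricane and a comparison community that was not.
hit_community = [30, 28, 33, 31, 29]
comparison_community = [18, 20, 17, 21, 19]

def mean(xs):
    return sum(xs) / len(xs)

difference = mean(hit_community) - mean(comparison_community)
print(f"post-test difference in mean stress: {difference:.1f}")

# Post-tests only: we cannot rule out that the two communities already
# differed in stress before the hurricane ever made landfall.
```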
As implied by the preceding examples where we considered studying the impact of Hurricane Katrina, experiments, quasi-experiments, and pre-experiments do not necessarily need to take place in the controlled setting of a lab. In fact, many applied researchers rely on experiments to assess the impact and effectiveness of various programs and policies. You might recall our discussion of arresting perpetrators of domestic violence in Chapter 2, which is an excellent example of an applied experiment. Researchers did not subject participants to conditions in a lab setting; instead, they applied their stimulus (in this case, arrest) to some subjects in the field and they also had a control group in the field that did not receive the stimulus (and therefore were not arrested).
Key Takeaways
- Quasi-experimental designs do not use random assignment.
- Comparison groups are used in quasi-experiments.
- Matching is a way of improving the comparability of experimental and comparison groups.
- Quasi-experimental designs and pre-experimental designs are often used when experimental designs are impractical.
- Quasi-experimental and pre-experimental designs may be easier to carry out, but they lack the rigor of true experiments.
- Aggregate matching – when the comparison group is determined to be similar to the experimental group along important variables
- Comparison group – a group in quasi-experimental design that does not receive the experimental treatment; it is similar to a control group except assignment to the comparison group is not determined by random assignment
- Ex post facto control group – a control group created when a researcher matches individuals after the intervention is administered
- Individual matching – pairing participants with similar attributes for the purpose of assignment to groups
- Natural experiments – situations in which comparable groups are created by differences that already occur in the real world
- Nonequivalent comparison group design – a quasi-experimental design similar to a classic experimental design but without random assignment
- One-group pretest-posttest design – a pre-experimental design that applies an intervention to one group and includes both a pretest and a posttest
- One-shot case study – a pre-experimental design that applies an intervention to only one group without a pretest
- Pre-experimental designs – a variation of experimental design that lacks the rigor of experiments and is often used before a true experiment is conducted
- Quasi-experimental design – a design that lacks random assignment to experimental and control groups
- Static group design – a pre-experimental design that uses an experimental group and a comparison group, without random assignment or pretesting
- Time series design – a quasi-experimental design that uses multiple observations before and after an intervention
Image attributions
cat and kitten matching avocado costumes on the couch looking at the camera by Your Best Digs CC-BY-2.0
Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Child Care and Early Education Research Connections
Pre-Experimental Designs
Pre-experiments are the simplest form of research design. In a pre-experiment either a single group or multiple groups are observed subsequent to some agent or treatment presumed to cause change.
Types of Pre-Experimental Design
One-shot case study design
A single group is studied at a single point in time after some treatment that is presumed to have caused change. The carefully studied single instance is compared to general expectations of what the case would have looked like had the treatment not occurred and to other events casually observed. No control or comparison group is employed.
One-group pretest-posttest design
A single case is observed at two time points, one before the treatment and one after the treatment. Changes in the outcome of interest are presumed to be the result of the intervention or treatment. No control or comparison group is employed.
Static-group comparison
A group that has experienced some treatment is compared with one that has not. Observed differences between the two groups are assumed to be a result of the treatment.
Validity of Results
An important drawback of pre-experimental designs is that they are subject to numerous threats to their validity. Consequently, it is often difficult or impossible to dismiss rival hypotheses or explanations. Therefore, researchers must exercise extreme caution in interpreting and generalizing the results from pre-experimental studies.
One reason that it is often difficult to assess the validity of studies that employ a pre-experimental design is that they often do not include any control or comparison group. Without something to compare it to, it is difficult to assess the significance of an observed change in the case. The change could be the result of historical changes unrelated to the treatment, the maturation of the subject, or an artifact of the testing.
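This threat can be made concrete with a small simulation. Assume, purely hypothetically, that every subject's score drifts upward by about three points over the study period regardless of any treatment (a maturation effect). A one-group pretest-posttest study would still record a "change" and could wrongly credit the treatment.

```python
import random

random.seed(42)  # deterministic for illustration

def simulate_pre_post(n, maturation=3.0):
    """Scores drift upward by ~`maturation` points for everyone,
    treated or not; the 'treatment' itself does nothing."""
    pre = [random.gauss(50, 5) for _ in range(n)]
    post = [score + maturation + random.gauss(0, 1) for score in pre]
    return pre, post

pre, post = simulate_pre_post(100)
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"apparent 'treatment effect': {mean_change:.2f}")

# A comparison group drawn from the same population would show the same
# drift, exposing maturation as the rival explanation.
```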
Even when pre-experimental designs identify a comparison group, it is still difficult to dismiss rival hypotheses for the observed change. This is because there is no formal way to determine whether the two groups would have been the same if it had not been for the treatment. If the treatment group and the comparison group differ after the treatment, this might be a reflection of differences in the initial recruitment to the groups or differential mortality in the experiment.
Advantages and Disadvantages
Advantages
As exploratory approaches, pre-experiments can be a cost-effective way to discern whether a potential explanation is worthy of further investigation.
Disadvantages
The nearly insurmountable threats to their validity are clearly the most important disadvantage of pre-experimental research designs: it is often difficult or impossible to rule out alternative explanations for an observed change.
Pre-Experimental Design
Pre-experimental design refers to the simplest form of research design, often used in fields such as psychology, sociology, education, and the other social sciences. These designs are called “pre-experimental” because they precede true experimental designs in terms of complexity and rigor.
In pre-experimental designs, researchers observe or measure subjects without manipulating variables or controlling conditions. Often, these designs lack certain elements of a true experiment, such as random assignment, control groups, or pretest measurements, making it difficult to determine causality.
Three common types of pre-experimental designs include the one-shot case study, the one-group pretest-posttest design, and the static-group comparison. These designs offer a starting point for researchers but are typically seen as less reliable than more controlled experimental designs due to the lack of randomization and the potential for confounding variables.
Characteristics of Pre-Experimental Design
Pre-experimental designs are characterized by their simplicity and ease of execution. They are typically used when resources are limited, or when the research question does not require a high degree of control or precision. Key characteristics of these designs include the use of a single group, the lack of a control group, and the absence of random assignment.
Single Group
In a pre-experimental design, there is typically only one group of subjects, and this group is measured or observed both before and after an intervention or treatment.
Lack of Control Group
Pre-experimental designs often lack a control group for comparison. As a result, it’s difficult to determine whether observed changes are the result of the intervention or due to extraneous factors.
Absence of Random Assignment
Another characteristic of pre-experimental design is the absence of random assignment. Subjects are not randomly assigned to groups, which can lead to selection bias and limits the generalizability of the findings.
Types of Pre-Experimental Design
There are several types of pre-experimental designs, including the one-shot case study, the one-group pretest-posttest design, and the static-group comparison.
One-Shot Case Study
In a one-shot case study, a single group or case is studied at a single point in time after some intervention or treatment that is presumed to cause change.
One-Group Pretest-Posttest Design
In the one-group pretest-posttest design, a single group is observed at two time points, one before the treatment and one after the treatment.
Static-Group Comparison
In a static-group comparison, there are two groups that are not created through random assignment. One group receives the treatment and the other does not, and the outcomes are compared.
Limitations
While pre-experimental designs offer advantages in terms of simplicity and convenience, they also come with notable limitations. The lack of a control group and the absence of random assignment limits the ability to establish causality. There is also a risk of selection bias, and the findings may not be generalizable to other populations or settings.
Despite these limitations, pre-experimental designs can serve as valuable starting points in exploratory research, laying the groundwork for more rigorous experimental designs in the future.
In conclusion, pre-experimental design, while limited in its ability to provide strong evidence of causality, plays a crucial role in exploratory research. It presents a simplified and cost-effective approach to experimentation that is especially useful when resources are limited or when the goal is to explore a new area of study. However, the inherent limitations of pre-experimental designs necessitate caution in interpreting their results. Consequently, they are often used as stepping stones towards more rigorous research designs. As such, understanding pre-experimental designs is a fundamental part of the researcher’s toolkit, paving the way for more comprehensive and controlled investigations.
Pre-experimental Design: Definition, Types & Examples
- October 1, 2021
Experimental research is conducted to analyze and understand the effect of a program or a treatment. There are three types of experimental research designs: pre-experimental designs, true experimental designs, and quasi-experimental designs.
In this blog, we will be talking about pre-experimental designs. Let’s first explain pre-experimental research.
What is Pre-experimental Research?
As the name suggests, pre-experimental research happens before the true experiment starts. It is done to gauge the effect of the researchers’ intervention on a group of people, which helps them decide whether the investment of cost and time in a true experiment is worthwhile. Pre-experimental research is thus a preliminary step that justifies the researcher’s intervention.
The pre-experimental approach provides preliminary evidence that the experiment can become a successful full-scale study.
What is Pre-experimental Design?
A pre-experimental design includes one or more experimental groups that are observed after a treatment is applied. It is the simplest form of research design and follows the basic steps of an experiment.
The pre-experimental design does not have a comparison group. This means that while a researcher can claim that participants who received certain treatment have experienced a change, they cannot conclude that the change was caused by the treatment itself.
The research design can still be useful for exploratory research to test the feasibility for further study.
How does pre-experimental design differ from true and quasi-experiments? Unlike a true experiment, it involves no random assignment, and unlike a quasi-experiment, it usually lacks even a nonrandomized comparison group. In other words, a pre-experiment tests a treatment to check whether it has the potential to cause a change at all. For the same reason, it is advisable to perform a pre-experiment to gauge the potential of a true experiment.
Types of Pre-experimental Designs
Now that you have a better understanding of pre-experimental design, let’s look at its types and how each works:
One-shot case study design
- This design practices the treatment of a single group.
- It only takes a single measurement after the experiment.
- A one-shot case study design only analyses post-test results.
The one-shot case study compares the post-test results to general expectations of what the case would have looked like had the treatment not been administered.
Example: A team leader wants to implement a new soft skills program in the firm. The employees are measured at the end of the first month to gauge the improvement in their soft skills, which tells the team leader the impact of the program.
One-group pretest-posttest design
- Like the previous one, this design also works on just one experimental group.
- But this one takes two measures into account.
- A pre-test and a post-test are conducted.
As the name suggests, this design includes one group and conducts a pre-test and a post-test on it. The pre-test shows how the group was before being put under treatment, whereas the post-test determines the changes in the group after the treatment.
This sounds like a true experiment, but being a pre-experimental design, it does not have any control group.
Example: Following the previous example, the team leader here conducts two tests: one before the soft skills program to establish the employees’ level before the training, and a post-test to measure their status after it. Now that he has a frame of reference, he knows exactly how the program helped the employees.
Static-group comparison
- This compares two experimental groups.
- One group is exposed to the treatment.
- The other group is not exposed to the treatment.
- The difference between the two groups is the result of the experiment.
As the name suggests, it has two groups, which means it involves a comparison (control) group as well.
In static-group comparison design, the two groups are observed as one goes through the treatment while the other does not. They are then compared to each other to determine the outcome of the treatment.
Example: The team lead assigns one group of employees to receive the soft skills training while the other group serves as a control group and is not exposed to the program. He then compares both groups and finds that the treatment group has improved its soft skills more than the control group.
Because it includes a comparison group, the static-group comparison design is sometimes also classified as a quasi-experimental design.
Characteristics of Pre-experimental Designs
In this section, let us point down the characteristics of pre-experimental design:
- Generally uses only one group for treatment, which makes observation simple and easy.
- Validates the experiment in the preliminary phase itself.
- Tells researchers how their intervention will affect the whole study.
- Because they are conducted at the beginning, pre-experimental designs provide early evidence for or against an intervention.
- Does not involve randomization of participants.
- Generally does not involve a control group, except in the static-group comparison, where a control group is studied against the treatment group.
- Gives an idea of how the treatment is likely to work in an actual true experiment.
Validity of results in Pre-experimental Designs
Validity is the degree to which data or results reflect reality, and in the case of pre-experimental research design it is a tough catch. Testing a hypothesis or ruling out rival explanations with these designs is very difficult, sometimes close to impossible. As a result, researchers find it challenging to generalize results from a pre-experimental design to the actual experiment.
Because a pre-experimental design generally has no comparison group to set its results against, researchers have good reason to doubt those results. Without a comparison, it is hard to tell how significant or valid a result is: the observed change could stem from unrelated events during the treatment, maturation of the group, or sheer chance.
Even if all of the above works in favor of your experiment, and you have a control group to compare with, one problem remains: the kind of groups you get for the true experiment. The subjects in your pre-experimental design may differ substantially from the subjects in the true experiment, in which case your results can change even if the treatment stays constant.
Advantages of Pre-experimental Designs
- Cost-effective because of its simple process.
- Very simple to conduct.
- Efficient to conduct in a natural environment.
- Suitable for beginners.
- Involves little human intervention.
- Indicates how the treatment is likely to perform in a true experiment.
Disadvantages of Pre-experimental Designs
- A weak design for determining causal relationships between variables.
- Offers little control over the research.
- Poses a high threat to internal validity.
- Makes it hard for researchers to verify the integrity of the results.
- The absence of a control group makes the results less reliable.
This sums up the basics of pre-experimental design and how it differs from other experimental research designs.
Pre-experimental design is a research method that happens before the true experiment and determines how the researcher’s intervention will affect the experiment.
An example of a pre-experimental design would be a gym trainer implementing a new training schedule for a trainee.
Characteristics of pre-experimental design include its ability to determine the significance of treatment even before the true experiment is performed.
Researchers want to know how their intervention is going to affect the experiment. So even before the true experiment starts, they carry out a pre-experimental research design to determine the possible results of the true experiment.
A pre-experimental design examines the treatment’s effect and is carried out before the true experiment takes place. While a true experiment is the actual experiment, it is often worth conducting its pre-experiment first to see how the intervention is likely to affect the experiment.
A true experimental design carries out the pre-test and post-test on both the treatment group and a control group, whereas in a pre-experimental design the control group and pre-test are optional. It does not always include those two elements, and it helps the researcher anticipate how the real experiment will go.
The main difference between a pre-experimental design and a quasi-experimental design is that a pre-experimental design does not use comparison groups while a quasi-experimental design does. Quasi-experimental designs usually use a pre-test post-test model of result comparison, while pre-experimental designs mostly do not.
Non-experimental research methods fall into three major categories: cross-sectional research, correlational research, and observational research.
Module 3 Chapter 2: Quantitative Design Strategies
Previously, you reviewed different approaches to intervention and evaluation research and learned about a level of evidence (evidence hierarchy) model. These approaches and frameworks relate to how studies are designed to answer different types of research questions. The design strategies differ in the degree to which they address internal validity concerns—the ability to conclude that changes or differences can be attributed to an intervention rather than to other causes. Designs for quantitative intervention research are the focus of this chapter.
In this chapter, you learn about:
- pre-experimental, quasi-experimental, and experimental designs in relation to strength of evidence and internal validity concerns;
- how quantitative and single-system study designs are diagrammed;
- examples where different designs were used in social work intervention or evaluation research.
Addressing Internal Validity Concerns via Study Design Strategies
The study designs we examine in this chapter differ in terms of their capacity to address specific types of internal validity concerns. As a reminder of what you learned in our previous course, improving internal validity is about increasing investigator confidence that the outcomes observed in an experimental study are as believed, and are due to the experimental variables being studied. In the case of intervention research, the experimental variable being studied is the intervention.
Three general internal validity challenges are important to consider addressing with intervention research.
- Were there actually changes that occurred with the intervention (comparing participants’ status pre-intervention to their post-intervention status)?
- Did observed changes persist over time (comparing participants’ post-intervention status to their status at some later post-intervention follow-up time)?
- Is the observed change most likely due to the intervention itself?
Let’s consider how these three questions might relate to the kind of study design choices that investigators make.
Types of Intervention Research Designs
The intervention study design options can be loosely categorized into three types: pre-experimental, quasi-experimental, and experimental. Pre-experimental designs do not include comparison (control) groups, only the one group of interest. Quasi-experimental designs do include comparison (control) groups, but individuals are not randomly assigned to these groups—the groups are created naturally. Experimental designs include random assignment of study participants to the groups being compared in the study. These three types of designs differ in terms of their attempts to control as many alternative explanation factors as possible—the extent to which they address internal validity concerns.
Introducing short-hand for intervention study design diagrams.
It can become quite cumbersome to describe all the elements of an intervention study’s design in words and complete sentences; sometimes it is easier to use a diagram instead. This type of short-hand quickly communicates important information about a study design. It can become somewhat confusing, like understanding how the diagram of a football play drawn on a chalkboard translates into reality.
The first thing to know about this intervention study design short-hand is the symbols used and what each symbol means.
- X is used to designate that an intervention is being administered.
- [X] designates that an “intervention” or event naturally occurred, rather than one imposed by the investigator (for example, a change in policy, a natural disaster, or trauma-inducing crisis event).
- O is used to designate that an observation is being made (data are collected).
- Subscript numbers are used to designate which intervention or observation is relevant at that point. For example, X1 might refer to the new intervention that is being tested and X2 might refer to the second condition where the usual intervention is delivered. And, O1 might refer to the first observation period (maybe before the intervention), O2 to the second observation period (maybe after the intervention), and O3 to a third observation period (maybe after a longer-term follow-up period).
- R is used to designate that individual elements were randomly assigned to the different intervention conditions. (Important reminder: This random assignment is not about random selection of a sample to represent a population; it is about using a randomization strategy for placing participants into the different groups in the experimental design).
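To make the short-hand concrete, each diagram can be treated as rows of symbols read left to right through time. The following sketch (in Python; the design names and helper functions are invented for illustration and are not part of the chapter's notation) shows how a few designs discussed later in this chapter would be written:

```python
# A sketch of the X/O diagram short-hand as simple strings.
# Each row is one group, read left to right through time.
designs = {
    "post_only":         ["X O"],
    "pre_post":          ["O1 X O2"],
    "pre_post_followup": ["O1 X O2 O3"],
    # "R" marks random assignment of participants to the two rows.
    "rct_pre_post":      ["R O1 X O2",
                          "R O1   O2"],
}

def has_comparison_group(design):
    """A design has a comparison group if it diagrams more than one row."""
    return len(design) > 1

def is_longitudinal(row):
    """Longitudinal designs observe the same group at more than one point."""
    return sum(token.startswith("O") for token in row.split()) > 1
```

Reading the diagrams this way makes the two key design questions mechanical: does the diagram have more than one row (a comparison group), and does any row contain more than one O (longitudinal measurement)?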
Costs & Benefits of Various Design Strategies.
Before we get into discussing the specific strategies that might be adopted for intervention research, it is important to understand that every design has its advantages and its disadvantages. There is no such thing as a single, perfect design to which all intervention research should adhere. Investigators are faced with a set of choices that must be carefully weighed. As we explore their available options, one feature that will become apparent is that some designs are more “costly” to implement than others. This “cost” term is being used broadly here: it is not simply a matter of dollars, though that is an important, practical consideration.
- First, a design may “cost” more in terms of more data collection points. There are significant costs associated with each time investigators have to collect data from participants—time, space, effort, materials, reimbursement or incentive payments, data entry, and more. For this reason, longitudinal studies often are more costly than cross-sectional studies.
- Second, “costs” increase with higher numbers of study participants. Greater study numbers “cost” dollars (e.g., advertising expenses, reimbursement or incentive payments, cost of materials used in the study, data entry), but also “cost” more in terms of a greater commitment in time and effort from study staff members, and in terms of greater numbers of persons being exposed to the potential risks associated with the intervention being studied.
- Third, some longitudinal study designs “cost” more in terms of the potential for higher rates of participant drop-out from the study over time. Each person who quits a study before it is completed increases the amount of wasted resources, since their data are incomplete (and possibly unusable), and that person may need to be replaced, duplicating recruitment and data collection costs.
Ten Typical Evaluation/Intervention Study Designs
This section presents 10 general study designs that typically appear in intervention and evaluation research. These examples are presented in a general order of increasing ability to address internal validity concerns, but this is offset by increasing costs in resources, participant numbers, and participant burden to implement. Many variations on these general designs appear in the literature; these are 10 general strategies.
#1: Case study.
Original, novel, or new interventions are sometimes delivered under unusual or unique circumstances. In these instances, when very little is known about intervening under those circumstances, knowledge is extended by sharing a case study. Eventually, when several case studies can be reviewed together, a clearer picture might emerge about intervening around that condition. At that point, theories and interventions can more systematically be tested. Case studies are considered pre-experimental designs.
An example happened with an adolescent named Jeanna Giese at Milwaukee’s Children’s Hospital of Wisconsin. In 2004, Jeanna was bitten by a bat and three weeks later was diagnosed with full-blown rabies, when it was too late to administer a vaccine. At the time, no treatments were known to be successful for rabies once it develops; the rabies vaccine only works before the disease symptoms develop. Until this case, full-blown rabies was reported to be 100% fatal. The hospital staff implemented an innovative, theory- and evidence-informed treatment plan which became known as the “Milwaukee Protocol.” The case study design can be diagrammed the following way, where X represents the “Milwaukee Protocol” intervention and O represents the observed outcomes of the treatment delivered in this case.
Jeanna Giese became the first person in the world known to survive full-blown rabies. The intervention team published the case study, and a handful of individuals around the world have been successfully treated with this protocol—rabies continues to be a highly fatal, global concern. The Milwaukee Protocol is considered somewhat controversial, largely because so few others have survived full-blown rabies even with this intervention being administered; some authors argue that this single case was successful because of unique characteristics of the patient, not because of the intervention protocol’s characteristics (Jackson, 2013).
This argument reflects a major drawback of case studies, which are pre-experimental designs: the unique sample/small sample size means that individual characteristics or differences can have a powerful influence on the outcomes observed. Thus, the internal validity problem is that the outcomes might be explained by some factors other than the intervention being studied. In addition, with a very small sample size the study results cannot be generalized to the larger population of individuals experiencing the targeted problem—this is an external validity argument. (We call this an “N of 1” study, where the symbol N refers to the sample size.) The important message here is that case studies are the beginning of a knowledge-building trajectory, not the end; they inform future research and, possibly, inform practice under circumstances where uncertainty is high with very new problems or solutions. And, just in case you are curious: although some permanent neurological consequences remained, Jeanna Giese completed a college education, was married in 2014, and in 2016 became the mother of twins.
#2: Post-intervention only.
Looking a great deal like the case study design is a simple pre-experimental design where the number of individuals providing data after an intervention is greater than the single or very small number in the case study design. For example, a social work continuing education training session (the intervention) might collect data from training participants at the end of the session to see what they learned during the training event. The trainers might ask participants to rate how much they learned about each topic covered in the training (nothing, a little, some, a lot, very much) or they might present participants with a quiz to test their post-training knowledge of content taught in the session. The post-only design diagram is the same as what we saw with the single case study; the only difference is that the sample size is greater—it includes everyone who completed the evaluation form at the end of the training session rather than just a single case.
The post-intervention only design is cross-sectional in nature (only one data collection point with each participant). This design strategy is extremely vulnerable to internal validity threats. The investigator does not know if the group’s knowledge changed compared to before the training session: participants quizzed on knowledge may already have known the material before the training; or, a perception of how much they learned may not accurately depict how much they learned. The study design does not inform the investigators if the participants’ learning persisted over time after completing the training. The investigators also do not have a high level of confidence that the training session was the most likely cause of any changes observed—they cannot rule out other possible explanations.
In response to the internal validity threat concerning ability to detect change with an intervention, an investigator might ask study participants to compare themselves before and after the intervention took place. That would still be a simple post-only design because there is only one time point for data collection: post-intervention. This kind of retrospective approach is vulnerable to bias because it relies on an individual’s ability to accurately recall the past and make a valid comparison to the present, a comparison that hopefully is not influenced by their present state-of-mind. It helps to remember what you learned from SWK 3401 about the unreliability of the information individuals remember and how memories become influenced by later information and experiences.
#3: Pre-/Post- Intervention Comparison.
A wiser choice in terms of internal validity would be to directly compare data collected at the two points in time: pre-intervention and post-intervention. This pre-/post-design remains a pre-experimental design because it lacks a comparison (control) group. Because the data are collected from the same individuals at two time points, this strategy is considered longitudinal in nature. This type of pre-/post-intervention design allows us to directly identify change where observed differences on a specific outcome variable might be attributed to the intervention. A simple pre-/post- study design could be diagrammed like this:
Here we see an intervention X (perhaps our social work in-service training example), where data were still collected after the intervention (perhaps a knowledge and skills quiz). However, the investigators also collected the same information prior to the intervention. This allowed them to compare data for the two observation periods, pre- and post- intervention. See the arrow added to the diagram that shows this comparison:
While this pre-/post- intervention design is stronger than a post-only design, it also is a bit more “costly” to implement since there is an added data collection point. It imposes an additional burden on participants, and in some situations, it simply might not be possible to collect that pre-intervention data. This design strategy still suffers from the other two internal validity concerns: we do not know if any observed changes persisted over time, and we do not have the highest level of confidence that the changes observed can be attributed to the intervention itself.
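To see what the pre-/post- comparison buys the investigator, here is a minimal sketch using invented quiz scores for the training example (all numbers and variable names are hypothetical):

```python
from statistics import mean

# Hypothetical pre- and post-training quiz scores (O1 and O2) for the
# same five participants; the lists are paired by participant.
pre_scores  = [52, 60, 55, 48, 63]   # O1: before the training (X)
post_scores = [70, 74, 66, 61, 72]   # O2: after the training

# The pre-/post- design compares each person to themselves over time.
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = mean(changes)
print(f"Mean pre-to-post change: {mean_change:.1f} points")
```

In practice, a paired-samples statistical test would be used to judge whether such a change is larger than chance; this sketch only shows the direct comparison that the added pre-intervention data point makes possible.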
#4: Pre-/Post-/Follow-Up Comparison.
Investigators can improve on the pre-/post- study design by adding a follow-up observation. This allows them to determine whether any changes observed between the pre- and post- conditions persisted or disappeared over time. While investigators may be delighted to observe a meaningful change between the pre- and post- intervention periods, if these changes do not last over time, then intervention efforts and resources may have been wasted. This pre-/post-/follow-up design remains in the pre-experimental category, and would be diagrammed like this:
Here we see the intervention (X), with pre-intervention data, post-intervention data, and follow-up data being collected (O1, O2, and O3). As an example, Wade-Mdivanian, Anderson-Butcher, Newman, and Ruderman (2016) explored the impact of delivering a preventive intervention about alcohol, tobacco and other drugs in the context of a positive youth development program called Youth to Youth International. The outcome variables of interest were the youth leaders’ knowledge, attitudes, self-efficacy, and leadership before the program, at the program’s conclusion, and six months after completing the program. The authors concluded that positive changes in knowledge and self-efficacy outcomes were observed in the pre-/post- comparison, though no significant change was observed in attitudes about alcohol, tobacco and other drugs; these differences persisted at the six-month follow-up. See the pre-/post-/follow-up design diagrammed with comparison arrows added:
This design resolved two of the three internal validity challenges, although at an increased cost of time, effort, and possibly other resources with the added data point. But an investigator would still lack confidence that observed changes were due to the intervention itself. Let’s look at some design strategies that focus on that particular challenge.
#5: Comparison Groups.
In our prior course we learned how to compare groups that differed on some characteristic, like gender for example. Comparison groups in intervention research allow us to compare groups where the difference lies in which intervention condition each received. By providing an experimental intervention to one group and not to the other group, investigator confidence increases about observed changes being related to the intervention. You may have heard this second group described as a control group. They are introduced into the study design to provide a benchmark for comparison with the status of the intervention group. The simplest form of a comparison group design, a quasi-experimental post-only group design, can be diagrammed as follows:
Consider the possibility, in our earlier example of evaluating a social work training intervention, that the team decided to expand their post-only design to include collecting data from a group of social workers who are going to get their training next month. On the same day that the data were collected from the trained group, the team collected data from the untrained social workers, as well. This situation is a post-only design where the top row shows the group who received the training intervention (X) and the outcome was measured (O), and the bottom row shows the group without the training intervention (no X was applied) also being measured at the same point in time as the intervention group (O). This remains a cross-sectional study because each individual was only observed once. The following diagram shows the arrow where investigators compared the two groups on the outcome variables (knowledge and skills quiz scores, using our training example). If the training intervention is responsible for the outcome, the team would see a significant difference in the outcome data when comparing the two groups, hopefully in the direction of the trained group having better scores.
While this design has helped boost investigator confidence that the outcomes observed with the intervention group are likely due to the intervention itself, this post-only group design “costs” more than a post-only single group (pre-experimental) study design. It also still suffers from a significant concern: how would an investigator know if the differences between the two groups appeared only after the intervention, or whether the differences could always have existed, with or without the intervention? With this design, that possibility cannot be ignored. An investigator can only hope that the two groups were equivalent prior to the intervention. Two internal validity questions remain unanswered: whether the outcome scores actually demonstrate a change resulting from the intervention, and whether any observed changes persisted over time. Let’s consider some other combination strategies that might help, even though they may “cost” considerably more to implement.
#6: Comparison Group Pre-/Post- Design.
A giant leap forward in managing internal validity concerns comes from combining the strategy of having both an intervention and a comparison group (which makes it quasi-experimental) with the strategy of collecting data both before and after the intervention (which makes it longitudinal). Now investigators are able to address more of the major validity concerns. This comparison group pre-/post- design is diagrammed as follows:
What the investigators have done here is collect data from everyone in their study, both groups, prior to delivering the intervention to one group and not to the other group (control group). Then, they collected the same information at the post-intervention time period for everyone in their study, both groups. The power in this design can be seen in the following diagrams that include arrows for the kinds of longitudinal pre-/post- comparisons that can be assessed, measuring change.
This is a pre-/post- comparison, indicating if change occurred with the intervention group after the intervention. Investigators would hope the answer is “yes” if it is believed that the intervention should make a difference. Similarly, the investigators could compare the non-intervention group at the two observation points (the lower arrow). Hopefully, there would be no significant change without the intervention.
You might be wondering, “Why would there be change in the no intervention group when nothing has been done to make them change?” Actually, with the passage of time, several possible explanatory events or processes could account for change.
- The first is simple maturation . Particularly with young children, developmental change happens over relatively short periods of time even without intervention.
- Similarly, the passage of time might account for symptom improvement even without intervention. Consider, for example, the old adage that, “if you treat a common cold it will take 7 to 10 days to get better; if you don’t treat it, it will take a week to a week-and-a-half.” Change without intervention is called spontaneous or natural change. Either way, there is change—with or without intervention.
- Third, given time, individuals in the no intervention group might seek out and receive other interventions not part of the study that also can produce change. This could be as simple as getting help and support from friends or family members; or, seeking help and advice on the internet; or, enrolling in other informal or formal treatment programs at the same time.
The benefit of combining the comparison groups and pre-/post- designs continues to emerge as we examine the next diagram showing comparisons an investigator can consider:
This group comparison of post-intervention results indicates whether there is a difference in outcomes between people who received the intervention and people who did not. Again, investigators would hope the answer is “yes,” and that the observed difference favors the intervention group. But with this design, an investigator can go even further in ruling out another possible explanation for outcome differences. Consider the power of this comparison:
By comparing the two groups BEFORE the intervention took place, an investigator can hopefully rule out the possibility that post-intervention group differences were actually a reflection of pre-existing differences; differences that existed prior to the intervention. In this case, the investigator would hope to observe no significant differences in this comparison. This “no differences” result would boost confidence in the conclusion that any observed post-intervention differences were a function of the intervention itself (since there were no pre-existing differences). Not being able to rule out this possibility was one limitation of the post-only group comparison strategy discussed earlier.
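The comparisons illustrated by the arrows can be summarized numerically as a "difference of differences." The following sketch uses invented outcome scores (all data, group sizes, and variable names are hypothetical) to show the computation:

```python
from statistics import mean

# Hypothetical outcome scores (e.g., quiz results) for a quasi-experimental
# comparison group pre-/post- design. Group membership was NOT randomized.
intervention = {"pre": [50, 55, 48, 52], "post": [68, 72, 65, 70]}
comparison   = {"pre": [51, 54, 49, 53], "post": [53, 56, 50, 54]}

# Within-group change over time (the longitudinal arrows in the diagram):
change_intervention = mean(intervention["post"]) - mean(intervention["pre"])
change_comparison   = mean(comparison["post"]) - mean(comparison["pre"])

# Difference-in-differences: change in the intervention group beyond
# whatever change occurred without the intervention.
did = change_intervention - change_comparison
print(f"Intervention group change: {change_intervention:.2f}")
print(f"Comparison group change:   {change_comparison:.2f}")
print(f"Difference of differences: {did:.2f}")
```

Note that the similar pre-intervention means in the two groups correspond to the "no pre-existing differences" check described above; if the pre-intervention means had differed substantially, the post-intervention comparison alone would be hard to interpret.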
A Note About Non-Treatment, Placebo, and Treatment as Usual Comparison Groups.
What we have diagrammed above is a situation where the second group received no treatment at all. This, however, is problematic in three ways. First, the ethics of intentionally not serving individuals who are seeking help with a social work problem are concerning. Second, the scientific integrity of such studies tends to suffer because non-treatment “control” groups tend to have high rates of study drop-out (attrition) as those participants seek help elsewhere. Third, the results of these studies are coming under great scrutiny as the world of behavioral science has come to realize that any treatment is likely to be better than no treatment—thus, study results where the tested intervention is significantly positive may be grossly over-interpreted. What appears to be a fantastic innovation may be no better than other available treatments.
Medical studies often include a comparison group who receives a “fake” or neutral form of a medication or other intervention. In medicine, a placebo is an inert “treatment” with a substance that has no known effect on the condition being studied, such as a pill made of sugar, for example. The approximate equivalent in behavioral science is an intervention where clients are provided only with basic, factual information about the condition; nothing too empowering. In theory, neither the placebo medication nor the simple educational materials are expected to promote significant change. This is a slight variation on the non-treatment control condition. However, over 20 years of research provided evidence of a placebo effect that cannot be discounted. In one systematic review and meta-analysis study (Howick et al., 2016), the effects associated with placebos were no different from, or even larger than, treatment effects. This placebo effect is, most likely, associated with the psychological principles of motivation and expectancies—expecting something to work has a powerful impact on behavior and outcomes, particularly with regard to symptoms of nausea and pain (Cherry, 2018). Of course, this means that participants receiving the placebo believe they are (or could be) receiving the therapeutic intervention. Also interesting to note is that participants sometimes report negative side-effects with exposure to the placebo treatment.
In medication trials, the introduction of a placebo may allow investigators to impose a double-blind structure on the study. A double-blind study is one where neither the patient/client nor the practitioner/clinician knows if the person is receiving the test medication or the placebo condition. The double-blind structure is imposed as a means of reducing bias in the study results arising from either patient or practitioner/clinician beliefs about the experimental medication. However, it is difficult to disguise behavioral interventions from practitioners—it is not as simple as creating a real-looking pill or making distilled water look like medicine.
More popular in intervention science today is the use of a treatment as usual (TAU) condition. In a TAU study, the comparison group receives the same treatment that would have been provided without the study being conducted. This resolves the ethical concern of intentionally denying care to someone seeking help simply to fulfill demands of a research design. It also helps resolve the study integrity concerns mentioned earlier regarding non-treatment studies. But this design looks a little bit different from the diagram of a comparison group design. You will notice that there is a second intervention “X” symbol in the diagram and that the new X has a subscript notation (TAU) so you can tell the two intervention groups apart, X and XTAU.
#7: Single-System Design.
A commonly applied means of evaluating practice is to employ a quasi-experimental single-system design. This approach uniquely combines aspects of the case study with aspects of the pre-experimental pre-/post- design. Like the case study, the data are collected for one single case at a time—whether the element or unit of study is an individual, couple, family, or larger group, the data represent the behavior of that element over time. In that sense, the single-system design is longitudinal—repeated measurements are drawn for the same element each time. It is a quasi-experimental design in that conditions are systematically varied, and outcomes measured for each variation. This approach to evaluating practice is often referred to as a single-subject design. However, the fact that the “subject” might be a larger system is lost in that label, hence a preference for the single-“system” design label.
The single-system design is sufficiently unique in implementation that an entirely different notation system is applied. Instead of the previous Xs and Os, we will be using A’s and B’s (even C’s and D’s). The first major distinction is that instead of a single pre-intervention measurement (what we called O1) we need a pre-intervention baseline. By definition, a line requires at least two points; thus, to establish a baseline, at least two and preferably at least 7 pre-intervention measurement points are utilized. For example, if the target of intervention with a family is that they spend more activity time together, a practitioner might have them maintain a daily calendar on which the number of minutes spent in activity time is recorded for each of 7 days. This would be used as a baseline indication of the family’s behavioral norm. Thus, since there are 7 measurement points replacing the single pre-intervention observation (O1), we designate this as A1, A2, A3 and so forth to A7 for days 1 through 7 during the baseline week–this might extend to A30 for a month of baseline recording.
Next, we continue to measure the target behavior during the period of intervention. Because the intervention period is different from the baseline period, we use the letter B to indicate the change. This corresponds to the single point in our group designs where we used the letter X to designate the intervention. Typically, in a pre-/post- intervention study no data are collected during the intervention, X, which is another difference in single-system designs. Let’s say that the social worker’s intervention was to text the family members an assigned activity for each of 10 days, and they continued to record the number of minutes when they were engaged in family activity time each day. Because it is a series of measurement points, we use B1, B2, B3 and so forth for the duration of the intervention, which is B10 in this case. The next step is to remove the daily assignment text messages, giving them just one menu on the first of the next 7 days, from which the family is expected to pick an activity and continue to record time spent in family activity time each day. This would be a different form of intervention from the “B” condition, so it becomes C1, C2, C3, and so forth to the end, C7. Finally, the social worker no longer sends any cues, and the family simply records their daily family activity time for another week. This “no intervention” condition is the same as what happened at baseline before there was any intervention. So, the notation reverts back to A’s, but this time it is A8, A9, A10, and so forth to the end of the observation period, A14. For this reason, single-system design studies are often referred to as “ABA” designs (initial baseline, an intervention line, and a post-intervention line) or, in our example, an “ABCA” design since there was a second intervention after the “B” condition.
This single-system design notation differs from the notation using Xs and Os because data are collected multiple times during each phase, rather than at single points before, during, and after the intervention.
The data could be presented in a table, but a visual graph depicts the trends in a more concise, communicative manner. What the practitioner aims to do with the family is examine the pattern of behavior, in as objective a manner as possible, under the different manipulated conditions. Here is a graphical representation from one hypothetical family. As you can see, there is natural variation in the family's behavior that would be missed if we simply used a single weekly value for the pre- and post-intervention periods instead.
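To make the notation concrete, here is a small Python sketch of what the recorded data behind such a graph might look like, with a per-phase summary. All of the daily-minute values below are invented for illustration; only the ABCA phase structure comes from the example above.

```python
# Hypothetical single-system (ABCA) data: daily minutes of family activity
# time across the four phases described above. Every value is invented;
# only the phase structure mirrors the example in the text.
from statistics import mean

phases = {
    "A (baseline, A1-A7)":      [10, 0, 15, 5, 0, 20, 10],
    "B (daily texts, B1-B10)":  [30, 25, 40, 20, 35, 30, 45, 25, 30, 40],
    "C (activity menu, C1-C7)": [25, 20, 0, 30, 15, 35, 20],
    "A (no cues, A8-A14)":      [20, 15, 0, 25, 10, 30, 15],
}

for label, minutes in phases.items():
    # The phase mean is one summary a practitioner might read off the graph,
    # alongside the day-to-day variation within each phase.
    print(f"{label}: mean = {mean(minutes):.1f} min/day, "
          f"range = {min(minutes)}-{max(minutes)}")
```

Comparing the phase means (and the spread of values within each phase) is a rough stand-in for the visual inspection a practitioner and family would do together with the graph.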
Together, the practitioner and the client family can discuss what they see in the data. For example, what makes Wednesday family activity time particularly difficult to implement, and does it matter given what happens on the other days? How does it feel as a family to have the greater activity days, and is that rewarding to them? What happens when the social worker is no longer prompting them to engage in activities, and how can they sustain their gains over time without outside intervention? What new skills did they learn that support sustainability? What will be their cue that a "booster" might be necessary? What the data allow is a clear evaluation of the intervention with this family, which is the main purpose of practice evaluation using the single-system design approach.
#8: Randomized Controlled Trial (RCT) Pre-/Post- Design.
The major difference between the comparison group pre-/post- design just discussed and the randomized controlled trial (RCT) with a pre-/post- design is that investigators do not have to rely on hoping that the two comparison groups were initially equivalent: they randomly assign study participants to the two groups in an attempt to ensure that this is true. There is still no guarantee of initial group equivalence; this still needs to be assessed. But the investigators are less vulnerable to the bad luck of the two comparison groups being initially different. This is what the RCT design looks like in a diagram:
Many authors of research textbooks describe the RCT as the "gold standard" of intervention research design because it addresses so many possible internal validity concerns. It is, indeed, a powerful experimental design. However, it is important to recognize that this type of design comes at a relatively high "cost" compared to some of the others we have discussed. Because two groups are being compared, there are more study participant costs involved than in the single-group designs. Because data are collected at two or more points in time, there are more data collection and participant retention costs involved than in the single-time-point, post-only designs. And random assignment to experimental conditions or groups is not always feasible in real-world intervention situations. For example, it is difficult to apply random assignment where placement is determined by court order, making it difficult to use an RCT design to compare a jail diversion program (experimental intervention) with incarceration (treatment-as-usual control group). For these practical reasons, programs often settle on group comparison designs without random assignment in their evaluation efforts.
Back to Basics: Interpreting Statistics
Remember how to interpret statistics in a research report. The investigators reported that the mean reduction in AUDIT-12 scores was significantly greater for the SBI intervention group than for the TAU group on a one-way ANOVA: M = 18.83 and 13.52, respectively; F(1, 148) = 6.34, p < .01. This statement indicates the following:
- The mean (M) for the innovative SBI intervention group = 18.83.
- The mean (M) for the treatment as usual (TAU) group = 13.52.
- The computed F statistic in the analysis of variance (ANOVA) = 6.34.
- Degrees of freedom for the ANOVA were 1 (2 groups minus 1) and 148 (the total number of cases minus the number of groups): df = (1, 148).
- The F-distribution test statistic at those degrees of freedom is significant at a level below the criterion of p < .05.
- We reject the null hypothesis of no difference between the groups, concluding that a significant difference exists.
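For readers who want to see where the F statistic and the degrees of freedom come from, here is a minimal one-way ANOVA computed from scratch. The two groups and their scores below are invented for illustration and are far smaller than the actual study sample; only the arithmetic mirrors the analysis described above.

```python
# A from-scratch one-way ANOVA on invented data (not the study's actual scores).
def one_way_anova(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total number of cases
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: how far group means sit from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: how far scores sit from their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = k - 1                           # groups minus 1
    df_within = n - k                            # cases minus groups
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

sbi = [20, 18, 22, 17, 19]   # invented AUDIT-12 reductions, intervention group
tau = [14, 12, 15, 13, 11]   # invented AUDIT-12 reductions, comparison group
f, df1, df2 = one_way_anova(sbi, tau)
print(f"F({df1}, {df2}) = {f:.2f}")   # larger F means group means differ more
```

Note how the degrees of freedom fall out of the group and case counts: with two groups, df_between is always 1, and df_within is the total number of cases minus the number of groups.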
A slight variant on this study design was used to compare a screening and brief intervention (SBI) for substance misuse problems to the treatment as usual (TAU) condition among 729 women preparing for release from jail, measured again during early reentry to community living (Begun, Rose, & LeBel, 2011). One-third of the women who had positive screening results for a potential substance use disorder were randomly assigned to the treatment as usual condition (XTAU), and two-thirds to the SBI experimental intervention condition (X), as diagrammed below. The investigators followed 149 women three months into post-release community reentry (the follow-up observation) and found that women receiving the innovative SBI intervention had better outcomes for drinking and drug use during early reentry (three months post-release): the mean difference in scores on the AUDIT-12 screening instrument prior to jail compared to reentry after release was more than 5 points (see the Back to Basics box for more details). The randomized controlled trial with a pre/post/follow-up study design looked like this:
#9: Randomized Controlled Trial (RCT) Comparing Two (or More) Interventions.
The only difference between the design just discussed and the randomized controlled trial that compares two interventions is the indicator associated with the intervention symbol, X. The X1 and X2 refer to two different interventions, the same way the X and XTAU reflected two different intervention conditions above. This kind of design is used to determine which of two interventions has greater effectiveness, especially if there is a considerable difference in the cost of delivering them. The results can help with a cost-benefit comparison, which program administrators and policy decision-makers use to help decide which program to fund. It is possible to compare more than two interventions by simply adding additional lines for each group (X3, X4, and so on). Each group added, however, also adds considerably to the "costs" of conducting the study.
The COMBINE Project was a historic example in which multiple intervention approaches were compared for their effectiveness in treating alcohol dependence (see the NIAAA overview of COMBINE at https://pubs.niaaa.nih.gov/publications/combine/overview.htm ).
Two medications (acamprosate and naltrexone) were compared, along with medical management counseling for adherence to the medication protocol (MM); medication with MM was also compared to medication with MM plus a cognitive behavioral psychotherapy intervention (CBI). Clients were randomly assigned to 9 groups with approximately 153 clients per group:
X1: acamprosate placebo + naltrexone placebo + MM (no CBI)
X2: acamprosate + naltrexone placebo + MM (no CBI)
X3: naltrexone + acamprosate placebo + MM (no CBI)
X4: acamprosate + naltrexone + MM (no CBI)
X5: acamprosate placebo + naltrexone placebo + MM + CBI
X6: acamprosate + naltrexone placebo + MM + CBI
X7: naltrexone + acamprosate placebo + MM + CBI
X8: acamprosate + naltrexone + MM + CBI
X9: no pills or MM; CBI only
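Notice that the first eight groups form a 2 x 2 x 2 factorial (acamprosate active vs. placebo, naltrexone active vs. placebo, with vs. without CBI), with the ninth group sitting outside the factorial. A short sketch can enumerate the cells, which helps when mapping group labels to conditions; the label strings are ours, not the project's, and the drug order within each label is normalized.

```python
# Enumerate the COMBINE group structure: a 2x2x2 factorial plus one extra arm.
groups = []
for cbi in (False, True):
    # Medication cells, in the order the groups were listed above
    for acam, nalt in [(False, False), (True, False), (False, True), (True, True)]:
        parts = [
            "acamprosate" if acam else "acamprosate placebo",
            "naltrexone" if nalt else "naltrexone placebo",
            "MM",
        ]
        if cbi:
            parts.append("CBI")
        groups.append(" + ".join(parts))
groups.append("no pills or MM; CBI only")   # X9, outside the factorial

for i, label in enumerate(groups, start=1):
    print(f"X{i}: {label}")
```

Enumerating the cells this way also makes the "cost" point concrete: every added factor doubles the number of factorial groups, and each group needs its own roughly 153 participants.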
The COMBINE Project was an extremely "costly" study to conduct, given the number of study participants needed to meet the design's demands, which is one reason why it was a collaboration across multiple study sites. Participants in all 9 groups showed at least some improvement as measured by a reduction in drinking; intervention is almost always an advantage over no intervention. The poorest results were observed for the last group (X9), those receiving the specialty cognitive behavioral intervention (CBI) alone, with no medication or placebo medication. Surprisingly, the best outcomes were observed for the group receiving CBI with both placebo medications and medication management (MM) counseling (X5)! The other group with the best outcomes was the group receiving naltrexone with MM counseling but no CBI (X3). The investigative team concluded that pharmacotherapy combined with medication counseling can yield clinically significant alcohol treatment outcomes, and that this can be delivered in primary care settings where specialty alcohol treatment is unavailable (Pettinati, Anton, & Willenbring, 2006). They were also surprised by the lack of observed advantage for the medication acamprosate, because it had so much evidential support from studies conducted in European countries. This map of the design does not show the multiple follow-up observations conducted in the study.
#10: Comparison Group with a Delayed Intervention Condition.
One way of overcoming the ethical dilemma of an intervention/no-intervention study design is to offer the intervention to the second group later. Not only does this more evenly distribute the risks and benefits potentially experienced across both groups, it also makes sense scientifically and practically, because it allows another set of comparison conditions to be analyzed longitudinally for a relatively small additional "cost." Here is what this design strategy might look like using our study design notation:
Consider all the information made available through this design.
This study design was used to examine the effectiveness of equine-assisted therapy for persons who have dementia (Dabelko-Schoeny, et al., 2014). The study engaged 16 participants with Alzheimer's disease in activities with four horses (grooming, observing, interacting, leading, and photographing). Study participants were randomly assigned to two groups: one received the intervention immediately; the other did not, receiving the intervention later, after it had been withdrawn from the first group. When not receiving the equine-assisted therapy, participants received the usual services. Thus, the investigators were able to make multiple types of comparisons and to ensure that all participants had the opportunity to experience the novel intervention. The team reported that the equine-assisted therapy was significantly associated with lower rates of problematic/disruptive behavior exhibited by the participants in their nursing homes and with higher levels of "good stress" (a sense of exhilaration or accomplishment) as measured by salivary cortisol levels.
Major disadvantages of this study design:
- it may be that social work interventions are best delivered right away, when the need arises, and that their impact is diminished when delivered after a significant delay;
- the design is costlier to implement than either design alone might be, because it requires more participants and that they be retained over a longer period of time.
Chapter Conclusion
As you can see, social work professionals and investigators have numerous options available to them in planning how to study the impact of interventions and evaluate their practices, programs, and policies. Each option has pros and cons, advantages and disadvantages, costs and benefits that need to be weighed in making the study design decisions. New decision points arise with planning processes related to study measurement and participant inclusion. These are explored in our next chapters.
Social Work 3402 Coursebook Copyright © by Dr. Audrey Begun is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.