
Quasi-Experimental Design | Definition, Types & Examples

Published on July 31, 2020 by Lauren Thomas. Revised on January 22, 2024.

Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable.

However, unlike a true experiment, a quasi-experiment does not rely on random assignment. Instead, subjects are assigned to groups based on non-random criteria.

Quasi-experimental design is a useful tool in situations where true experiments cannot be used for ethical or practical reasons.


Table of contents

  • Differences between quasi-experiments and true experiments
  • Types of quasi-experimental designs
  • When to use quasi-experimental design
  • Advantages and disadvantages
  • Other interesting articles
  • Frequently asked questions about quasi-experimental designs

There are several common differences between true and quasi-experimental designs.

True experimental design vs. quasi-experimental design:

  • Assignment to treatment: In a true experiment, the researcher randomly assigns subjects to control and treatment groups. In a quasi-experiment, some other, non-random method is used to assign subjects to groups.
  • Control over treatment: In a true experiment, the researcher usually designs the treatment. In a quasi-experiment, the researcher often does not design the treatment, but instead studies pre-existing groups that received different treatments after the fact.
  • Use of control groups: A true experiment requires the use of control and treatment groups. In a quasi-experiment, control groups are not required (although they are commonly used).

Example of a true experiment vs a quasi-experiment

Suppose you want to study whether a new therapy improves patients' symptoms more than the standard course of treatment at a mental health clinic. For ethical reasons, however, the directors of the clinic may not give you permission to randomly assign their patients to treatments. In this case, you cannot run a true experiment.

Instead, you can use a quasi-experimental design.

You can use these pre-existing groups to study the symptom progression of the patients treated with the new therapy versus those receiving the standard course of treatment.


Many types of quasi-experimental designs exist. Here we explain three of the most common types: nonequivalent groups design, regression discontinuity, and natural experiments.

Nonequivalent groups design

In a nonequivalent groups design, the researcher chooses existing groups that appear similar, but where only one of the groups experiences the treatment.

In a true experiment with random assignment, the control and treatment groups are considered equivalent in every way other than the treatment. But in a quasi-experiment where the groups are not random, they may differ in other ways—they are nonequivalent groups.

When using this kind of design, researchers try to account for any confounding variables by controlling for them in their analysis or by choosing groups that are as similar as possible.

This is the most common type of quasi-experimental design.

Regression discontinuity

Many potential treatments that researchers wish to study are designed around an essentially arbitrary cutoff, where those above the threshold receive the treatment and those below it do not.

Near this threshold, the differences between the two groups are often so minimal as to be nearly nonexistent. Therefore, researchers can use individuals just below the threshold as a control group and those just above as a treatment group.

For example, suppose a school admits only students who score above a certain cutoff on an entrance exam. Since the exact cutoff score is arbitrary, the students near the threshold—those who just barely pass the exam and those who fail by a very small margin—tend to be very similar, with the small differences in their scores mostly due to random chance. You can therefore conclude that any outcome differences must come from the school they attended.
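To make this logic concrete, here is a minimal simulation sketch of a sharp regression discontinuity analysis in Python. The exam scores, the cutoff of 60, the bandwidth, and the outcome model are all invented for illustration; this is not an analysis from the article, only a sketch of the technique it describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: entrance-exam scores and a later outcome.
# Students scoring at or above the cutoff attend the selective school.
cutoff = 60
scores = rng.uniform(30, 90, size=2_000)
attended = scores >= cutoff
# Simulated truth (for this sketch only): outcome rises with ability and
# gets a +5 boost from attending the selective school.
outcome = 0.5 * scores + 5.0 * attended + rng.normal(0, 3, size=scores.size)

# Sharp RD estimate: fit separate linear trends just below and just above
# the cutoff (bandwidth of 10 points) and take the jump at the threshold.
bw = 10
below = (scores >= cutoff - bw) & (scores < cutoff)
above = (scores >= cutoff) & (scores < cutoff + bw)

def value_at_cutoff(x, y):
    """Least-squares line y = a + b*(x - cutoff); return the fitted value at the cutoff."""
    X = np.column_stack([np.ones_like(x), x - cutoff])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    return a

effect = value_at_cutoff(scores[above], outcome[above]) - value_at_cutoff(scores[below], outcome[below])
print(f"Estimated effect at the cutoff: {effect:.2f} (simulated true effect: 5.0)")
```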

Natural experiments

In both laboratory and field experiments, researchers normally control which group the subjects are assigned to. In a natural experiment, an external event or situation (“nature”) results in the random or random-like assignment of subjects to the treatment group.

Even though some natural experiments involve random or as-if random assignment, they are not considered true experiments because they are observational in nature.

Although the researchers have no control over the independent variable , they can exploit this event after the fact to study the effect of the treatment.

In the Oregon Health Study described below, for example, the state government could not afford to cover everyone it deemed eligible for the health insurance program, so it instead allocated spots in the program based on a random lottery.

Although true experiments have higher internal validity, you might choose to use a quasi-experimental design for ethical or practical reasons.

Sometimes it would be unethical to provide or withhold a treatment on a random basis, so a true experiment is not feasible. In this case, a quasi-experiment can allow you to study the same causal relationship without the ethical issues.

The Oregon Health Study is a good example. It would be unethical to randomly provide some people with health insurance but purposely prevent others from receiving it solely for the purposes of research.

However, since the Oregon government faced financial constraints and decided to provide health insurance via lottery, studying this event after the fact is a much more ethical approach to studying the same problem.

True experimental design may be infeasible to implement or simply too expensive, particularly for researchers without access to large funding streams.

At other times, too much work is involved in recruiting and properly designing an experimental intervention for an adequate number of subjects to justify a true experiment.

In either case, quasi-experimental designs allow you to study the question by taking advantage of data that has previously been paid for or collected by others (often the government).

Quasi-experimental designs have various pros and cons compared to other types of studies.

  • Higher external validity than most true experiments, because they often involve real-world interventions instead of artificial laboratory settings.
  • Higher internal validity than other non-experimental types of research, because they allow you to better control for confounding variables than other types of studies do.
  • Lower internal validity than true experiments—without randomization, it can be difficult to verify that all confounding variables have been accounted for.
  • The use of retrospective data that has already been collected for other purposes can be inaccurate, incomplete or difficult to access.


If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.
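As an illustration of the idea, here is a minimal sketch of simple random assignment in Python. The participant IDs and the 50/50 split are invented for illustration.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical sample of 20

random.seed(42)
random.shuffle(participants)            # randomize the order of participants
midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```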

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment.

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity as they can use real-world interventions instead of artificial laboratory settings.


Thomas, L. (2024, January 22). Quasi-Experimental Design | Definition, Types & Examples. Scribbr. Retrieved June 26, 2024, from https://www.scribbr.com/methodology/quasi-experimental-design/


14 - Quasi-Experimental Research

from Part III - Data Collection

Published online by Cambridge University Press:  25 May 2023

In this chapter, we discuss the logic and practice of quasi-experimentation. Specifically, we describe four quasi-experimental designs – one-group pretest–posttest designs, non-equivalent group designs, regression discontinuity designs, and interrupted time-series designs – and their statistical analyses in detail. Both simple quasi-experimental designs and embellishments of these simple designs are presented. Potential threats to internal validity are illustrated along with means of addressing their potentially biasing effects so that these effects can be minimized. In contrast to quasi-experiments, randomized experiments are often thought to be the gold standard when estimating the effects of treatment interventions. However, circumstances frequently arise where quasi-experiments can usefully supplement randomized experiments or when quasi-experiments can fruitfully be used in place of randomized experiments. Researchers need to appreciate the relative strengths and weaknesses of the various quasi-experiments so they can choose among pre-specified designs or craft their own unique quasi-experiments.


  • Quasi-Experimental Research
  • By Charles S. Reichardt, Daniel Storage, Damon Abraham
  • Edited by Austin Lee Nichols, Central European University, Vienna, and John Edlund, Rochester Institute of Technology, New York
  • Book: The Cambridge Handbook of Research Methods and Statistics for the Social and Behavioral Sciences
  • Online publication: 25 May 2023
  • Chapter DOI: https://doi.org/10.1017/9781009010054.015



7.3 Quasi-Experimental Research

Learning Objectives

  • Explain what quasi-experimental research is and distinguish it clearly from both experimental and correlational research.
  • Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one.

The prefix quasi means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here.

Nonequivalent Groups Design

Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A nonequivalent groups design , then, is a between-subjects design in which participants have not been randomly assigned to conditions.

Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students and a control group consisting of another class of third-grade students. This would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams’s class. Or the principal might have assigned the “troublemakers” to Mr. Jones’s class because he is a stronger disciplinarian. Of course, the teachers’ styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes’ knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.

Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.
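One common way to "control for" pre-existing differences in a nonequivalent groups design is to include a pretest covariate, such as a standardized math score, in a regression model. The sketch below uses simulated data and invented parameter values; it illustrates the adjustment step only and is not the textbook's own analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: two intact classes (not randomly assigned).
# The treatment class happens to start with slightly higher pretest scores.
n = 30
pretest_treat = rng.normal(72, 8, n)   # class taught with the new method
pretest_ctrl = rng.normal(68, 8, n)    # class taught with the usual method
true_effect = 4.0
post_treat = 0.9 * pretest_treat + true_effect + rng.normal(0, 5, n)
post_ctrl = 0.9 * pretest_ctrl + rng.normal(0, 5, n)

# A naive comparison of posttest means mixes the treatment effect with the
# pre-existing difference between the classes.
naive = post_treat.mean() - post_ctrl.mean()

# Covariate-adjusted estimate: regress posttest on a treatment indicator plus pretest.
y = np.concatenate([post_treat, post_ctrl])
treat = np.concatenate([np.ones(n), np.zeros(n)])
pretest = np.concatenate([pretest_treat, pretest_ctrl])
X = np.column_stack([np.ones_like(y), treat, pretest])
coef = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"Naive difference in posttest means:  {naive:.2f}")
print(f"Pretest-adjusted treatment estimate: {coef[1]:.2f} (simulated effect: {true_effect})")
```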

Pretest-Posttest Design

In a pretest-posttest design , the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an antidrug education program on elementary school students’ attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the antidrug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.

If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of history . Other things might have happened between the pretest and the posttest. Perhaps an antidrug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it. Another category of alternative explanations goes under the name of maturation . Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might become less impulsive or better reasoners and this might be responsible for the change.

Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean . This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect. A closely related concept—and an extremely important one in psychological research—is spontaneous remission . This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001). Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
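A short simulation makes regression to the mean visible. The numbers below (true skill, measurement noise, the selection cutoff of 55) are invented for illustration; note that no treatment at all is applied between the two tests.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fractions test: each student has a stable "true" skill,
# and each test adds independent measurement noise.
n_students = 1_000
true_skill = rng.normal(70, 10, n_students)
test1 = true_skill + rng.normal(0, 8, n_students)
test2 = true_skill + rng.normal(0, 8, n_students)   # no training effect whatsoever

# Select only the students who scored especially low on the first test.
low_scorers = test1 < 55

print(f"Selected group, test 1 mean: {test1[low_scorers].mean():.1f}")
print(f"Selected group, test 2 mean: {test2[low_scorers].mean():.1f}")
# The second mean is higher even though nothing was done between tests:
# extreme first scores were partly bad luck, which does not repeat.
```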

Does Psychotherapy Work?

Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952). But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate without receiving psychotherapy. This suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here:

http://psychclassics.yorku.ca/Eysenck/psychotherapy.htm

Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980). They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.

Figure: Hans Eysenck (Wikimedia Commons, CC BY-SA 3.0). In a classic 1952 article, Eysenck pointed out the shortcomings of the simple pretest-posttest design for evaluating the effectiveness of psychotherapy.

Interrupted Time Series Design

A variant of the pretest-posttest design is the interrupted time-series design. A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this is “interrupted” by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979). Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.

Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.

Figure 7.5 A Hypothetical Interrupted Time-Series Design. The top panel shows data that suggest that the treatment caused a reduction in absences; the bottom panel shows data that suggest that it did not.
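For readers who want to see how such data might be analyzed, here is a minimal segmented-regression sketch for an interrupted time series, using simulated weekly absence counts loosely modeled on Figure 7.5. The counts, the week of the interruption, and the size of the drop are invented assumptions, not data from the chapter.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical weekly absence counts: 7 weeks before and 7 weeks after the
# instructor starts publicly taking attendance (the "interruption" at week 8).
weeks = np.arange(1, 15)
after = (weeks >= 8).astype(float)
absences = 12 - 4 * after + rng.normal(0, 1.0, weeks.size)  # simulated level drop of 4

# Segmented regression: an overall level, a weekly trend, and a level change
# at the interruption.
X = np.column_stack([np.ones_like(weeks, dtype=float), weeks.astype(float), after])
intercept, trend, level_change = np.linalg.lstsq(X, absences, rcond=None)[0]

print(f"Weekly trend: {trend:+.2f} absences/week")
print(f"Level change at the interruption: {level_change:+.2f} absences")
```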

Combination Designs

A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does not receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve more than participants who do not receive the treatment.

Imagine, for example, that students in one school are given a pretest on their attitudes toward drugs, then are exposed to an antidrug program, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to an antidrug program, and finally are given a posttest. Again, if students in the treatment condition become more negative toward drugs, this could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should become more negative than students in the control condition. But if it is a matter of history (e.g., news of a celebrity drug overdose) or maturation (e.g., improved reasoning), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a student drug overdose), so students at the first school would be affected by it while students at the other school would not.
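The comparison described above amounts to a difference-in-differences calculation: how much more the treatment school changed than the comparison school. Here is a minimal arithmetic sketch with invented attitude scores (the scale and the numbers are hypothetical, not taken from the chapter).

```python
# Hypothetical mean attitude scores (higher = more negative toward drugs),
# measured before and after the program at two schools.
pre_treat, post_treat = 3.1, 4.0    # school that received the antidrug program
pre_ctrl, post_ctrl = 3.0, 3.3      # comparison school, no program

change_treat = post_treat - pre_treat   # treatment effect plus history/maturation
change_ctrl = post_ctrl - pre_ctrl      # history/maturation only

# Difference-in-differences: the extra change in the treatment school.
did = change_treat - change_ctrl
print(f"Treatment school change: {change_treat:+.2f}")
print(f"Control school change:   {change_ctrl:+.2f}")
print(f"Difference-in-differences estimate: {did:+.2f}")
```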

Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi experiment. In fact, it is the kind of experiment that Eysenck called for—and that has now been conducted many times—to demonstrate the effectiveness of psychotherapy.

Key Takeaways

  • Quasi-experimental research involves the manipulation of an independent variable without the random assignment of participants to conditions or orders of conditions. Among the important types are nonequivalent groups designs, pretest-posttest designs, and interrupted time-series designs.
  • Quasi-experimental research eliminates the directionality problem because it involves the manipulation of the independent variable. It does not eliminate the problem of confounding variables, however, because it does not involve random assignment to conditions. For these reasons, quasi-experimental research is generally higher in internal validity than correlational studies but lower than true experiments.
  • Practice: Imagine that two college professors decide to test the effect of giving daily quizzes on student performance in a statistics course. They decide that Professor A will give quizzes but Professor B will not. They will then compare the performance of students in their two sections on a common final exam. List five other variables that might differ between the two sections that could affect the results.

Discussion: Imagine that a group of obese children is recruited for a study in which their weight is measured, then they participate for 3 months in a program that encourages them to be more active, and finally their weight is measured again. Explain how each of the following might affect the results:

  • regression to the mean
  • spontaneous remission

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin.

Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16 , 319–324.

Posternak, M. A., & Miller, I. (2001). Untreated short-term course of major depression: A meta-analysis of outcomes from studies using wait-list control groups. Journal of Affective Disorders, 66, 139–146.

Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy . Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


J Am Med Inform Assoc. 2006 Jan-Feb; 13(1).

The Use and Interpretation of Quasi-Experimental Studies in Medical Informatics


Quasi-experimental study designs, often described as nonrandomized, pre-post intervention studies, are common in the medical informatics literature. Yet little has been written about the benefits and limitations of the quasi-experimental approach as applied to informatics studies. This paper outlines a relative hierarchy and nomenclature of quasi-experimental study designs that is applicable to medical informatics intervention studies. In addition, the authors performed a systematic review of two medical informatics journals, the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI), to determine the number of quasi-experimental studies published and how the studies are classified on the above-mentioned relative hierarchy. They hope that future medical informatics studies will implement higher level quasi-experimental study designs that yield more convincing evidence for causal links between medical informatics interventions and outcomes.

Quasi-experimental studies encompass a broad range of nonrandomized intervention studies. These designs are frequently used when it is not logistically feasible or ethical to conduct a randomized controlled trial. Examples of quasi-experimental studies follow. As one example of a quasi-experimental study, a hospital introduces a new order-entry system and wishes to study the impact of this intervention on the number of medication-related adverse events before and after the intervention. As another example, an informatics technology group is introducing a pharmacy order-entry system aimed at decreasing pharmacy costs. The intervention is implemented and pharmacy costs before and after the intervention are measured.

In medical informatics, the quasi-experimental, sometimes called the pre-post intervention, design often is used to evaluate the benefits of specific interventions. The increasing capacity of health care institutions to collect routine clinical data has led to the growing use of quasi-experimental study designs in the field of medical informatics as well as in other medical disciplines. However, little is written about these study designs in the medical literature or in traditional epidemiology textbooks. 1 , 2 , 3 In contrast, the social sciences literature is replete with examples of ways to implement and improve quasi-experimental studies. 4 , 5 , 6

In this paper, we review the different pretest-posttest quasi-experimental study designs, their nomenclature, and the relative hierarchy of these designs with respect to their ability to establish causal associations between an intervention and an outcome. The example of a pharmacy order-entry system aimed at decreasing pharmacy costs will be used throughout this article to illustrate the different quasi-experimental designs. We discuss limitations of quasi-experimental designs and offer methods to improve them. We also perform a systematic review of four years of publications from two informatics journals to determine the number of quasi-experimental studies, classify these studies into their application domains, determine whether the potential limitations of quasi-experimental studies were acknowledged by the authors, and place these studies into the above-mentioned relative hierarchy.

The authors reviewed articles and book chapters on the design of quasi-experimental studies. 4 , 5 , 6 , 7 , 8 , 9 , 10 Most of the reviewed articles referenced two textbooks that were then reviewed in depth. 4 , 6

Key advantages and disadvantages of quasi-experimental studies, as they pertain to the study of medical informatics, were identified. The potential methodological flaws of quasi-experimental medical informatics studies, which have the potential to introduce bias, were also identified. In addition, a summary table outlining a relative hierarchy and nomenclature of quasi-experimental study designs is described. In general, the higher the design is in the hierarchy, the greater the internal validity that the study traditionally possesses because the evidence of the potential causation between the intervention and the outcome is strengthened. 4

We then performed a systematic review of four years of publications from two informatics journals. First, we determined the number of quasi-experimental studies. We then classified these studies on the above-mentioned hierarchy. We also classified the quasi-experimental studies according to their application domain. The categories of application domains employed were based on categorization used by Yearbooks of Medical Informatics 1992–2005 and were similar to the categories of application domains employed by Annual Symposiums of the American Medical Informatics Association. 11 The categories were (1) health and clinical management; (2) patient records; (3) health information systems; (4) medical signal processing and biomedical imaging; (5) decision support, knowledge representation, and management; (6) education and consumer informatics; and (7) bioinformatics. Because the quasi-experimental study design has recognized limitations, we sought to determine whether authors acknowledged the potential limitations of this design. Examples of acknowledgment included mention of lack of randomization, the potential for regression to the mean, the presence of temporal confounders and the mention of another design that would have more internal validity.

All original scientific manuscripts published between January 2000 and December 2003 in the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI) were reviewed. One author (ADH) reviewed all the papers to identify the number of quasi-experimental studies. Other authors (ADH, JCM, JF) then independently reviewed all the studies identified as quasi-experimental. The three authors then convened as a group to resolve any disagreements in study classification, application domain, and acknowledgment of limitations.

Results and Discussion

What Is a Quasi-experiment?

Quasi-experiments are studies that aim to evaluate interventions but that do not use randomization. Similar to randomized trials, quasi-experiments aim to demonstrate causality between an intervention and an outcome. Quasi-experimental studies can use both preintervention and postintervention measurements as well as nonrandomly selected control groups.

Using this basic definition, it is evident that many published studies in medical informatics utilize the quasi-experimental design. Although the randomized controlled trial is generally considered to have the highest level of credibility with regard to assessing causality, in medical informatics, researchers often choose not to randomize the intervention for one or more reasons: (1) ethical considerations, (2) difficulty of randomizing subjects, (3) difficulty to randomize by locations (e.g., by wards), (4) small available sample size. Each of these reasons is discussed below.

Ethical considerations typically will not allow random withholding of an intervention with known efficacy. Thus, if the efficacy of an intervention has not been established, a randomized controlled trial is the design of choice to determine efficacy. But if the intervention under study incorporates an accepted, well-established therapeutic intervention, or if the intervention has either questionable efficacy or safety based on previously conducted studies, then the ethical issues of randomizing patients are sometimes raised. In the area of medical informatics, it is often believed prior to an implementation that an informatics intervention will likely be beneficial and thus medical informaticians and hospital administrators are often reluctant to randomize medical informatics interventions. In addition, there is often pressure to implement the intervention quickly because of its believed efficacy, thus not allowing researchers sufficient time to plan a randomized trial.

For medical informatics interventions, it is often difficult to randomize the intervention to individual patients or to individual informatics users. So while this randomization is technically possible, it is underused and thus compromises the eventual strength of concluding that an informatics intervention resulted in an outcome. For example, randomly allowing only half of medical residents to use pharmacy order-entry software at a tertiary care hospital is a scenario that hospital administrators and informatics users may not agree to for numerous reasons.

Similarly, informatics interventions often cannot be randomized to individual locations. Using the pharmacy order-entry system example, it may be difficult to randomize use of the system to only certain locations in a hospital or portions of certain locations. For example, if the pharmacy order-entry system involves an educational component, then people may apply the knowledge learned to nonintervention wards, thereby potentially masking the true effect of the intervention. When a design using randomized locations is employed successfully, the locations may be different in other respects (confounding variables), and this further complicates the analysis and interpretation.

In situations where it is known that only a small sample size will be available to test the efficacy of an intervention, randomization may not be a viable option. Randomization is beneficial because on average it tends to evenly distribute both known and unknown confounding variables between the intervention and control group. However, when the sample size is small, randomization may not adequately accomplish this balance. Thus, alternative design and analytical methods are often used in place of randomization when only small sample sizes are available.
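A quick simulation illustrates why randomization balances confounding variables less reliably in small samples. The "severity" covariate, group sizes, and number of trials below are invented for illustration; the point is only that average baseline imbalance shrinks as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_imbalance(n_per_group, n_trials=2_000):
    """Average absolute difference in a baseline covariate (e.g., a severity
    score) between two randomly assigned groups of size n_per_group."""
    diffs = []
    for _ in range(n_trials):
        severity = rng.normal(0, 1, 2 * n_per_group)
        idx = rng.permutation(2 * n_per_group)       # random assignment
        group_a = severity[idx[:n_per_group]]
        group_b = severity[idx[n_per_group:]]
        diffs.append(abs(group_a.mean() - group_b.mean()))
    return float(np.mean(diffs))

for n in (10, 100, 1000):
    print(f"n per group = {n:4d}: mean baseline imbalance ~ {mean_imbalance(n):.3f}")
```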

What Are the Threats to Establishing Causality When Using Quasi-experimental Designs in Medical Informatics?

The lack of random assignment is the major weakness of the quasi-experimental study design. Associations identified in quasi-experiments meet one important requirement of causality since the intervention precedes the measurement of the outcome. Another requirement is that the outcome can be demonstrated to vary statistically with the intervention. Unfortunately, statistical association does not imply causality, especially if the study is poorly designed. Thus, in many quasi-experiments, one is most often left with the question: “Are there alternative explanations for the apparent causal association?” If these alternative explanations are credible, then the evidence of causation is less convincing. These rival hypotheses, or alternative explanations, arise from principles of epidemiologic study design.

Shadish et al. 4 outline nine threats to internal validity, which are listed in the table below. Internal validity is defined as the degree to which observed changes in outcomes can be correctly inferred to be caused by an exposure or an intervention. In quasi-experimental studies of medical informatics, we believe that the methodological principles that most often result in alternative explanations for the apparent causal effect include (a) difficulty in measuring or controlling for important confounding variables, particularly unmeasured confounding variables, which can be viewed as a subset of the selection threat in the table; and (b) results being explained by the statistical principle of regression to the mean. Each of these latter two principles is discussed in turn.

Threats to Internal Validity

1. Ambiguous temporal precedence: Lack of clarity about whether intervention occurred before outcome
2. Selection: Systematic differences over conditions in respondent characteristics that could also cause the observed effect
3. History: Events occurring concurrently with intervention could cause the observed effect
4. Maturation: Naturally occurring changes over time could be confused with a treatment effect
5. Regression: When units are selected for their extreme scores, they will often have less extreme subsequent scores, an occurrence that can be confused with an intervention effect
6. Attrition: Loss of respondents can produce artifactual effects if that loss is correlated with intervention
7. Testing: Exposure to a test can affect scores on subsequent exposures to that test
8. Instrumentation: The nature of a measurement may change over time or conditions
9. Interactive effects: The impact of an intervention may depend on the level of another intervention

Adapted from Shadish et al. 4

An inability to sufficiently control for important confounding variables arises from the lack of randomization. A variable is a confounding variable if it is associated with the exposure of interest and is also associated with the outcome of interest; the confounding variable leads to a situation where a causal association between a given exposure and an outcome is observed as a result of the influence of the confounding variable. For example, in a study aiming to demonstrate that the introduction of a pharmacy order-entry system led to lower pharmacy costs, there are a number of important potential confounding variables (e.g., severity of illness of the patients, knowledge and experience of the software users, other changes in hospital policy) that may have differed in the preintervention and postintervention time periods (see the figure below). In a multivariable regression, the first confounding variable could be addressed with severity of illness measures, but the second confounding variable would be difficult if not nearly impossible to measure and control. In addition, potential confounding variables that are unmeasured or immeasurable cannot be controlled for in nonrandomized quasi-experimental study designs and can only be properly controlled by the randomization process in randomized controlled trials.

Figure: Example of confounding. To get the true effect of the intervention of interest, we need to control for the confounding variable.
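To illustrate the kind of adjustment the authors describe, here is a minimal sketch of a multivariable regression that controls for one measured confounder (severity of illness) in a pre/post comparison of pharmacy costs. All numbers and the data-generating model are simulated assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical monthly data around a pharmacy order-entry implementation.
# Severity of illness (case mix) drifts upward over the same period and also
# drives costs, so it can confound a simple pre/post comparison.
n = 24                                    # 12 months pre, 12 months post
post = (np.arange(n) >= 12).astype(float)
severity = 1.0 + 0.02 * np.arange(n) + rng.normal(0, 0.05, n)
true_effect = -50.0                       # simulated cost reduction per admission
costs = 500 + 200 * severity + true_effect * post + rng.normal(0, 10, n)

# Unadjusted pre/post difference vs. a regression that controls for severity.
unadjusted = costs[post == 1].mean() - costs[post == 0].mean()
X = np.column_stack([np.ones(n), post, severity])
coef = np.linalg.lstsq(X, costs, rcond=None)[0]

print(f"Unadjusted pre/post difference: {unadjusted:+.1f}")
print(f"Severity-adjusted estimate:     {coef[1]:+.1f} (simulated effect: {true_effect})")
```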

Another important threat to establishing causality is regression to the mean. 12 , 13 , 14 This widespread statistical phenomenon can result in wrongly concluding that an effect is due to the intervention when in reality it is due to chance. The phenomenon was first described in 1886 by Francis Galton who measured the adult height of children and their parents. He noted that when the average height of the parents was greater than the mean of the population, the children tended to be shorter than their parents, and conversely, when the average height of the parents was shorter than the population mean, the children tended to be taller than their parents.

In medical informatics, what often triggers the development and implementation of an intervention is a rise in the rate above the mean or norm. For example, increasing pharmacy costs and adverse events may prompt hospital informatics personnel to design and implement pharmacy order-entry systems. If this rise in costs or adverse events is really just an extreme observation that is still within the normal range of the hospital's pharmaceutical costs (i.e., the mean pharmaceutical cost for the hospital has not shifted), then the statistical principle of regression to the mean predicts that these elevated rates will tend to decline even without intervention. However, often informatics personnel and hospital administrators cannot wait passively for this decline to occur. Therefore, hospital personnel often implement one or more interventions, and if a decline in the rate occurs, they may mistakenly conclude that the decline is causally related to the intervention. In fact, an alternative explanation for the finding could be regression to the mean.

What Are the Different Quasi-experimental Study Designs?

In the social sciences literature, quasi-experimental studies are divided into four study design groups 4 , 6 :

  • Quasi-experimental designs without control groups
  • Quasi-experimental designs that use control groups but no pretest
  • Quasi-experimental designs that use control groups and pretests
  • Interrupted time-series designs

There is a relative hierarchy within these categories of study designs, with category D studies being sounder than categories C, B, or A in terms of establishing causality. Thus, if feasible from a design and implementation point of view, investigators should aim to design studies that fall into the higher-rated categories. Shadish et al. 4 discuss 17 possible designs, with seven designs falling into category A, three in category B, six in category C, and one major design in category D. In our review, we determined that most medical informatics quasi-experiments could be characterized by 11 of the 17 designs, with six study designs in category A, one in category B, three in category C, and one in category D, because the other study designs were not used or feasible in the medical informatics literature. Thus, for simplicity, we have summarized the 11 study designs most relevant to medical informatics research in the table below.

Relative Hierarchy of Quasi-experimental Designs

Quasi-experimental study designs (with design notation):
A. Quasi-experimental designs without control groups
    1. The one-group posttest-only design: X O1
    2. The one-group pretest-posttest design: O1 X O2
    3. The one-group pretest-posttest design using a double pretest: O1 O2 X O3
    4. The one-group pretest-posttest design using a nonequivalent dependent variable: (O1a, O1b) X (O2a, O2b)
    5. The removed-treatment design: O1 X O2 O3 (remove X) O4
    6. The repeated-treatment design: O1 X O2 (remove X) O3 X O4
B. Quasi-experimental designs that use a control group but no pretest
    1. Posttest-only design with nonequivalent groups: Intervention group: X O1; Control group: O2
C. Quasi-experimental designs that use control groups and pretests
    1. Untreated control group with dependent pretest and posttest samples: Intervention group: O1a X O2a; Control group: O1b O2b
    2. Untreated control group design with dependent pretest and posttest samples using a double pretest: Intervention group: O1a O2a X O3a; Control group: O1b O2b O3b
    3. Untreated control group design with dependent pretest and posttest samples using switching replications: Intervention group: O1a X O2a O3a; Control group: O1b O2b X O3b
D. Interrupted time-series design
    1. Multiple pretest and posttest observations spaced at equal intervals of time: O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

O = Observational Measurement; X = Intervention Under Study. Time moves from left to right.

The nomenclature and relative hierarchy were used in the systematic review of four years of JAMIA and the IJMI. Similar to the relative hierarchy that exists in the evidence-based literature that assigns a hierarchy to randomized controlled trials, cohort studies, case-control studies, and case series, the hierarchy in the table above is not absolute in that in some cases, it may be infeasible to perform a higher-level study. For example, there may be instances where an A6 design established stronger causality than a B1 design. 15 , 16 , 17

Quasi-experimental Designs without Control Groups

The one-group posttest-only design (A1): X O1

Here, X is the intervention and O is the outcome variable (this notation is continued throughout the article). In this study design, an intervention (X) is implemented and a posttest observation (O1) is taken. For example, X could be the introduction of a pharmacy order-entry intervention and O1 could be the pharmacy costs following the intervention. This design is the weakest of the quasi-experimental designs that are discussed in this article. Without any pretest observations or a control group, there are multiple threats to internal validity. Unfortunately, this study design is often used in medical informatics when new software is introduced since it may be difficult to have pretest measurements due to time, technical, or cost constraints.

The one-group pretest-posttest design (A2): O1 X O2

This is a commonly used study design. A single pretest measurement is taken (O1), an intervention (X) is implemented, and a posttest measurement is taken (O2). In this instance, period O1 frequently serves as the “control” period. For example, O1 could be pharmacy costs prior to the intervention, X could be the introduction of a pharmacy order-entry system, and O2 could be the pharmacy costs following the intervention. Including a pretest provides some information about what the pharmacy costs would have been had the intervention not occurred.

The one-group pretest-posttest design using a double pretest (A3): O1 O2 X O3

The advantage of this study design over A2 is that adding a second pretest prior to the intervention helps provide evidence that can be used to refute the phenomenon of regression to the mean and confounding as alternative explanations for any observed association between the intervention and the posttest outcome. For example, in a study where a pharmacy order-entry system led to lower pharmacy costs (O3 < O2 and O1), if one had two preintervention measurements of pharmacy costs (O1 and O2) and they were both elevated, this would suggest that there was a decreased likelihood that O3 is lower due to confounding and regression to the mean. Similarly, extending this study design by increasing the number of measurements postintervention could also help to provide evidence against confounding and regression to the mean as alternate explanations for observed associations.

The one-group pretest-posttest design using a nonequivalent dependent variable (A4): (O1a, O1b) X (O2a, O2b)

This design involves the inclusion of a nonequivalent dependent variable ( b ) in addition to the primary dependent variable ( a ). Variables a and b should assess similar constructs; that is, the two measures should be affected by similar factors and confounding variables except for the effect of the intervention. Variable a is expected to change because of the intervention X, whereas variable b is not. Taking our example, variable a could be pharmacy costs and variable b could be the length of stay of patients. If our informatics intervention is aimed at decreasing pharmacy costs, we would expect to observe a decrease in pharmacy costs but not in the average length of stay of patients. However, a number of important confounding variables, such as severity of illness and knowledge of software users, might affect both outcome measures. Thus, if the average length of stay did not change following the intervention but pharmacy costs did, then the data are more convincing than if just pharmacy costs were measured.

The Removed-Treatment Design

Design A5: O1 X O2 O3 (remove X) O4

This design adds a third posttest measurement (O3) to the one-group pretest-posttest design and then removes the intervention before a final measure (O4) is made. The advantage of this design is that it allows one to test hypotheses about the outcome in the presence of the intervention and in the absence of the intervention. Thus, if one predicts a decrease in the outcome between O1 and O2 (after implementation of the intervention), then one would predict an increase in the outcome between O3 and O4 (after removal of the intervention). One caveat is that if the intervention is thought to have persistent effects, then O4 needs to be measured after these effects are likely to have disappeared. For example, a study would be more convincing if it demonstrated that pharmacy costs decreased after pharmacy order-entry system introduction (O2 and O3 less than O1) and that when the order-entry system was removed or disabled, the costs increased (O4 greater than O2 and O3 and closer to O1). In addition, there are often ethical issues in this design in terms of removing an intervention that may be providing benefit.

The Repeated-Treatment Design

Design A6: O1 X O2 (remove X) O3 X O4

The advantage of this design is that it demonstrates reproducibility of the association between the intervention and the outcome. For example, the association is more likely to be causal if one demonstrates that a pharmacy order-entry system results in decreased pharmacy costs when it is first introduced and again when it is reintroduced following an interruption of the intervention. As for design A5, the assumption must be made that the effect of the intervention is transient, which is most often applicable to medical informatics interventions. Because in this design, subjects may serve as their own controls, this may yield greater statistical efficiency with fewer numbers of subjects.

Quasi-experimental Designs That Use a Control Group but No Pretest

Design B1:
Intervention group:   X   O1
Comparison group:          O2

An intervention X is implemented for one group and compared to a second group. The use of a comparison group helps protect against certain threats to validity and allows the researcher to statistically adjust for confounding variables. Because the two groups may not be equivalent (assignment to the groups is not by randomization), confounding may exist. For example, suppose that a pharmacy order-entry intervention was instituted in the medical intensive care unit (MICU) and not the surgical intensive care unit (SICU). O1 would be pharmacy costs in the MICU after the intervention and O2 would be pharmacy costs in the SICU during the same period. The absence of a pretest makes it difficult to know whether a change has occurred in the MICU. The absence of pretest measurements comparing the SICU to the MICU also makes it difficult to know whether differences between O1 and O2 are due to the intervention or to other differences between the two units (confounding variables).
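A small simulated example of the posttest-only comparison; the group means, spreads, and sample sizes are invented for illustration, and the two-sample t test is one plausible way to compare the groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
micu = rng.normal(360, 25, size=40)  # postintervention daily costs, MICU (system)
sicu = rng.normal(400, 25, size=40)  # same-period daily costs, SICU (no system)

t, p = stats.ttest_ind(micu, sicu)
print(f"MICU vs. SICU posttest difference: t = {t:.2f}, p = {p:.4f}")
# With no pretest, a difference here could reflect the intervention or
# preexisting differences between the two units.
```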

Quasi-experimental Designs That Use Control Groups and Pretests

The reader should note that with all the studies in this category, the intervention is not randomized; the control groups chosen are comparison groups. Obtaining pretest measurements on both the intervention and control groups allows one to assess the initial comparability of the groups. The assumption is that the more similar the intervention and control groups are at the pretest, the smaller the likelihood that important confounding variables differ between the two groups.

Design C1:
Intervention group:   O1a   X   O2a
Comparison group:     O1b         O2b

The use of both a pretest and a comparison group makes it easier to avoid certain threats to validity. However, because the two groups are nonequivalent (assignment to the groups is not by randomization), selection bias may exist. Selection bias exists when selection results in differences in unit characteristics between conditions that may be related to outcome differences. For example, suppose that a pharmacy order-entry intervention was instituted in the MICU and not the SICU. If preintervention pharmacy costs in the MICU (O1a) and SICU (O1b) are similar, it suggests that it is less likely that there are differences in the important confounding variables between the two units. If MICU postintervention costs (O2a) are less than preintervention MICU costs (O1a), but SICU costs (O1b) and (O2b) are similar, this suggests that the observed outcome may be causally related to the intervention.
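The comparison can be summarized as a simple difference-in-differences calculation; the four means below are hypothetical.

```python
# Hypothetical mean daily pharmacy costs before and after the intervention.
o1a, o2a = 405.0, 352.0   # MICU (receives the order-entry system)
o1b, o2b = 398.0, 401.0   # SICU (comparison group)

change_micu = o2a - o1a
change_sicu = o2b - o1b
did = change_micu - change_sicu   # difference-in-differences summary
print(f"MICU change: {change_micu:+.1f}")
print(f"SICU change: {change_sicu:+.1f}")
print(f"Difference-in-differences: {did:+.1f}")
```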

Design C2:
Intervention group:   O1a   O2a   X   O3a
Comparison group:     O1b   O2b         O3b

In this design, the pretests are administered at two different times. The main advantage of this design is that it controls for potentially different time-varying confounding effects in the intervention group and the comparison group. In our example, measurements O1 and O2 would allow for the assessment of preintervention, time-dependent changes in pharmacy costs (e.g., due to differences in the experience of residents) in both the intervention and control groups, and for determining whether those changes were similar or different across the groups.
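A sketch of the pre-trend check with hypothetical means; the numbers are illustrative only.

```python
# Hypothetical mean daily pharmacy costs at two pretests and one posttest.
o1a, o2a, o3a = 410.0, 406.0, 355.0   # MICU (intervention group)
o1b, o2b, o3b = 402.0, 399.0, 400.0   # SICU (comparison group)

pre_trend_micu = o2a - o1a
pre_trend_sicu = o2b - o1b
print(f"Preintervention trend, MICU: {pre_trend_micu:+.1f}")
print(f"Preintervention trend, SICU: {pre_trend_sicu:+.1f}")
# Similar pre-trends make it less likely that a post-period difference is
# driven by time-varying confounding specific to one unit.
print(f"Post-period difference-in-differences: "
      f"{(o3a - o2a) - (o3b - o2b):+.1f}")
```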

Design C3:
Group 1:   O1a   X   O2a         O3a
Group 2:   O1b         O2b   X   O3b

With this study design, the researcher administers the intervention at a later time to a group that initially served as a nonintervention control. The advantage of this design over design C2 is that it demonstrates reproducibility of the intervention effect in two different settings. The design is not limited to two groups; in fact, the results have greater validity if the intervention effect is replicated in different groups at multiple times. In the example of a pharmacy order-entry system, one could intervene in the MICU first and then, at a later time, intervene in the SICU. This design is often very applicable to medical informatics, where new technology and new software are often introduced or made available gradually.
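A sketch of the switching-replication logic with hypothetical means: the effect is estimated once when the first unit adopts the system and again when the second unit does.

```python
# Hypothetical mean daily pharmacy costs; the MICU adopts the system before
# the second measurement, the SICU before the third.
micu = {"o1": 408.0, "o2": 351.0, "o3": 349.0}
sicu = {"o1": 400.0, "o2": 402.0, "o3": 347.0}

effect_first_rollout = micu["o2"] - micu["o1"]    # MICU adopts the system
effect_second_rollout = sicu["o3"] - sicu["o2"]   # SICU adopts it later
print(f"Effect when the MICU adopts the system: {effect_first_rollout:+.1f}")
print(f"Effect when the SICU adopts the system: {effect_second_rollout:+.1f}")
# Similar effects in both rollouts strengthen the causal interpretation.
```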

Interrupted Time-Series Designs

Design D (interrupted time series):
O1   O2   O3   O4   O5   X   O6   O7   O8   O9   O10

An interrupted time-series design is one in which a string of consecutive observations equally spaced in time is interrupted by the imposition of a treatment or intervention. The advantage of this design is that, with multiple measurements both pre- and postintervention, it is easier to address and control for confounding and regression to the mean. In addition, the analysis is statistically more robust: one can detect changes in the slope or the intercept as a result of the intervention, in addition to a change in the mean values. [18] A change in intercept could represent an immediate effect, while a change in slope could represent a gradual effect of the intervention on the outcome. In the example of a pharmacy order-entry system, O1 through O5 could represent monthly pharmacy costs preintervention and O6 through O10 monthly pharmacy costs after the introduction of the pharmacy order-entry system. Interrupted time-series designs can be further strengthened by incorporating many of the design features mentioned in the other categories (such as removal of the treatment, inclusion of a nonequivalent dependent variable, or the addition of a control group).
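One common way to analyze such data is segmented regression, which estimates a change in level and a change in slope at the interruption. The sketch below simulates ten monthly observations and fits the model with statsmodels; the simulated trend, level change, and slope change are arbitrary illustrative values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
month = np.arange(1, 11)                       # O1 ... O10
post = (month >= 6).astype(int)                # 1 once the system is live
time_since = np.where(post == 1, month - 5, 0) # months since go-live

# Simulated costs: mild baseline trend, a level drop of 30 at go-live, a
# further decline of 4 per month afterwards, plus noise.
costs = 400 - 1.0 * month - 30 * post - 4.0 * time_since + rng.normal(0, 3, 10)
df = pd.DataFrame({"costs": costs, "month": month,
                   "post": post, "time_since": time_since})

fit = smf.ols("costs ~ month + post + time_since", data=df).fit()
print(fit.params)  # 'post' = immediate level change, 'time_since' = slope change
```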

Systematic Review Results

The results of the systematic review are shown in the Table below. In the four-year period of JAMIA publications that the authors reviewed, 25 quasi-experimental studies among 22 articles were published. Of these 25, 15 studies were of category A, five studies were of category B, two studies were of category C, and no studies were of category D. Although there were no studies of category D (interrupted time-series analyses), three of the studies classified as category A had data collected that could have been analyzed as an interrupted time-series analysis. Nine of the 25 studies (36%) mentioned at least one of the potential limitations of the quasi-experimental study design. In the four-year period of IJMI publications reviewed by the authors, nine quasi-experimental studies among eight manuscripts were published. Of these nine, five studies were of category A, one of category B, one of category C, and two of category D. Two of the nine studies (22%) mentioned at least one of the potential limitations of the quasi-experimental study design.

Systematic Review of Four Years of Quasi-experimental Designs in JAMIA and IJMI

Study | Journal | Informatics Topic Category | Quasi-experimental Design | Limitation of Quasi-design Mentioned in Article
Staggers and Kobus | JAMIA | 1 | Counterbalanced study design | Yes
Schriger et al. | JAMIA | 1 | A5 | Yes
Patel et al. | JAMIA | 2 | A5 (study 1, phase 1) | No
Patel et al. | JAMIA | 2 | A2 (study 1, phase 2) | No
Borowitz | JAMIA | 1 | A2 | No
Patterson and Harasym | JAMIA | 6 | C1 | Yes
Rocha et al. | JAMIA | 5 | A2 | Yes
Lovis et al. | JAMIA | 1 | Counterbalanced study design | No
Hersh et al. | JAMIA | 6 | B1 | No
Makoul et al. | JAMIA | 2 | B1 | Yes
Ruland | JAMIA | 3 | B1 | No
DeLusignan et al. | JAMIA | 1 | A1 | No
Mekhjian et al. | JAMIA | 1 | A2 (study design 1) | Yes
Mekhjian et al. | JAMIA | 1 | B1 (study design 2) | Yes
Ammenwerth et al. | JAMIA | 1 | A2 | No
Oniki et al. | JAMIA | 5 | C1 | Yes
Liederman and Morefield | JAMIA | 1 | A1 (study 1) | No
Liederman and Morefield | JAMIA | 1 | A2 (study 2) | No
Rotich et al. | JAMIA | 2 | A2 | No
Payne et al. | JAMIA | 1 | A1 | No
Hoch et al. | JAMIA | 3 | A2 | No
Laerum et al. | JAMIA | 1 | B1 | Yes
Devine et al. | JAMIA | 1 | Counterbalanced study design |
Dunbar et al. | JAMIA | 6 | A1 |
Lenert et al. | JAMIA | 6 | A2 |
Koide et al. | IJMI | 5 | D4 | No
Gonzalez-Hendrich et al. | IJMI | 2 | A1 | No
Anantharaman and Swee Han | IJMI | 3 | B1 | No
Chae et al. | IJMI | 6 | A2 | No
Lin et al. | IJMI | 3 | A1 | No
Mikulich et al. | IJMI | 1 | A2 | Yes
Hwang et al. | IJMI | 1 | A2 | Yes
Park et al. | IJMI | 1 | C2 | No
Park et al. | IJMI | 1 | D4 | No

JAMIA = Journal of the American Medical Informatics Association; IJMI = International Journal of Medical Informatics.

In addition, three studies from JAMIA were based on a counterbalanced design. A counterbalanced design is a higher-order design than the other designs in category A and is sometimes referred to as a Latin-square arrangement. In this design, all subjects receive all of the different interventions, but the order of intervention assignment is not random. [19] This design can only be used when the intervention is compared against some existing standard, for example, when a new PDA-based order-entry system is to be compared with a computer terminal-based order-entry system. In this design, all subjects receive both the new PDA-based order-entry system and the old computer terminal-based order-entry system. The counterbalanced design is a within-participants design in which the order of the intervention is varied (e.g., one group is given software A followed by software B, and another group is given software B followed by software A). It is typically used when the available sample size is small, preventing the use of randomization. This design also allows investigators to study the potential effect of the ordering of the informatics intervention.
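A sketch of how counterbalanced data might be analyzed so that the system effect can be separated from the practice (period) effect; the task times, effect sizes, and model are simulated and purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for order in ("PDA first", "terminal first"):
    for _ in range(20):                      # 20 subjects per order group
        base = rng.normal(120.0, 15.0)       # subject's baseline task time (s)
        systems = ("PDA", "terminal") if order == "PDA first" else ("terminal", "PDA")
        for period, system in enumerate(systems, start=1):
            t = base + rng.normal(0, 5)
            t += -10.0 if system == "PDA" else 0.0   # hypothetical: PDA 10 s faster
            t += -5.0 if period == 2 else 0.0        # hypothetical practice effect
            rows.append({"system": system, "period": period, "time": t})

df = pd.DataFrame(rows)
# Counterbalancing keeps system and period (practice) from being confounded,
# so both effects are estimable in the same model.
print(smf.ols("time ~ C(system) + C(period)", data=df).fit().params)
```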

Although quasi-experimental study designs are ubiquitous in the medical informatics literature, as evidenced by 34 studies in the past four years of the two informatics journals, little has been written about the benefits and limitations of the quasi-experimental approach. As we have outlined in this paper, a relative hierarchy and nomenclature of quasi-experimental study designs exist, with some designs being more likely than others to permit causal interpretations of observed associations. Strengths and limitations of a particular study design should be discussed when presenting data collected in the setting of a quasi-experimental study. Future medical informatics investigators should choose the strongest design that is feasible given the particular circumstances.

Supplementary Material

Dr. Harris was supported by NIH grants K23 AI01752-01A1 and R01 AI60859-01A1. Dr. Perencevich was supported by a VA Health Services Research and Development Service (HSR&D) Research Career Development Award (RCD-02026-1). Dr. Finkelstein was supported by NIH grant RO1 HL71690.

Quasi-experimental designs for causal inference: an overview

  • Published: 26 June 2024


  • Heining Cham   ORCID: orcid.org/0000-0002-2933-056X 1 ,
  • Hyunjung Lee 1 &
  • Igor Migunov 1  

The randomized control trial (RCT) is the primary experimental design in education research due to its strong internal validity for causal inference. However, in situations where RCTs are not feasible or ethical, quasi-experiments are alternatives to establish causal inference. This paper serves as an introduction to several quasi-experimental designs: regression discontinuity design, difference-in-differences analysis, interrupted time series design, instrumental variable analysis, and propensity score analysis with examples in education research.

The search engine by EBSCO does not offer searches within the publications’ keywords. We replicated the same search in PsycINFO, and its search engine allows searches within the publications’ keywords. The results from PsycINFO were, in general, consistent with the results from ERIC and are available upon request.

Latif and Miles ( 2020 ) had another group of students who were given in-class quizzes after midterm #1. For simplicity, we did not include this group in this paper.

Angrist, J. D., Imbens, G. W., & Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91 (434), 444–455. https://doi.org/10.1080/01621459.1996.10476902

Arpino, B., & Mealli, F. (2011). The specification of the propensity score in multilevel observational studies. Computational Statistics & Data Analysis, 55 (4), 1770–1780. https://doi.org/10.1016/j.csda.2010.11.008

Austin, P. C. (2009). Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Statistics in Medicine, 28 (25), 3083–3107. https://doi.org/10.1002/sim.3697

Austin, P. C. (2014). A comparison of 12 algorithms for matching on the propensity score. Statistics in Medicine, 33 (6), 1057–1069. https://doi.org/10.1002/sim.6004

Baiocchi, M., Cheng, J., & Small, D. S. (2014). Tutorial in biostatistics: Instrumental variable methods for causal inference. Statistics in Medicine, 33 (13), 2297–2340. https://doi.org/10.1002/sim.6128

Bloom, H. S. (2012). Modern regression discontinuity analysis. Journal of Research on Educational Effectiveness, 5 (1), 43–82. https://doi.org/10.1080/19345747.2011.578707

Cannas, M., & Arpino, B. (2019). A comparison of machine learning algorithms and covariate balance measures for propensity score matching and weighting. Biometrical Journal, 61 (4), 1049–1072. https://doi.org/10.1002/bimj.201800132

Cham, H. (2022). Quasi-experimental designs. In G. J. G. Asmundson (Ed.), Comprehensive clinical psychology (2nd ed., pp. 29–48). Elsevier.

Cham, H., & West, S. G. (2016). Propensity score analysis with missing data. Psychological Methods, 21 (3), 427–445. https://doi.org/10.1037/met0000076

Collier, Z. K., Zhang, H., & Liu, L. (2022). Explained: Artificial intelligence for propensity score estimation in multilevel educational settings. Practical Assessment, Research & Evaluation, 27 , 3.

Cook, T. D. (2008). “Waiting for life to arrive”: A history of the regression-discontinuity design in psychology, statistics and economics. Journal of Econometrics, 142 (2), 636–654. https://doi.org/10.1016/j.jeconom.2007.05.002

Cunningham, S. (2021). Causal inference: The mixtape. Yale University Press . https://doi.org/10.2307/j.ctv1c29t27

Diamond, A., & Sekhon, J. S. (2013). Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Review of Economics and Statistics, 95 (3), 932–945. https://doi.org/10.1162/REST_a_00318

Enders, C. K. (2022). Applied missing data analysis (2nd ed.). Guilford Press.

Feely, M., Seay, K. D., Lanier, P., Auslander, W., & Kohl, P. L. (2018). Measuring fidelity in research studies: A field guide to developing a comprehensive fidelity measurement system. Child and Adolescent Social Work Journal, 35 (2), 139–152. https://doi.org/10.1007/s10560-017-0512-6

Grimm, K. J., & McArdle, J. J. (2023). Latent curve modeling of longitudinal growth data. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (2nd ed., pp. 556–575). Guilford Press.

Hainmueller, J. (2012). Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies. Political Analysis, 20 (1), 25–46. https://doi.org/10.1093/pan/mpr025

Ho, D., Imai, K., King, G., & Stuart, E. (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis, 15 (3), 199–236. https://doi.org/10.1093/pan/mpl013

Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81 (396), 945–960. https://doi.org/10.2307/2289064

Huang, H., Cagle, P. J., Mazumdar, M., & Poeran, J. (2019). Statistics in brief: Instrumental variable analysis: An underutilized method in orthopaedic research. Clinical Orthopaedics and Related Research, 477 (7), 1750–1755. https://doi.org/10.1097/CORR.0000000000000729

Hughes, J. N., West, S. G., Kim, H., & Bauer, S. S. (2018). Effect of early grade retention on school completion: A prospective study. Journal of Educational Psychology, 110 (7), 974–991. https://doi.org/10.1037/edu0000243

Imai, K., & Ratkovic, M. (2014). Covariate balancing propensity score. Journal of the Royal Statistical Society: Series B (statistical Methodology), 76 (1), 243–263.

Imbens, G. W., & Lemieux, T. (2008). Regression discontinuity designs: A guide to practice. Journal of Econometrics, 142 (2), 615–635. https://doi.org/10.1016/j.jeconom.2007.05.001

Jacob, R., Zhu, P., Somers, M. A., & Bloom, H. (2012). A practical guide to regression discontinuity . MDRC.

Jennings, P. A., Brown, J. L., Frank, J. L., Doyle, S., Oh, Y., Davis, R., Rasheed, D., DeWeese, A., DeMauro, A. A., Cham, H., & Greenberg, M. T. (2017). Impacts of the CARE for teachers program on teachers’ social and emotional competence and classroom interactions. Journal of Educational Psychology, 109 (7), 1010–1028. https://doi.org/10.1037/edu0000187

Kang, J., Chan, W., Kim, M. O., & Steiner, P. M. (2016). Practice of causal inference with the propensity of being zero or one: Assessing the effect of arbitrary cutoffs of propensity scores. Communications for Statistical Applications and Methods, 23 (1), 1–20. https://doi.org/10.5351/CSAM.2016.23.1.001

Kang, J. D., & Schafer, J. L. (2007). Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical Science, 22 (4), 523–539. https://doi.org/10.1214/07-STS227

Kim, Y., & Steiner, P. (2016). Quasi-experimental designs for causal inference. Educational Psychologist, 51 (3–4), 395–405. https://doi.org/10.1080/00461520.2016.1207177

Kwok, O. M., West, S. G., & Green, S. B. (2007). The impact of misspecifying the within-subject covariance structure in multiwave longitudinal multilevel models: A Monte Carlo study. Multivariate Behavioral Research, 42 (3), 557–592. https://doi.org/10.1080/00273170701540537

Labrecque, J., & Swanson, S. A. (2018). Understanding the assumptions underlying instrumental variable analyses: A brief review of falsification strategies and related tools. Current Epidemiology Reports, 5 (3), 214–220. https://doi.org/10.1007/s40471-018-0152-1

Latif, E., & Miles, S. (2020). The impact of assignments and quizzes on exam grades: A difference-in-difference approach. Journal of Statistics Education, 28 (3), 289–294. https://doi.org/10.1080/10691898.2020.1807429

Lee, D. S., & Lemieux, T. (2010). Regression discontinuity designs in economics. Journal of Economic Literature, 48 (2), 281–355. https://doi.org/10.1257/jel.48.2.281

Lee, B. K., Lessler, J., & Stuart, E. A. (2010). Improving propensity score weighting using machine learning. Statistics in Medicine, 29 (3), 337–346. https://doi.org/10.1002/sim.3782

Leite, W. L., Jimenez, F., Kaya, Y., Stapleton, L. M., MacInnes, J. W., & Sandbach, R. (2015). An evaluation of weighting methods based on propensity scores to reduce selection bias in multilevel observational studies. Multivariate Behavioral Research, 50 (3), 265–284. https://doi.org/10.1080/00273171.2014.991018

Little, R. J., & Rubin, D. B. (2019). Statistical analysis with missing data (3rd ed.). John Wiley & Sons.

Lousdal, M. L. (2018). An introduction to instrumental variable assumptions, validation and estimation. Emerging Themes in Epidemiology, 22 (15), 1–7. https://doi.org/10.1186/s12982-018-0069-7

Maynard, C., & Young, C. (2022). The results of using a traits-based rubric on the writing performance of third grade students. Texas Journal of Literacy Education, 9 (2), 102–128.

McCaffrey, D. F., Ridgeway, G., & Morral, A. R. (2004). Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychological Methods, 9 (4), 403–425. https://doi.org/10.1037/1082-989X.9.4.403

Neyman, J., Dabrowska, D. M., & Speed, T. P. (1990). On the application of probability theory to agricultural experiments: Essay on principles. Statistical Science, 5 (4), 465–472.

Nguyen, T. T., Tchetgen Tchetgen, E. J., Kawachi, I., Gilman, S. E., Walter, S., Liu, S. Y., Manly, J. J., & Glymour, M. M. (2016). Instrumental variable approaches to identifying the causal effect of educational attainment on dementia risk. Annals of Epidemiology, 26 (1), 71–76. https://doi.org/10.1016/j.annepidem.2015.10.006

Pearl, J. (2009). Causality: Models, reasoning, and inference (2nd ed.). Cambridge University Press.

Reichardt, C. S. (2019). Quasi-experimentation: A guide to design and analysis . Guilford Press.

Rubin, D. B. (2006). Matched sampling for causal effects . Cambridge University Press.

Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70 (1), 41–55. https://doi.org/10.1093/biomet/70.1.41

Roth, J., Sant’Anna, P. H., Bilinski, A., & Poe, J. (2023). What’s trending in difference-in-differences? A synthesis of the recent econometrics literature. Journal of Econometrics, 235 (2), 2218–2244. https://doi.org/10.1016/j.jeconom.2023.03.008

Sagarin, B. J., West, S. G., Ratnikov, A., Homan, W. K., Ritchie, T. D., & Hansen, E. J. (2014). Treatment noncompliance in randomized experiments: Statistical approaches and design issues. Psychological Methods, 19 (3), 317–333. https://doi.org/10.1037/met0000013

Schafer, J. L., & Kang, J. (2008). Average causal effects from nonrandomized studies: A practical guide and simulated example. Psychological Methods, 13 (4), 279–313. https://doi.org/10.1037/a0014268

Shadish, W., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference . Houghton Mifflin.

Steiner, P. M., Cook, T. D., Shadish, W. R., & Clark, M. H. (2010). The importance of covariate selection in controlling for selection bias in observational studies. Psychological Methods, 15 (3), 250–267. https://doi.org/10.1037/a0018719

Steiner, P. M., Shadish, W. R., & Sullivan, K. J. (2023). Frameworks for causal inference in psychological science. In H. Cooper, M. N. Coutanche, L. M. McMullen, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), APA handbook of research methods in psychology: Foundations, planning, measures, and psychometrics (2nd ed., pp. 23–56). American Psychological Association.

Stuart, E. A., Huskamp, H. A., Duckworth, K., Simmons, J., Song, Z., Chernew, M. E., & Barry, C. L. (2014). Using propensity scores in difference-in-differences models to estimate the effects of a policy change. Health Services and Outcomes Research Methodology, 14 , 166–182. https://doi.org/10.1007/s10742-014-0123-z

Suk, Y., Steiner, P. M., Kim, J. S., & Kang, H. (2022). Regression discontinuity designs with an ordinal running variable: Evaluating the effects of extended time accommodations for English-language learners. Journal of Educational and Behavioral Statistics, 47 (4), 459–484. https://doi.org/10.3102/10769986221090275

Tarr, A., & Imai, K. (2021). Estimating average treatment effects with support vector machines. arXiv preprint. https://arxiv.org/abs/2102.11926

Thoemmes, F. J., & West, S. G. (2011). The use of propensity scores for nonrandomized designs with clustered data. Multivariate Behavioral Research, 46 (3), 514–543. https://doi.org/10.1080/00273171.2011.569395

U.S. Department of Education (2022). What works clearinghouse: Procedures and standards handbook (Version 5.0). https://ies.ed.gov/ncee/wwc/Docs/referenceresources/Final_WWC-HandbookVer5_0-0-508.pdf

West, S. G., Cham, H., & Liu, Y. (2014). Causal inference and generalization in field settings: Experimental and quasi-experimental designs. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (2nd ed., pp. 49–80). Cambridge University Press.

Wong, V. C., Cook, T. D., Barnett, W. S., & Jung, K. (2008). An effectiveness-based evaluation of five state pre-kindergarten programs. Journal of Policy Analysis and Management, 27 (1), 122–154. https://doi.org/10.1002/pam.20310

Wong, V. C., Wing, C., Steiner, P. M., Wong, M., & Cook, T. D. (2013). Research designs for program evaluation. In J. A. Schinka, W. F. Velicer, & I. B. Weiner (Eds.), Handbook of psychology: Research methods in psychology (2nd ed., pp. 316–341). John Wiley and Sons, Inc.

Acknowledgements

This research was supported by a R01 grant from the National Institute on Aging (NIA) (R01AG065110), R01 grants from the National Institute on Minority Health and Health Disparities (R01MD015763 and R01MD015715), and a R21 grant from the National Institute of Mental Health (R21MH124902). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Aging, National Institute on Minority Health and Health Disparities, or the National Institute of Mental Health. We thank Dr. Peter M. Steiner, Dr. Yongnam Kim, and the anonymous reviewers for their valuable comments and suggestions on the earlier draft of this paper.

Author information

Authors and Affiliations

Department of Psychology, Fordham University, 441 E. Fordham Road, Bronx, NY, 10461, USA

Heining Cham, Hyunjung Lee & Igor Migunov

About this article

Cham, H., Lee, H. & Migunov, I. Quasi-experimental designs for causal inference: an overview. Asia Pacific Educ. Rev. (2024). https://doi.org/10.1007/s12564-024-09981-2

Keywords: Quasi-experiment · Regression discontinuity · Difference-in-differences · Interrupted time series · Instrumental variable · Propensity score

Applications for New Awards; Stronger Connections Technical Assistance and Capacity Building Grant Program

A Notice by the Education Department on 06/26/2024

Agency: Office of Elementary and Secondary Education, Department of Education.

The Department of Education (Department) is issuing a notice inviting applications for fiscal year (FY) 2024 for new awards for the Stronger Connections Technical Assistance and Capacity Building (SCTAC) grant program.

Applications Available: June 26, 2024. Deadline for Transmittal of Applications: August 26, 2024. Deadline for Intergovernmental Review: October 24, 2024.

For the addresses for obtaining and submitting an application, please refer to our Common Instructions for Applicants to Department of Education Discretionary Grant Programs, published in the Federal Register on December 7, 2022 ( 87 FR 75045 ) and available at www.federalregister.gov/​documents/​2022/​12/​07/​2022-26554/​common-instructions-for-applicants-to-department-of-education-discretionary-grant-programs .

Hamed Negron-Perez, U.S. Department of Education, 400 Maryland Avenue SW, Room 4B111, Washington, DC 20202-6132. Telephone: (202) 219-1674. Email: [email protected] .

If you are deaf, hard of hearing, or have a speech disability and wish to access telecommunications relay services, please dial 7-1-1.

Purpose of Program: The purpose of the SCTAC grant program is to advance the mental health and well-being of early learners (as defined in this notice), school-age children and youth, and educators and other school staff, by making grants to State educational agencies (SEAs) to provide technical assistance and capacity building to high-need local educational agencies (LEAs) (as defined in this notice).

Assistance Listing Number (ALN): 84.424H.

OMB Control Number: 1894-0006.

Background: The Bipartisan Safer Communities Act (BSCA) allocated $1 billion in funding to States through the Stronger Connections Grant (SCG) program; SEAs, in turn, subgranted these funds competitively to high-need LEAs to design and enhance initiatives to promote safer, more inclusive, and positive school environments for all students, educators, and school staff including through personnel and programs to support student mental health.

The SCTAC grant program is being established with BSCA funds from the two percent reservation for technical assistance and capacity building under section 4103(a)(3) of the Elementary and Secondary Education Act of 1965, as amended (ESEA). This funding is available to SEAs to provide technical assistance and capacity building services to high-need LEAs for evidence-based (as defined in 34 CFR 77.1 ) and culturally and linguistically inclusive programs and activities related to mental health and well-being for early learners, school-age children and youth, and educators and other school staff. We encourage SEAs receiving SCTAC funds to prioritize high-need LEAs that did not receive a Stronger Connections subgrant from the SEA for technical assistance and capacity building services under this program.

“Raise the Bar: Lead the World” is the Department's call to action to transform education and unite around what works—based on decades of experience and research—to advance educational equity and excellence. As part of our Raise the Bar efforts to boldly improve learning conditions, the Department continues to invest in every student's mental health and well-being.

Recent studies show that children who experience unaddressed mental health issues are more likely to face challenges in school, such as being more likely to repeat a grade and experience chronic absenteeism, and less likely to graduate high school. [ 1 ] Amid the pandemic, data from the Centers for Disease Control and Prevention (CDC) showed that 1 in 3 high school students experienced poor mental health, 1 in 6 adolescents experienced a major depressive episode, and 20 percent of teens seriously considered suicide. [ 2 ] The suicide rate among Black youth is also increasing faster than for any other racial or ethnic group. [ 3 ] Rates of suicidal ideation are alarmingly high for LGBTQ students as well, with 45 percent of LGBTQ youth surveyed indicating that they seriously considered attempting suicide in the past year. [ 4 ]

These data are consistent with research findings about the mental health and well-being of early learners as well. According to the CDC, 17.4 percent of children aged 2-8 years had a diagnosed mental, behavioral, or developmental disorder. [ 5 ] This same report showed an increase to 22 percent for children living below 100 percent of the Federal poverty level.

Educators and other school staff are also facing mental health and well-being challenges. According to the Department's National Center for Education Statistics February 2024 School Pulse Panel, 91 percent of public school principals or vice principals reported some level of concern about the mental health of the teachers or staff at their school and 41 percent reported being “moderately” or “extremely” concerned about this issue. [ 6 ]

Educator mental health and well-being carry implications for educator retention, and thus downstream effects on student educational opportunity and achievement, making them a critical priority for States and LEAs. A recent study found that 23 percent of teachers said they were likely to leave their job by the end of the 2022-2023 school year, and that Black teachers, who are more likely to teach in under-resourced schools without the necessary student and educator supports, were significantly more likely to intend to leave than their peers. [ 7 ] The same study found that teachers who reported poor well-being as a reason for likely leaving their job were more likely than their counterparts to say that they intended to leave their job.

The SCTAC program is designed to build SEA capacity to address the particular needs of the high-need LEAs in their State. In responding to the areas identified in the absolute priority, we encourage projects that provide technical assistance and capacity building to high-need LEAs to address chronic absenteeism and increase student engagement and school belonging, for example, by implementing strong student connection and engagement activities or school climate improvement strategies. One evidence-based example that SEAs may consider is mentorship programs that focus on small-group counseling and help youth build skills and competencies in choosing non-violent behaviors and using de-escalation and violence reduction strategies. [ 8 ] The Department is also interested in activities that enhance supportive services for youth impacted by community violence, such as trauma recovery, restorative practices, and community violence intervention and prevention strategies. For example, programs that use a trauma-informed approach to support social-emotional well-being have been reported to decrease depression and increase self-confidence in participants. [ 9 ] When considering these different programs and activities, we encourage applicants to propose projects that include strategies specific to supporting young people, with a focus on those most historically underserved. [ 10 ]

Applicants may propose projects that also support the mental health, well-being, and academic development of early learners, for example, by providing technical assistance and capacity building services on how to remove barriers and increase access to social, emotional, and mental health supports; provide support to caregivers; strengthen family engagement activities; enhance home visits to encourage school and attendance readiness; and establish participatory approaches with families and community partners. [ 11 ]

We also welcome applications that propose to support educator mental health and well-being so that educators are well positioned to support their students. For example, SEAs may consider proposing projects to better understand and address experiences, particularly in the school building, that impact educator mental health and well-being.

SEAs may also propose projects that provide technical assistance and capacity building to high-need LEAs on youth mental health programs that include peer-to-peer support programs, such as mental health “first aid” programs (as defined in this notice). Studies of youth mental health first aid have shown positive results in terms of providing youth peers, and adults who work closely with youth, the ability to recognize the signs, symptoms, and risk factors of mental health and substance use challenges. [ 12 ] Additionally, youth peer-to-peer support programs, such as peer counseling, youth mental health peer ambassadors, student-led clubs, and restorative justice programs, are additional promising practices. Broader studies of peer-to-peer programs show a variety of positive outcomes including reduced re-hospitalization rates, better quality of life outcomes, higher engagement rates, and improved whole health. [ 13 ]

These important activities can help high-need LEAs create safe, welcoming, and inclusive learning environments that support student mental health and well-being, which is foundational to improving academic and other outcomes for all students.

This notice invites applications for SCTAC grants. The Department developed budget ranges for each potential applicant by ranking every State according to the State's share of their Stronger Connections Grant, Title IV, Part A funds (see the “Award Information” section of this notice for more information). SEAs should develop budgets that are appropriate to their proposed projects and consistent with the budget range established for their State. Department staff will review applications to determine if an SEA met the absolute priority, addressed the application requirements, and proposed a budget consistent with their State's established budget range. Peer reviewers will review applications to determine the extent to which applicants met the established selection criteria.

Priorities: This competition has one absolute priority. We are establishing this priority for this grant competition in accordance with section 437(d)(1) of the General Education Provisions Act (GEPA), 20 U.S.C. 1232(d)(1) .

Absolute Priority: For FY 2024 and any subsequent year in which we make awards from the list of unfunded applications from this competition, this priority is an absolute priority. Under 34 CFR 75.105(c)(3) , we consider only applications that meet this priority.

This priority is:

Projects to provide technical assistance and capacity building to high-need LEAs to support inclusive, evidence-based programs and activities related to mental health and well-being for early learners, school-age children and youth, or educators and other school staff.

To meet this priority, applicants must propose a project that would provide technical assistance and capacity building to high-need LEAs to help them establish or expand evidence-based, inclusive practices in one or more of the following areas:

(a) Student attendance and engagement programs designed to reduce rates of chronic absenteeism and improve attendance, engagement, connectedness, and well-being, and that include, for example:

(1) Increasing family engagement and communication through a variety of approaches, ranging from broad communication, such as the use of texting to share real-time data on attendance, to more targeted engagement, such as home visits to identify additional student and family supports that might be needed;

(2) Improving school climate and implementing anti-bullying efforts;

(3) Providing student mentorship programs, such as student success coaches and mentors, and supportive peer groups;

(4) Adopting early warning intervention systems and multi-tiered systems of support; and

(5) Establishing school and local educational agency attendance and engagement teams and providing them with real time and actionable data.

(b) Programs for early learners that support their mental health, well-being, and academic development through activities such as—

(1) Increasing access for early learners to social, emotional, and mental health supports, and reducing barriers to access for underserved students; and

(2) Building strong partnerships among parents, families, caregivers, social service organizations, mental health care personnel, personnel providing services to students served under section 619 of the Individuals with Disabilities Education Act (IDEA), and community-based organizations serving pre-kindergarten, kindergarten, and early grade students to improve the environment, relationships, engagement, attendance, and experiences that impact children's early development.

(c) Programs to improve educator and school staff mental health and well-being, so that these individuals may better support students and are more likely to remain in the profession, through activities such as—

(1) Developing methods, measurement tools, or interventions for high-need LEAs to understand and address the factors, including school-related factors, that impact educator mental health and well-being. This includes developing the methods and tools for disaggregating data (by, for example, teacher race/ethnicity and years of experience) to get a complete understanding of the factors and who is impacted.

(2) Strengthening social, emotional, and behavioral competencies among adults;

(d) Peer-to-peer mental health or youth mental health programs supported by schools or qualified local organizations to reduce the impact of unaddressed mental health challenges such as those caused by exposure to community violence and to increase student belonging and connection, including, for example—

(1) Implementing peer-to-peer programs that raise awareness around core mental health concepts and destigmatize mental health care, provide training for students to identify protective  [ 14 ] and risk factors related to mental health and well-being, and connect students to resources and professionals for additional support; and

(2) Implementing youth mental health first aid programs to train students on how to identify, understand, and respond to signs of common mental health and well-being challenges.

(e) Improving data collection, use, and reporting as it relates to implementation and performance management of an SEA's SCG program.

Definitions: The following definitions apply to the FY 2024 SCTAC grant program competition and any subsequent year in which we make awards from the list of unfunded applications for this competition.

We are establishing definitions of “high-need LEA,” “early learner,” and “mental health first aid” in accordance with section 437(d)(1) of GEPA, 20 U.S.C. 1232(d)(1) . The definitions of “local educational agency” and “State educational agency” are from section 8101 of the ESEA ( 20 U.S.C. 7801 ). The definitions of “baseline,” “demonstrates a rationale,” “evidence-based,” “experimental study,” “logic model,” “moderate evidence,” “project component,” “quasi-experimental design study,” “relevant outcome,” and “What Works Clearinghouse Handbooks (WWC Handbooks)” are from 34 CFR 77.1 .

Baseline means the starting point from which performance is measured and targets are set.

Demonstrates a rationale means a key project component included in the project's logic model is informed by research or evaluation findings that suggest the project component is likely to improve relevant outcomes.

Early learner means any person from birth to age 8 who is eligible for a free public education in the State.

Evidence-based means the proposed project component is supported by one or more of strong evidence, moderate evidence, promising evidence, or evidence that demonstrates a rationale.

Experimental study means a study that is designed to compare outcomes between two groups of individuals (such as students) that are otherwise equivalent except for their assignment to either a treatment group receiving a project component or a control group that does not. Randomized controlled trials, regression discontinuity design studies, and single-case design studies are the specific types of experimental studies that, depending on their design and implementation ( e.g., sample attrition in randomized controlled trials and regression discontinuity design studies), can meet What Works Clearinghouse (WWC) standards without reservations as described in the WWC Handbooks (as defined in this notice):

(i) A randomized controlled trial employs random assignment of, for example, students, teachers, classrooms, or schools to receive the project component being evaluated (the treatment group) or not to receive the project component (the control group).

(ii) A regression discontinuity design study assigns the project component being evaluated using a measured variable ( e.g., assigning students reading below a cutoff score to tutoring or developmental education classes) and controls for that variable in the analysis of outcomes.

(iii) A single-case design study uses observations of a single case ( e.g., a student eligible for a behavioral intervention) over time in the absence and presence of a controlled treatment manipulation to determine whether the outcome is systematically related to the treatment.

High-need LEA has the meaning ascribed it by the SEA under its Stronger Connections Grant program.

Local educational agency means a public board of education or other public authority legally constituted within a State for either administrative control or direction of, or to perform a service function for, public elementary schools or secondary schools in a city, county, township, school district, or other political subdivision of a State, or of or for a combination of school districts or counties that is recognized in a State as an administrative agency for its public elementary schools or secondary schools.

(a) The term includes any other public institution or agency having administrative control and direction of a public elementary school or secondary school.

(b) The term includes an elementary or secondary school funded by the Bureau of Indian Education (BIE) but only to the extent that including the school makes the school eligible for programs for which specific eligibility is not provided to the school in another provision of law and the school does not have a student population that is smaller than the student population of the LEA receiving assistance under the ESEA with the smallest student population, except that the school shall not be subject to the jurisdiction of any SEA other than the BIE.

(c) The term includes educational service agencies and consortia of those agencies.

(d) The term includes the SEA in a State in which the SEA is the sole educational agency for all public schools.

Logic model (also referred to as a theory of action) means a framework that identifies key project components of the proposed project ( i.e., the active “ingredients” that are hypothesized to be critical to achieving the relevant outcomes) and describes the theoretical and operational relationships among the key project components and relevant outcomes.

Mental health first aid means the skills needed to recognize and respond to signs and symptoms of mental health and substance use challenges and know how to connect individuals to additional resources, including professional help.

Moderate evidence means that there is evidence of effectiveness of a key project component in improving a relevant outcome for a sample that overlaps with the populations or settings proposed to receive that component, based on a relevant finding from one of the following:

(i) A practice guide prepared by the WWC using version 2.1, 3.0, 4.0, or 4.1 of the WWC Handbooks reporting a “strong evidence base” or “moderate evidence base” for the corresponding practice guide recommendation;

(ii) An intervention report prepared by the WWC using version 2.1, 3.0, 4.0, or 4.1 of the WWC Handbooks reporting a “positive effect” or “potentially positive effect” on a relevant outcome based on a “medium to large” extent of evidence, with no reporting of a “negative effect” or “potentially negative effect” on a relevant outcome; or

(iii) A single experimental study (as defined in this notice) or quasi-experimental design study (as defined in this notice) reviewed and reported by the WWC using version 2.1, 3.0, 4.0, or 4.1 of the WWC Handbooks, or otherwise assessed by the Department using version 4.1 of the WWC Handbook, as appropriate, and that—

(A) Meets WWC standards with or without reservations;

(B) Includes at least one statistically significant and positive ( i.e., favorable) effect on a relevant outcome;

(C) Includes no overriding statistically significant and negative effects on relevant outcomes reported in the study or in a corresponding WWC intervention report prepared under version 2.1, 3.0, 4.0, or 4.1 of the WWC Handbooks; and

(D) Is based on a sample from more than one site ( e.g., State, county, city, school district, or postsecondary campus) and includes at least 350 students or other individuals across sites. Multiple studies of the same project component that each meet requirements in paragraphs (iii)(A), (B), and (C) of this definition may together satisfy this requirement.

Project component means an activity, strategy, intervention, process, product, practice, or policy included in a project. Evidence may pertain to an individual project component or to a combination of project components ( e.g., training teachers on instructional practices for English learners and follow-on coaching for these teachers).

Promising evidence means that there is evidence of the effectiveness of a key project component in improving a relevant outcome, based on a relevant finding from one of the following:

(i) A practice guide prepared by WWC reporting a “strong evidence base” or “moderate evidence base” for the corresponding practice guide recommendation;

(ii) An intervention report prepared by the WWC reporting a “positive effect” or “potentially positive effect” on a relevant outcome with no reporting of a “negative effect” or “potentially negative effect” on a relevant outcome; or

(iii) A single study assessed by the Department, as appropriate, that—

(A) Is an experimental study, a quasi-experimental design study, or a well-designed and well-implemented correlational study with statistical controls for selection bias ( e.g., a study using regression methods to account for differences between a treatment group and a comparison group); and

(B) Includes at least one statistically significant and positive ( i.e., favorable) effect on a relevant outcome.

Quasi-experimental design study means a study using a design that attempts to approximate an experimental study by identifying a comparison group that is similar to the treatment group in important respects. This type of study, depending on design and implementation ( e.g., establishment of baseline equivalence of the groups being compared), can meet WWC standards with reservations, but cannot meet WWC standards without reservations, as described in the WWC Handbooks.

Relevant outcome means the student outcome(s) or other outcome(s) the key project component is designed to improve, consistent with the specific goals of the program.

State educational agency (SEA) means the agency primarily responsible for the State supervision of public elementary schools and secondary schools.

Strong evidence means that there is evidence of the effectiveness of a key project component in improving a relevant outcome for a sample that overlaps with the populations and settings proposed to receive that component, based on a relevant finding from one of the following:

(i) A practice guide prepared by the WWC using version 2.1, 3.0, 4.0, or 4.1 of the WWC Handbooks reporting a “strong evidence base” for the corresponding practice guide recommendation;

(ii) An intervention report prepared by the WWC using version 2.1, 3.0, 4.0, or 4.1 of the WWC Handbooks reporting a “positive effect” on a relevant outcome based on a “medium to large” extent of evidence, with no reporting of a “negative effect” or “potentially negative effect” on a relevant outcome; or

(iii) A single experimental study reviewed and reported by the WWC using version 2.1, 3.0, 4.0, or 4.1 of the WWC Handbooks, or otherwise assessed by the Department using version 4.1 of the WWC Handbooks, as appropriate, and that—

(A) Meets WWC standards without reservations;

(B) Includes at least one statistically significant and positive ( i.e., favorable) effect on a relevant outcome;

(C) Includes no overriding statistically significant and negative effects on relevant outcomes reported in the study or in a corresponding WWC intervention report prepared under version 2.1, 3.0, 4.0, or 4.1 of the WWC Handbooks; and

(D) Is based on a sample from more than one site ( e.g., State, county, city, school district, or postsecondary campus) and includes at least 350 students or other individuals across sites. Multiple studies of the same project component that each meet requirements in paragraphs (iii)(A), (B), and (C) of this definition may together satisfy the requirement in this paragraph (iii)(D).

What Works Clearinghouse Handbooks (WWC Handbooks) means the standards and procedures set forth in the WWC Standards Handbook, Versions 4.0 or 4.1, and WWC Procedures Handbook, Versions 4.0 or 4.1, or in the WWC Procedures and Standards Handbook, Version 3.0 or Version 2.1 (all incorporated by reference, see § 77.2). Study findings eligible for review under WWC standards can meet WWC standards without reservations, meet WWC standards with reservations, or not meet WWC standards. WWC practice guides and intervention reports include findings from systematic reviews of evidence as described in the WWC Handbooks documentation.

Note: The What Works Clearinghouse Procedures and Standards Handbook (Version 4.1), as well as the more recent What Works Clearinghouse Handbooks released in August 2022 (Version 5.0), are available at https://ies.ed.gov/​ncee/​wwc/​Handbooks .

Application Requirements: We are establishing the following application requirements for the FY 2024 grant competition and any subsequent year in which we make awards from the list of unfunded applications for this competition, in accordance with section 437(d)(1) of GEPA, 20 U.S.C. 1232(d)(1) .

Applicants must include the following in their applications:

(1) A description of the criteria the SEA will use to identify the high-need LEAs that will receive technical assistance and capacity building services under this program.

(2) A plan ( i.e., description of key activities, milestones, timeline, resources, performance measures, and partnerships) for providing the proposed technical assistance and capacity building services to high-need LEAs.

(3) A plan for developing and disseminating the technical assistance and capacity building products and resources the SEA develops, as applicable.

Waiver of Proposed Rulemaking: Under the Administrative Procedure Act ( 5 U.S.C. 553 ), the Department generally offers interested parties the opportunity to comment on proposed priorities, requirements, and definitions. Section 437(d)(1) of GEPA, however, allows the Secretary to exempt from rulemaking requirements regulations governing the first grant competition under a new or substantially revised program authority. This is the first grant competition for this program under section 4103(a)(3) of the ESEA and therefore qualifies for this exemption. In order to ensure timely grant awards, the Secretary has decided to forgo public comment on the priority, requirements, and definitions under section 437(d)(1) of GEPA. These requirements and definitions will apply to the FY 2024 grant competition and any subsequent year in which we make awards from the list of unfunded applications from this competition.

Program Authority: Section 4103(a)(3) of the ESEA; Public Law 117-159 (enacted June 25, 2022), Bipartisan Safer Communities Act, Division B, Title II, School Improvement Programs.

Applicable Regulations: (a) The Education Department General Administrative Regulations in 34 CFR parts 75 , 77 , 79 , 81 , 82 , 84 , 97 , 98 , and 99 . (b) The Office of Management and Budget (OMB) Guidelines to Agencies on Governmentwide Debarment and Suspension (Nonprocurement) in 2 CFR part 180 , as adopted and amended as regulations of the Department in 2 CFR part 3485 . (c) The Guidance for Federal Financial Assistance in 2 CFR part 200 , as adopted and amended as regulations of the Department in 2 CFR part 3474 .

Note: The Department will implement the provisions included in the OMB final rule, OMB Guidance for Federal Financial Assistance, which amends 2 CFR parts 25 , 170 , 175 , 176 , 180 , 182 , 183 , 184 , and 200 , on October 1, 2024. Grant applicants that anticipate a performance period start date on or after October 1, 2024 should follow the provisions stated in the OMB Guidance for Federal Financial Assistance ( 89 FR 30046 ) when preparing an application. For more information about these updated regulations please visit: https://www.cfo.gov/​resources/​uniform-guidance/​ .

Note: The regulations in 34 CFR part 79 apply to all applicants except federally recognized Indian Tribes.

Type of Award: Discretionary grants.

Available Funds: $10,930,000.

Project Period: Up to 36 months. Budgets should be developed for a single project period of up to 36 months.

Maximum Awards: An SEA may initially request no more than the maximum amount (as noted below in the designated category ranges) for its project period. If funds remain available after funding each successful applicant at its requested amount, the Department may, to the extent appropriate, increase the awards for successful applicants. If available funds are insufficient to fully award each successful applicant at its requested amount, the Department will ratably reduce the awards for all successful applicants (see the illustrative note following the category ranges below). The budget ranges are as follows:

Category 1—$500,000-$1,000,000: California, Texas, New York, Florida.

Category 2—$250,000-$500,000: Illinois, Pennsylvania, Georgia, Ohio, North Carolina, Michigan, New Jersey.

Category 3—$120,000-$250,000: Arizona, Louisiana, Tennessee, Virginia, Maryland, South Carolina, Alabama, Kentucky, Indiana, Washington, Missouri, Massachusetts, Mississippi, Wisconsin, Oklahoma.

Category 4—$60,000-$150,000: Arkansas, Minnesota, Colorado, Nevada, Connecticut, Oregon, New Mexico, Kansas, Iowa, West Virginia.

Category 5—$50,000-$100,000: Alaska, Delaware, Hawaii, Idaho, Maine, Montana, Nebraska, New Hampshire, North Dakota, Rhode Island, South Dakota, Utah, Vermont, Wyoming, Bureau of Indian Education, District of Columbia, Puerto Rico.

Category 6—$25,000-$50,000: The Outlying Areas of Guam, American Samoa, the Northern Mariana Islands, the United States Virgin Islands.
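
Note: For illustration only, the allocation rule described under Maximum Awards above reduces to simple arithmetic: fund each successful applicant at its requested amount when funds allow; otherwise scale every award down by the same ratio. The sketch below (in Python) assumes requested amounts have already been capped at the applicable category maximum.

```python
def allocate_awards(requests: dict[str, float], available: float) -> dict[str, float]:
    """Fund each request in full if funds allow; otherwise ratably reduce all awards."""
    total_requested = sum(requests.values())
    if total_requested <= available:
        return dict(requests)  # each successful applicant receives its requested amount
    ratio = available / total_requested
    return {applicant: amount * ratio for applicant, amount in requests.items()}

# Hypothetical example: $1,500,000 requested against $1,200,000 available -> each award scaled by 0.8.
print(allocate_awards({"State A": 1_000_000, "State B": 500_000}, 1_200_000))
```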

1. Eligible Applicants: SEAs, as defined in 20 U.S.C. 7801(49) ; and the Bureau of Indian Education. (Section 437(d)(1) of GEPA)

2. a. Cost Sharing or Matching: This program does not require cost sharing or matching.

b. Supplement-Not-Supplant: This competition involves supplement-not-supplant funding requirements. Grantees must use SCTAC funds to supplement, and not supplant, other non-Federal funds that would otherwise be used to pay for activities authorized under the SCTAC program.

c. Indirect Cost Rate Information: This program uses a restricted indirect cost rate. For more information regarding indirect costs, or to obtain a negotiated indirect cost rate, please see www2.ed.gov/​about/​offices/​list/​ocfo/​intro.html .

d. Administrative Cost Limitation: This program does not include any program-specific limitation on administrative expenses. All administrative expenses must be reasonable and necessary and conform to Cost Principles described in 2 CFR part 200 subpart E of the Uniform Guidance.

3. Subgrantees: A grantee under this competition may not award subgrants to entities to directly carry out project activities described in its application.

4. Equitable Services: (a) Grantees must ensure that equitable services are provided to eligible students and teachers in non-public schools as required under section 8501 of the ESEA, including through timely and meaningful consultation with representatives of non-public schools.

(b) The SEA must ensure that a public agency will maintain control of SCTAC funds used to provide services and assistance to non-public school students and teachers.

(c) The SEA must ensure that a public agency will have title to materials, Start Printed Page 53412 equipment, and property purchased with SCTAC funds.

(d) The SEA must ensure that services to non-public school students and teachers with SCTAC funds will be provided by a public agency directly or through a contract with another public or private entity.

Note: This section (4) is not applicable to the BIE.

5. Funding Restrictions: We reference regulations outlining funding restrictions in the Applicable Regulations section of this notice. In addition, we remind applicants that sections 4001(a) and 4001(b) of the ESEA ( 20 U.S.C. 7101 ) apply to this program. Section 4001(a) requires entities receiving funds under this program to obtain prior, written, informed consent from the parent of each child who is under 18 years of age to participate in any mental-health assessment or service that is funded under this program and conducted in connection with an elementary or secondary school. Section 4001(b) prohibits the use of funds for medical services or drug treatment or rehabilitation, except for integrated student supports, specialized instructional support services, or referral to treatment for impacted students, which may include students who are victims of, or witnesses to, crime or who illegally use drugs. This prohibition does not preclude the use of funds to support mental health counseling and support services, including those provided by a mental health services provider outside of school, so long as such services are not medical.

1. Application Submission Instructions: Applicants are required to follow the Common Instructions for Applicants to Department of Education Discretionary Grant Programs, published in the Federal Register on December 7, 2022 ( 87 FR 75045 ) and available at https://www.federalregister.gov/​documents/​2022/​12/​07/​2022-26554/​common-instructions-for-applicants-to-department-of-education-discretionary-grant-programs , which contain requirements and information on how to submit an application.

2. Intergovernmental Review: This program is subject to Executive Order 12372 and the regulations in 34 CFR part 79 . Information about Intergovernmental Review of Federal Programs under Executive Order 12372 is in the application package for this competition.

3. Recommended Page Limit: The project narrative is where you, the applicant, address the absolute priority and application requirements. We recommend that you (1) limit the application narrative to the equivalent of no more than 10 pages and (2) use the following standards:

  • A “page” is 8.5″ × 11″, on one side only, with 1″ margins at the top, bottom, and both sides.
  • Double space (no more than three lines per vertical inch) all text in the application narrative, including titles, headings, footnotes, quotations, references, and captions, as well as all text in charts, tables, figures, and graphs.
  • Use a font that is either 12 point or larger or no smaller than 10 pitch (characters per inch).
  • Use one of the following fonts: Times New Roman, Courier, Courier New, or Arial.

The recommended page limit applies to the project narrative.

1. Selection Criteria: The selection criteria for this program are from 34 CFR 75.210 . The maximum score for all selection criteria is 100 points. The points assigned to each criterion are indicated in parentheses. Non-Federal peer reviewers will evaluate and score each application program narrative against the following selection criteria (an illustrative tally of this point structure follows the criteria below):

(a) Quality of the project design (Up to 60 points)

The Secretary considers the quality of the design of the proposed project. In determining the quality of the design of the proposed project, the Secretary considers the following factors:

(1) The extent to which the goals, objectives, and outcomes to be achieved by the proposed project are clearly specified and measurable. (Up to 30 points)

(2) The extent to which the design of the proposed project is appropriate to, and will successfully address, the needs of the target population or other identified needs. (Up to 30 points)

(b) Quality of the management plan (Up to 30 points)

The Secretary considers the quality of the management plan for the proposed project. In determining the quality of the management plan for the proposed project, the Secretary considers the adequacy of the management plan to achieve the objectives of the proposed project on time and within budget, including clearly defined responsibilities, timelines, and milestones for accomplishing project tasks.

(c) Adequacy of resources (Up to 10 points)

The Secretary considers the adequacy of the resources for the proposed project. In determining the adequacy of resources for the proposed project, the Secretary considers the potential for continued support of the project after Federal funding ends, including, as appropriate, the demonstrated commitment of appropriate entities to such support.
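
Note: For illustration only, the criteria above define a 100-point rubric: up to 60 points for the quality of the project design (split evenly across its two factors), up to 30 points for the quality of the management plan, and up to 10 points for adequacy of resources. The sketch below (in Python; the structure and names are ours, not the Department's) tallies a reviewer's scores and checks each against its maximum.

```python
# Maximum points per criterion, as listed in the selection criteria above.
MAX_POINTS = {
    "project_design_goals": 30,    # (a)(1) clearly specified, measurable goals and outcomes
    "project_design_needs": 30,    # (a)(2) design addresses the needs of the target population
    "management_plan": 30,         # (b) adequacy of the management plan
    "adequacy_of_resources": 10,   # (c) adequacy of resources
}
assert sum(MAX_POINTS.values()) == 100

def total_score(scores: dict[str, float]) -> float:
    """Sum one reviewer's scores after checking each against its criterion maximum."""
    for criterion, points in scores.items():
        if not 0 <= points <= MAX_POINTS[criterion]:
            raise ValueError(f"{criterion}: {points} is outside the 0-{MAX_POINTS[criterion]} range")
    return sum(scores.values())

# Hypothetical reviewer scores totaling 84 of 100 points.
print(total_score({"project_design_goals": 27, "project_design_needs": 25,
                   "management_plan": 24, "adequacy_of_resources": 8}))
```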

2. Review and Selection Process: Non-Federal peer reviewers will review applications to determine the extent to which the applications address the selection criteria.

We remind potential applicants that, in reviewing applications in any discretionary grant competition, the Secretary may consider, under 34 CFR 75.217(d)(3) , the past performance of the applicant in carrying out a previous award, such as the applicant's use of funds, achievement of project objectives, and compliance with grant conditions. The Secretary may also consider whether the applicant failed to submit a timely performance report or submitted a report of unacceptable quality.

In addition, in making a competitive grant award, the Secretary requires various assurances including those applicable to Federal civil rights laws that prohibit discrimination in programs or activities receiving Federal financial assistance from the Department ( 34 CFR 100.4 , 104.5 , 106.4 , 108.8 , and 110.23 ).

3. Risk Assessment and Specific Conditions: Consistent with 2 CFR 200.205 , before awarding grants under this program, the Department conducts a review of the risks posed by applicants. Under 2 CFR 3474.10 , the Secretary may impose specific conditions, and, in appropriate circumstances, high-risk conditions on a grant if the applicant or grantee is not financially stable; has a history of unsatisfactory performance; has a financial or other management system that does not meet the standards in 2 CFR part 200, subpart D ; has not fulfilled the conditions of a prior grant; or is otherwise not responsible.

4. Integrity and Performance System: If you receive an award under this grant program that, over the course of the project period, may exceed the simplified acquisition threshold (currently $250,000), under 2 CFR 200.205(a)(2) , we must make a judgment about your integrity, business ethics, and record of performance under Federal awards—that is, the risk posed by you as an applicant—before we make an award. In doing so, we must consider any information about you that is in the Start Printed Page 53413 integrity and performance system (currently referred to as the Federal Awardee Performance and Integrity Information System (FAPIIS)), accessible through the System for Award Management. You may review and comment on any information about yourself that a Federal agency previously entered and that is currently in FAPIIS.

Please note that, if the total value of your currently active grants, cooperative agreements, and procurement contracts from the Federal Government exceeds $10,000,000, the reporting requirements in 2 CFR part 200, Appendix XII , require you to report certain integrity information to FAPIIS semiannually. Please review the requirements in 2 CFR part 200, Appendix XII , if this grant plus all the other Federal funds you receive exceed $10,000,000.
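
Note: The two dollar figures above trigger different obligations: an award that may exceed the simplified acquisition threshold triggers the Department's pre-award review of integrity and performance information, and more than $10,000,000 in total active Federal awards triggers semiannual reporting to FAPIIS. For illustration only, the sketch below (in Python) encodes those two checks using the threshold values as stated above.

```python
SIMPLIFIED_ACQUISITION_THRESHOLD = 250_000   # pre-award FAPIIS review applies above this amount
FAPIIS_REPORTING_THRESHOLD = 10_000_000      # semiannual integrity reporting applies above this amount

def pre_award_review_required(potential_award_total: float) -> bool:
    """True if the award over the project period may exceed the simplified acquisition threshold."""
    return potential_award_total > SIMPLIFIED_ACQUISITION_THRESHOLD

def semiannual_fapiis_reporting_required(total_active_federal_awards: float) -> bool:
    """True if active grants, cooperative agreements, and contracts together exceed $10,000,000."""
    return total_active_federal_awards > FAPIIS_REPORTING_THRESHOLD

print(pre_award_review_required(300_000), semiannual_fapiis_reporting_required(8_000_000))  # True False
```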

5. In General: In accordance with the Guidance for Federal Financial Assistance located at 2 CFR part 200 , all applicable Federal laws, and relevant Executive guidance, the Department will review and consider applications for funding pursuant to this notice inviting applications in accordance with:

(a) Selecting recipients most likely to be successful in delivering results based on the program objectives through an objective process of evaluating Federal award applications ( 2 CFR 200.205 );

(b) Prohibiting the purchase of certain telecommunication and video surveillance services or equipment in alignment with section 889 of the National Defense Authorization Act of 2019 ( Pub. L. 115-232 ) ( 2 CFR 200.216 );

(c) Providing a preference, to the extent permitted by law, to maximize use of goods, products, and materials produced in the United States ( 2 CFR 200.322 ); and

(d) Terminating agreements in whole or in part to the greatest extent authorized by law if an award no longer effectuates the program goals or agency priorities ( 2 CFR 200.340 ).

1. Award Notices: If your application is successful, we notify your U.S. Representative and U.S. Senators and send you a Grant Award Notification (GAN), or we may send you an email containing a link to access an electronic version of your GAN. We also may notify you informally.

If your application is not evaluated or not selected for funding, we notify you.

2. Administrative and National Policy Requirements: We identify administrative and national policy requirements in the application package and reference these and other requirements in the Applicable Regulations section of this notice. We reference the regulations outlining the terms and conditions of a grant in the Applicable Regulations section of this notice. The Grant Award Notification (GAN) also incorporates your approved application as part of your binding commitments under the grant.

3. Reporting: (a) If you apply for a grant under this competition, you must ensure that you have in place the necessary processes and systems to comply with the reporting requirements in 2 CFR part 170 should you receive funding. This does not apply if you have an exception under 2 CFR 170.110(b) .

(b) At the end of your project period, you must submit a final performance report, including financial information, as directed by the Secretary. The Secretary may also require more frequent performance reports under 34 CFR 75.720(c) . For specific requirements on reporting, please go to www.ed.gov/​fund/​grant/​apply/​appforms/​appforms.html .

4. Performance Measures: For the purpose of Department reporting under 34 CFR 75.110 , we have established the following performance measures for the SCTAC grant program:

(a) The number of technical assistance and capacity-building services provided to assist high-need LEAs.

(b) The number and percentage of high-need LEAs reporting that the technical assistance provided was high-quality, relevant, and useful.

(c) The number and percentage of high-need LEAs reporting an increase in capacity as a result of technical assistance and capacity building services provided.

These measures constitute the Department's indicators of success for this program. Consequently, we advise an applicant for a grant under this program to consider these measures in conceptualizing the approach and evaluation for its proposed project. Each grantee must provide, in its performance reports, data about its progress in meeting these measures.
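
Note: Measures (b) and (c) above are reported as both a number and a percentage of the high-need LEAs served. For illustration only, the sketch below (in Python; the data structure is hypothetical) computes that count and percentage from per-LEA responses.

```python
def count_and_percentage(responses: dict[str, bool]) -> tuple[int, float]:
    """Return the number and percentage of high-need LEAs answering 'yes' for a measure."""
    yes_count = sum(responses.values())
    percentage = 100.0 * yes_count / len(responses) if responses else 0.0
    return yes_count, percentage

# Hypothetical responses to measure (b): was the assistance high-quality, relevant, and useful?
responses = {"LEA 1": True, "LEA 2": True, "LEA 3": False, "LEA 4": True}
print(count_and_percentage(responses))  # (3, 75.0)
```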

Consistent with 34 CFR 75.591 , grantees funded under this program must comply with the requirements of any evaluation of the program conducted by the Department or an evaluator selected by the Department.

Accessible Format: On request to the program contact person listed under FOR FURTHER INFORMATION CONTACT , individuals with disabilities can obtain this document and a copy of the application package in an accessible format. The Department will provide the requestor with an accessible format that may include Rich Text Format (RTF) or text format (txt), a thumb drive, an MP3 file, braille, large print, audiotape, compact disc, or other accessible format.

Electronic Access to This Document: The official version of this document is the document published in the Federal Register . You may access the official edition of the Federal Register and the Code of Federal Regulations at www.govinfo.gov . At this site, you can view this document, as well as all other Department documents published in the Federal Register , in text or PDF. To use PDF, you must have Adobe Acrobat Reader, which is available free at the site.

You may also access Department documents published in the Federal Register by using the article search feature at www.federalregister.gov . Specifically, through the advanced search feature at this site, you can limit your search to documents published by the Department.

Adam Schott,

Principal Deputy Assistant Secretary for Policy and Programs, Delegated the Authority to Perform the Functions and Duties of the Assistant Secretary, Office of Elementary and Secondary Education.


[ FR Doc. 2024-14000 Filed 6-25-24; 8:45 am]

BILLING CODE 4000-01-P
