
Quasi-Experimental Design | Definition, Types & Examples

Published on July 31, 2020 by Lauren Thomas. Revised on January 22, 2024.

Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable.

However, unlike a true experiment, a quasi-experiment does not rely on random assignment. Instead, subjects are assigned to groups based on non-random criteria.

Quasi-experimental design is a useful tool in situations where true experiments cannot be used for ethical or practical reasons.

Quasi-experimental design vs. experimental design

Table of contents

  • Differences between quasi-experiments and true experiments
  • Types of quasi-experimental designs
  • When to use quasi-experimental design
  • Advantages and disadvantages
  • Other interesting articles
  • Frequently asked questions about quasi-experimental designs

There are several common differences between true and quasi-experimental designs.

  • Assignment to treatment: In a true experiment, the researcher randomly assigns subjects to control and treatment groups. In a quasi-experiment, some other, non-random method is used to assign subjects to groups.
  • Control over treatment: In a true experiment, the researcher usually designs the treatment. In a quasi-experiment, the researcher often does not, and instead studies pre-existing groups that received different treatments after the fact.
  • Use of control groups: A true experiment requires the use of control groups. In a quasi-experiment, control groups are not required (although they are commonly used).

Example of a true experiment vs a quasi-experiment

Suppose you want to compare a new therapy against the standard course of treatment at a mental health clinic. For ethical reasons, however, the clinic's directors may not give you permission to randomly assign their patients to treatments. In this case, you cannot run a true experiment.

Instead, you can use a quasi-experimental design.

You can use these pre-existing groups to study the symptom progression of the patients treated with the new therapy versus those receiving the standard course of treatment.


Many types of quasi-experimental designs exist. Here we explain three of the most common types: nonequivalent groups design, regression discontinuity, and natural experiments.

Nonequivalent groups design

In a nonequivalent groups design, the researcher chooses existing groups that appear similar, but where only one of the groups experiences the treatment.

In a true experiment with random assignment, the control and treatment groups are considered equivalent in every way other than the treatment. But in a quasi-experiment where the groups are not random, they may differ in other ways—they are nonequivalent groups.

When using this kind of design, researchers try to account for any confounding variables by controlling for them in their analysis or by choosing groups that are as similar as possible.

This is the most common type of quasi-experimental design.

Regression discontinuity

Many potential treatments that researchers wish to study are designed around an essentially arbitrary cutoff, where those above the threshold receive the treatment and those below it do not.

Near this threshold, the differences between the two groups are often so minimal as to be nearly nonexistent. Therefore, researchers can use individuals just below the threshold as a control group and those just above as a treatment group.

For example, suppose students must score above a cutoff on an entrance exam to attend a particular school. Since the exact cutoff score is arbitrary, the students near the threshold—those who just barely pass the exam and those who fail by a very small margin—tend to be very similar, with the small differences in their scores mostly due to random chance. Any differences in their later outcomes can therefore be attributed to the school they attended.

Natural experiments

In both laboratory and field experiments, researchers normally control which group the subjects are assigned to. In a natural experiment, an external event or situation (“nature”) results in the random or random-like assignment of subjects to the treatment group.

Even though some natural experiments involve random or as-good-as-random assignment, they are not considered true experiments because they are observational in nature.

Although the researchers have no control over the independent variable, they can exploit such an event after the fact to study the effect of the treatment.

In the Oregon Health Study, for example, the state government could not afford to cover everyone it deemed eligible for expanded health insurance, so it instead allocated spots in the program based on a random lottery.

Although true experiments have higher internal validity, you might choose to use a quasi-experimental design for ethical or practical reasons.

Sometimes it would be unethical to provide or withhold a treatment on a random basis, so a true experiment is not feasible. In this case, a quasi-experiment can allow you to study the same causal relationship without the ethical issues.

The Oregon Health Study is a good example. It would be unethical to randomly provide some people with health insurance but purposely prevent others from receiving it solely for the purposes of research.

However, since the Oregon government faced financial constraints and decided to provide health insurance via lottery, studying this event after the fact is a much more ethical approach to studying the same problem.

True experimental design may be infeasible to implement or simply too expensive, particularly for researchers without access to large funding streams.

At other times, too much work is involved in recruiting and properly designing an experimental intervention for an adequate number of subjects to justify a true experiment.

In either case, quasi-experimental designs allow you to study the question by taking advantage of data that has previously been paid for or collected by others (often the government).

Quasi-experimental designs have various pros and cons compared to other types of studies.

Advantages:

  • Higher external validity than most true experiments, because they often involve real-world interventions instead of artificial laboratory settings.
  • Higher internal validity than other non-experimental types of research, because they allow you to better control for confounding variables than other types of studies do.

Disadvantages:

  • Lower internal validity than true experiments—without randomization, it can be difficult to verify that all confounding variables have been accounted for.
  • The use of retrospective data that has already been collected for other purposes can be inaccurate, incomplete, or difficult to access.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.
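As an illustration, simple random assignment can be sketched in a few lines of Python; the participant IDs below are hypothetical:

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the sample and split it into two equal-sized groups."""
    rng = random.Random(seed)          # seeded for reproducibility
    shuffled = list(participants)      # copy so the input is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (control, treatment)

# hypothetical participant IDs
control, treatment = randomly_assign(["p1", "p2", "p3", "p4", "p5", "p6"], seed=42)
```

Because every ordering of the shuffled list is equally likely, each participant has the same chance of landing in either group.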

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity, as they can use real-world interventions instead of artificial laboratory settings.

Cite this Scribbr article


Thomas, L. (2024, January 22). Quasi-Experimental Design | Definition, Types & Examples. Scribbr. Retrieved August 26, 2024, from https://www.scribbr.com/methodology/quasi-experimental-design/


Quasi-Experimental Research Design – Types, Methods


Quasi-Experimental Design

Quasi-experimental design is a research method that seeks to evaluate the causal relationships between variables, but without the full control over the independent variable(s) that is available in a true experimental design.

In a quasi-experimental design, the researcher uses an existing group of participants that is not randomly assigned to the experimental and control groups. Instead, the groups are selected based on pre-existing characteristics or conditions, such as age, gender, or the presence of a certain medical condition.

Types of Quasi-Experimental Design

There are several types of quasi-experimental designs that researchers use to study causal relationships between variables. Here are some of the most common types:

Non-Equivalent Control Group Design

This design involves selecting two groups of participants that are similar in every way except for the independent variable(s) that the researcher is testing. One group receives the treatment or intervention being studied, while the other group does not. The two groups are then compared to see if there are any significant differences in the outcomes.

Interrupted Time-Series Design

This design involves collecting data on the dependent variable(s) over a period of time, both before and after an intervention or event. The researcher can then determine whether there was a significant change in the dependent variable(s) following the intervention or event.

Pretest-Posttest Design

This design involves measuring the dependent variable(s) before and after an intervention or event, but without a control group. This design can be useful for determining whether the intervention or event had an effect, but it does not allow for control over other factors that may have influenced the outcomes.

Regression Discontinuity Design

This design involves selecting participants based on a specific cutoff point on a continuous variable, such as a test score. Participants on either side of the cutoff point are then compared to determine whether the intervention or event had an effect.

Natural Experiments

This design involves studying the effects of an intervention or event that occurs naturally, without the researcher’s intervention. For example, a researcher might study the effects of a new law or policy that affects certain groups of people. This design is useful when true experiments are not feasible or ethical.

Data Analysis Methods

Here are some data analysis methods that are commonly used in quasi-experimental designs:

Descriptive Statistics

This method involves summarizing the data collected during a study using measures such as mean, median, mode, range, and standard deviation. Descriptive statistics can help researchers identify trends or patterns in the data, and can also be useful for identifying outliers or anomalies.
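As a minimal sketch (the scores below are made up), Python's standard `statistics` module covers these summaries directly:

```python
import statistics

scores = [12, 15, 11, 14, 18, 13, 15, 20]   # hypothetical posttest scores

mean = statistics.mean(scores)               # 14.75
median = statistics.median(scores)           # 14.5
mode = statistics.mode(scores)               # 15
spread = statistics.stdev(scores)            # sample standard deviation
value_range = max(scores) - min(scores)      # 9
```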

Inferential Statistics

This method involves using statistical tests to determine whether the results of a study are statistically significant. Inferential statistics can help researchers make generalizations about a population based on the sample data collected during the study. Common statistical tests used in quasi-experimental designs include t-tests, ANOVA, and regression analysis.
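For instance, Welch's two-sample t statistic (which does not assume equal group variances) can be computed by hand as below; the group scores are hypothetical, and in practice a library routine such as `scipy.stats.ttest_ind` would also give the p-value:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic: mean difference over its standard error."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

treated = [78, 85, 90, 74, 88, 81]   # hypothetical posttest scores
control = [70, 72, 79, 68, 75, 71]
t = welch_t(treated, control)        # roughly 3.42
```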

Propensity Score Matching

This method is used to reduce bias in quasi-experimental designs by matching participants in the intervention group with participants in the control group who have similar characteristics. This can help to reduce the impact of confounding variables that may affect the study’s results.
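A greedy nearest-neighbor match without replacement is one simple way to pair units; the propensity scores below are hypothetical stand-ins for the output of a previously fitted logistic regression:

```python
def nearest_neighbor_match(treated, controls):
    """Pair each treated unit with the unmatched control whose
    propensity score is closest; both inputs are (id, score) pairs."""
    available = dict(controls)
    pairs = []
    for unit, score in treated:
        if not available:
            break                      # ran out of controls to match
        match = min(available, key=lambda c: abs(available[c] - score))
        pairs.append((unit, match))
        del available[match]           # match without replacement
    return pairs

# hypothetical propensity scores from an earlier model
treated = [("t1", 0.62), ("t2", 0.35)]
controls = [("c1", 0.30), ("c2", 0.60), ("c3", 0.90)]
pairs = nearest_neighbor_match(treated, controls)   # [("t1", "c2"), ("t2", "c1")]
```

After matching, outcomes are compared only between the paired units, which reduces (but does not eliminate) imbalance on the observed covariates.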

Difference-in-differences Analysis

This method is used to compare the difference in outcomes between two groups over time. Researchers can use this method to determine whether a particular intervention has had an impact on the target population over time.
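In its simplest two-group, two-period form, the estimator is just a difference of differences; the group means below are hypothetical:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Change in the treated group minus the change in the control group."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# hypothetical mean outcomes before and after an intervention
effect = diff_in_diff(treat_pre=50.0, treat_post=62.0,
                      ctrl_pre=48.0, ctrl_post=53.0)   # 12.0 - 5.0 = 7.0
```

The control group's change (5.0) stands in for what would have happened to the treated group without the intervention, so the remaining 7.0 is attributed to the treatment, assuming the two groups would otherwise have followed parallel trends.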

Interrupted Time Series Analysis

This method is used to examine the impact of an intervention or treatment over time by comparing data collected before and after the intervention or treatment. This method can help researchers determine whether an intervention had a significant impact on the target population.
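A deliberately naive sketch is to compare the mean outcome before and after the intervention point (real interrupted time series analyses use segmented regression to model trends and autocorrelation as well); the monthly counts below are made up:

```python
import statistics

def level_change(series, intervention_index):
    """Mean outcome after the intervention minus the mean before it."""
    pre = series[:intervention_index]
    post = series[intervention_index:]
    return statistics.mean(post) - statistics.mean(pre)

# hypothetical monthly accident counts; intervention begins at month 6
monthly = [30, 32, 31, 29, 33, 31, 24, 23, 25, 22, 24, 23]
change = level_change(monthly, intervention_index=6)   # 23.5 - 31 = -7.5
```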

Regression Discontinuity Analysis

This method is used to compare the outcomes of participants who fall on either side of a predetermined cutoff point. This method can help researchers determine whether an intervention had a significant impact on the target population.
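One simple version compares mean outcomes within a narrow bandwidth on either side of the cutoff; the (score, outcome) pairs below are hypothetical:

```python
def rdd_estimate(records, cutoff, bandwidth):
    """Mean outcome just above the cutoff minus the mean just below it.
    records: (running_variable_score, outcome) pairs."""
    above = [y for x, y in records if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in records if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

# hypothetical (exam score, later outcome) pairs; treatment cutoff at 60
data = [(58, 70), (59, 72), (57, 71), (60, 78), (61, 80), (62, 79)]
effect = rdd_estimate(data, cutoff=60, bandwidth=5)   # 79.0 - 71.0 = 8.0
```

Narrowing the bandwidth makes the two groups more comparable but leaves fewer observations, so the choice of bandwidth is a bias-variance trade-off.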

Steps in Quasi-Experimental Design

Here are the general steps involved in conducting a quasi-experimental design:

  • Identify the research question: Determine the research question and the variables that will be investigated.
  • Choose the design: Choose the appropriate quasi-experimental design to address the research question. Examples include the pretest-posttest design, non-equivalent control group design, regression discontinuity design, and interrupted time series design.
  • Select the participants: Select the participants who will be included in the study. Participants should be selected based on specific criteria relevant to the research question.
  • Measure the variables: Measure the variables that are relevant to the research question. This may involve using surveys, questionnaires, tests, or other measures.
  • Implement the intervention or treatment: Implement the intervention or treatment to the participants in the intervention group. This may involve training, education, counseling, or other interventions.
  • Collect data: Collect data on the dependent variable(s) before and after the intervention. Data collection may also include collecting data on other variables that may impact the dependent variable(s).
  • Analyze the data: Analyze the data collected to determine whether the intervention had a significant impact on the dependent variable(s).
  • Draw conclusions: Draw conclusions about the relationship between the independent and dependent variables. If the results suggest a causal relationship, then appropriate recommendations may be made based on the findings.

Quasi-Experimental Design Examples

Here are some examples of quasi-experimental designs in practice:

  • Evaluating the impact of a new teaching method: In this study, a group of students are taught using a new teaching method, while another group is taught using the traditional method. The test scores of both groups are compared before and after the intervention to determine whether the new teaching method had a significant impact on student performance.
  • Assessing the effectiveness of a public health campaign: In this study, a public health campaign is launched to promote healthy eating habits among a targeted population. The behavior of the population is compared before and after the campaign to determine whether the intervention had a significant impact on the target behavior.
  • Examining the impact of a new medication: In this study, a group of patients is given a new medication, while another group is given a placebo. The outcomes of both groups are compared to determine whether the new medication had a significant impact on the targeted health condition.
  • Evaluating the effectiveness of a job training program : In this study, a group of unemployed individuals is enrolled in a job training program, while another group is not enrolled in any program. The employment rates of both groups are compared before and after the intervention to determine whether the training program had a significant impact on the employment rates of the participants.
  • Assessing the impact of a new policy : In this study, a new policy is implemented in a particular area, while another area does not have the new policy. The outcomes of both areas are compared before and after the intervention to determine whether the new policy had a significant impact on the targeted behavior or outcome.

Applications of Quasi-Experimental Design

Here are some applications of quasi-experimental design:

  • Educational research: Quasi-experimental designs are used to evaluate the effectiveness of educational interventions, such as new teaching methods, technology-based learning, or educational policies.
  • Health research: Quasi-experimental designs are used to evaluate the effectiveness of health interventions, such as new medications, public health campaigns, or health policies.
  • Social science research: Quasi-experimental designs are used to investigate the impact of social interventions, such as job training programs, welfare policies, or criminal justice programs.
  • Business research: Quasi-experimental designs are used to evaluate the impact of business interventions, such as marketing campaigns, new products, or pricing strategies.
  • Environmental research: Quasi-experimental designs are used to evaluate the impact of environmental interventions, such as conservation programs, pollution control policies, or renewable energy initiatives.

When to use Quasi-Experimental Design

Here are some situations where quasi-experimental designs may be appropriate:

  • When the research question involves investigating the effectiveness of an intervention, policy, or program : In situations where it is not feasible or ethical to randomly assign participants to intervention and control groups, quasi-experimental designs can be used to evaluate the impact of the intervention on the targeted outcome.
  • When the sample size is small: In situations where the sample size is small, it may be difficult to randomly assign participants to intervention and control groups. Quasi-experimental designs can be used to investigate the impact of an intervention without requiring a large sample size.
  • When the research question involves investigating a naturally occurring event : In some situations, researchers may be interested in investigating the impact of a naturally occurring event, such as a natural disaster or a major policy change. Quasi-experimental designs can be used to evaluate the impact of the event on the targeted outcome.
  • When the research question involves investigating a long-term intervention: In situations where the intervention or program is long-term, it may be difficult to randomly assign participants to intervention and control groups for the entire duration of the intervention. Quasi-experimental designs can be used to evaluate the impact of the intervention over time.
  • When the research question involves investigating the impact of a variable that cannot be manipulated : In some situations, it may not be possible or ethical to manipulate a variable of interest. Quasi-experimental designs can be used to investigate the relationship between the variable and the targeted outcome.

Purpose of Quasi-Experimental Design

The purpose of quasi-experimental design is to investigate the causal relationship between two or more variables when it is not feasible or ethical to conduct a randomized controlled trial (RCT). Quasi-experimental designs attempt to emulate the randomized control trial by mimicking the control group and the intervention group as much as possible.

The key purpose of quasi-experimental design is to evaluate the impact of an intervention, policy, or program on a targeted outcome while controlling for potential confounding factors that may affect the outcome. Quasi-experimental designs aim to answer questions such as: Did the intervention cause the change in the outcome? Would the outcome have changed without the intervention? And was the intervention effective in achieving its intended goals?

Quasi-experimental designs are useful in situations where randomized controlled trials are not feasible or ethical. They provide researchers with an alternative method to evaluate the effectiveness of interventions, policies, and programs in real-life settings. Quasi-experimental designs can also help inform policy and practice by providing valuable insights into the causal relationships between variables.

Overall, the purpose of quasi-experimental design is to provide a rigorous method for evaluating the impact of interventions, policies, and programs while controlling for potential confounding factors that may affect the outcome.

Advantages of Quasi-Experimental Design

Quasi-experimental designs have several advantages over other research designs, such as:

  • Greater external validity : Quasi-experimental designs are more likely to have greater external validity than laboratory experiments because they are conducted in naturalistic settings. This means that the results are more likely to generalize to real-world situations.
  • Ethical considerations: Quasi-experimental designs often involve naturally occurring events, such as natural disasters or policy changes. This means that researchers do not need to manipulate variables, which can raise ethical concerns.
  • More practical: Quasi-experimental designs are often more practical than experimental designs because they are less expensive and easier to conduct. They can also be used to evaluate programs or policies that have already been implemented, which can save time and resources.
  • No need for random assignment: Quasi-experimental designs do not require random assignment, which can be difficult or impossible in some cases, such as when studying the effects of a natural disaster. Researchers can still make causal inferences, although they must use statistical techniques to control for potential confounding variables.
  • Greater generalizability : Quasi-experimental designs are often more generalizable than experimental designs because they include a wider range of participants and conditions. This can make the results more applicable to different populations and settings.

Limitations of Quasi-Experimental Design

There are several limitations associated with quasi-experimental designs, which include:

  • Lack of Randomization: Quasi-experimental designs do not involve randomization of participants into groups, which means that the groups being studied may differ in important ways that could affect the outcome of the study. This can lead to problems with internal validity and limit the ability to make causal inferences.
  • Selection Bias: Quasi-experimental designs may suffer from selection bias because participants are not randomly assigned to groups. Participants may self-select into groups or be assigned based on pre-existing characteristics, which may introduce bias into the study.
  • History and Maturation: Quasi-experimental designs are susceptible to history and maturation effects, where the passage of time or other events may influence the outcome of the study.
  • Lack of Control: Quasi-experimental designs may lack control over extraneous variables that could influence the outcome of the study. This can limit the ability to draw causal inferences from the study.
  • Limited Generalizability: Quasi-experimental designs may have limited generalizability because the results may only apply to the specific population and context being studied.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



The Use of Two Media of Instruction in Biology: A Quasi-Experimental Study

Ijaems Journal

This paper determined the effectiveness of the two media of instruction, English and Filipino, in selected topics in Biology using quasi-experimental research. Two sections of Grade 8 students were the respondents of this study. The researchers found that the difference in scores of the two groups was statistically significant. Students who were subjected to English as a medium of instruction recorded a significantly higher posttest score than those students who were taught using Filipino. Thus, English as a medium of instruction is more effective in teaching selected topics in Biology.

Related Papers

Andrew Envulu Clement

The study investigated the effects of a demonstrational instructional strategy on senior secondary school students' academic achievement in Biology. The study became necessary because of the unavailability of instructional materials for teaching biology in secondary schools. It employed a quasi-experimental design, specifically the pretest-posttest nonequivalent group design. One hundred students, fifty (50) male and fifty (50) female, were randomly selected from the senior secondary one class as the sample of the study. Experts validated the instrument, the Biology Achievement Test (BAT). Three research questions were answered and two hypotheses were tested, and the data were analyzed using means and standard deviations. The results revealed that students taught using the demonstrational instructional strategy with improvised materials performed better than students taught using conventional materials; male students did not perform better than their female counterparts in Biology; and rural students performed better than urban students in Biology. The results do not suggest ordinal interaction effects between method and gender on students' achievement, because at all levels of gender the mean scores were higher for the improvised demonstration instructional strategy; the results do suggest ordinal interaction effects between method and location, because at all levels of location the mean scores were higher for improvised instructional materials than for conventional materials. There was a significant difference in the mean scores of students taught using improvised instructional materials and those taught using conventional instructional materials; there was no significant difference in the mean achievement scores of male and female students in Biology; and there was a significant difference in the mean achievement scores of urban and rural students in Biology. The interaction effect of method and gender on students' mean achievement scores in Biology was not statistically significant, nor was the interaction effect of method and location. Based on the findings and implications, it was recommended that the teaching of Biology in secondary schools be conducted in a manner that helps students effectively understand and learn the concepts taught, and it was suggested that further research be carried out on this topic using a true experimental research design.


Devdatta Lad

Biology or Biological science is a branch of science that studies living organisms. The main aim of the present study is to achieve a comparative analysis as to which method is effective in Biology teaching the traditional method i.e. using Blackboard or multimedia method i.e. using PowerPoint Slides, Printed transparencies on OHP, etc. In the present research study the control group was taught certain topics of biology using the traditional chalk and blackboard, whereas, the same topics of biology were taught to the experimental students by using multimedia PowerPoint presentation. In the pretest of the control group, the value of mean, median and mode are 6.56, 7.5 and 9 respectively. In the pretest of the experimental group, the value of mean, median and mode are 6.26, 5.5 and 4 respectively. In the posttest of the control group, the value of mean, median and mode are 16.5, 16.5 and 14 respectively. In the posttest of the experimental group, the value of mean, median and mode are...

Francisco M. Ebio

The study generally aimed to determine the instructional mechanisms utilized by Biology teachers in the five (5) public secondary schools in Naval, Division of Biliran and identify its relationship to students ’ academic performance. It utilized the descriptive survey method. The survey included seven (7) Biology teachers and 904 Biology students. Teacher-respondents were generally considered young. Most of them were females. Majority of them were taking units leading to M.A. degree. Most of them were least experienced in the job. The in-service trainings they have attended were limited only to school and division levels. The instructional materials they utilized in teaching Biology subject were fairly adequate. With respect to instructional mechanisms, lecture and recitation method was most often used while computer-aided instruction was rarely used. Of the evaluation measures, multiple choice was the only measure used always while diagrams or pictures and role play were rarely use...

Proceedings of the The 3rd International Conference Community Research and Service Engagements, IC2RSE 2019, 4th December 2019, North Sumatra, Indonesia

Martina Napitupulu

nur sanniea

Psychology and Education: A Multidisciplinary Journal

Psychology and Education , Mar G. Ocampo

This study determined the effect of computer-aided lessons on the performance of high school sophomore students in selected topics in Biology at Baras National High School, Baras, Rizal. The selected topics were based on the items with the fewest correct responses on the Division Achievement Test given by the researcher: Cell, Reproduction, Life Energy, and Organ Systems. A pre-test was developed and pilot-tested on 50 third-year high school students; the test questions were item-analyzed and used as the pre-test and post-test of the study. The CAI in Biology used in the study was issued by the Department of Education as one of the instructional supplies of the school. The respondents were selected through purposive sampling and comprised two sections of the second year, each consisting of forty-five (45) students. The first group was the control group, taught using the traditional method, while the second group was taught using Computer-Aided Instruction. Since the respondents were heterogeneous in nature, variables such as grades were not considered; the study focused on the effect of the computer-aided instructional material as reflected in the pre-test and post-test results. The questionnaire checklist used was adopted from the study of Robles. A pre-test was administered to the two groups prior to the use of the traditional method and computer-aided instruction, after which a post-test was administered. The data gathered were statistically analyzed and interpreted, and the results were used to develop enhanced computer-aided instructional activities in Biology. Those taught the traditional way had an average raw score of 9.04, with an equivalent score of 75 and a coefficient of variation of 36.28, corresponding to poor performance. Those taught with computer-aided lessons in Biology had an average raw score of 8.67, with an equivalent score of 75 and a coefficient of variation of 28.49, also corresponding to poor performance. The students performed best in the topic Life Energy, followed by Organ Systems, then Cell Structure, and performed least on Reproduction. For the topic Organ Systems, the obtained F value was 17.971 with a p-value of 0.000, which is less than the alpha of .05; the null hypothesis was therefore rejected. There is a significant difference between the performance of the students exposed to computer-aided lessons and those taught using the traditional method.


International Journal of Science, Technology, Engineering and Mathematics

Romel C. Mutya

This study investigates the effectiveness of Computer-Based Instruction (CBI) in teaching Biology to 7th graders of a secondary night school in Cebu City, Philippines. A pretest and posttest quasi-experimental design with a control group was utilized with two groups of students, of which one was exposed to CBI and the other to the conventional lecture method (CLM). An Instructional Materials Motivation Survey (IMMS) was used to assess its motivational characteristics. Data gathered were analyzed using descriptive statistics: frequency count and percentage, mean and standard deviation, and t-test. Findings revealed that both groups had Fairly Satisfactory performance in the pretest, which implies that the students had low knowledge of the topic. The study also found that both groups significantly increased their performances from the pretests to the posttests, implying the value of both CLM and CBI. Ultimately, the study revealed that the use of CBI is more effective than CLM, as seen...




Research Methodologies Guide

Quasi-Experimental Design


Quasi-Experimental Design is a unique research methodology because it is characterized by what it lacks. For example, Abraham & MacDonald (2011) state:

"Quasi-experimental research is similar to experimental research in that there is manipulation of an independent variable. It differs from experimental research because either there is no control group, no random selection, no random assignment, and/or no active manipulation."

This type of research is often performed in cases where a control group cannot be created or random selection cannot be performed. This is often the case in certain medical and psychological studies. 

For more information on quasi-experimental design, review the resources below: 

Where to Start

Below are a few tools and online guides that can help you start your quasi-experimental research. These include free online resources and resources available only through the ISU Library.

  • Quasi-Experimental Research Designs by Bruce A. Thyer This pocket guide describes the logic, design, and conduct of the range of quasi-experimental designs, encompassing pre-experiments, quasi-experiments making use of a control or comparison group, and time-series designs. An introductory chapter describes the valuable role these types of studies have played in social work, from the 1930s to the present. Subsequent chapters delve into each design type's major features, the kinds of questions it is capable of answering, and its strengths and limitations.
  • Experimental and Quasi-Experimental Designs for Research by Donald T. Campbell; Julian C. Stanley. Call Number: Q175 C152e Written in 1967 but still used heavily today, this book examines research designs for experimental and quasi-experimental research, with examples and judgments about each design's validity.

Online Resources

  • Quasi-Experimental Design From the Web Center for Social Research Methods, this is a very good overview of quasi-experimental design.
  • Experimental and Quasi-Experimental Research From Colorado State University.
  • Quasi-experimental design--Wikipedia, the free encyclopedia Wikipedia can be a useful place to start your research; check the citations at the bottom of the article for more information.
  • Last Updated: Aug 12, 2024 4:07 PM
  • URL: https://instr.iastate.libguides.com/researchmethods


Open Access

Peer-reviewed

Research Article

The effect of proactive, interactive, two-way texting on 12-month retention in antiretroviral therapy: Findings from a quasi-experimental study in Lilongwe, Malawi

Caryl Feldacker, Robin E. Klabbers, Jacqueline Huwa, Christine Kiruthu-Kamamia, Agness Thawani, Petros Tembo, Joseph Chintedza, Geldert Chiwaya, Aubrey Kudzala

Affiliations: Department of Global Health, University of Washington, Seattle, WA, United States of America; International Training and Education Center for Health (I-TECH), Seattle, WA, United States of America; Department of Emergency Medicine, University of Washington, Seattle, WA, United States of America; Lighthouse Trust, Lilongwe, Malawi

* E-mail: [email protected]

  • Published: August 29, 2024
  • https://doi.org/10.1371/journal.pone.0298494


Retaining clients on antiretroviral therapy (ART) is challenging, especially during the first year on ART. Mobile health (mHealth) interventions show promise to close retention gaps. We aimed to assess reach (who received the intervention?) and effectiveness (did it work?) of a hybrid two-way texting (2wT) intervention to improve ART retention at a large public clinic in Lilongwe, Malawi.

Between August 2021 and June 2023, in a quasi-experimental study, outcomes were compared between two cohorts of new ART clients: 1) those opting into 2wT, who received automated, weekly motivational short messaging service (SMS) messages and response-requested appointment reminders; and 2) a matched historical cohort receiving standard of care (SoC). Reach was defined as the proportion of clients ≤6 months from ART initiation who were eligible for 2wT. 2wT effectiveness was assessed in time-to-event analysis. Retention was presented in a Kaplan-Meier plot and compared between 2wT and SoC using a log-rank test. The effect of 2wT on ART dropout (lost to follow-up or stopped ART) was estimated using Fine-Gray competing risk regression models, adjusting for sex, age and WHO HIV stage at ART initiation.

Of 1,146 clients screened, 501 were eligible for 2wT, a reach of 44%. Lack of phone (393/645; 61%) and illiteracy (149/645; 23%) were the most common ineligibility reasons. Among 468 participants exposed to 2wT, 12-month probability of ART retention was 91% (95% CI: 88% - 94%) compared to 76% (95% CI: 72% - 80%) among 468 SoC participants (p<0.001). Compared to SoC, 2wT participants had a 65% lower hazard of ART dropout at any timepoint (sub-distribution hazard ratio 0.35, 95% CI: 0.24–0.51; p<0.001).

Conclusions

2wT did not reach all clients. For those who opted in, 2wT significantly increased 12-month ART retention. Expansion of 2wT as a complement to other retention interventions should be considered in other low-resource, routine ART settings.

Citation: Feldacker C, Klabbers RE, Huwa J, Kiruthu-Kamamia C, Thawani A, Tembo P, et al. (2024) The effect of proactive, interactive, two-way texting on 12-month retention in antiretroviral therapy: Findings from a quasi-experimental study in Lilongwe, Malawi. PLoS ONE 19(8): e0298494. https://doi.org/10.1371/journal.pone.0298494

Editor: Hamufare Dumisani Dumisani Mugauri, University of Zimbabwe Faculty of Medicine: University of Zimbabwe College of Health Sciences, ZIMBABWE

Received: January 30, 2024; Accepted: July 15, 2024; Published: August 29, 2024

Copyright: © 2024 Feldacker et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript and its Supporting Information files.

Funding: Research reported in this publication was supported by the Fogarty International Center of the National Institutes of Health ( https://www.fic.nih.gov/ ) under Award Number R21TW011658/R33TW011658, under PI Feldacker and multiple PI Tweya. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funders played no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

In sub-Saharan Africa (SSA), persistent gaps in retention on antiretroviral therapy (ART) among specific groups and geographies of people living with HIV (PLHIV) threaten decades of impressive progress [ 1 ]. Loss to follow up (LTFU) is highest among those newly on ART through 6 months in care [ 2 , 3 ]. Gaps in retention are a problem: even short treatment interruptions can lead to increased client morbidity, mortality, drug resistance, and HIV transmission risk [ 4 ]. Most efforts to address LTFU are reactive, waiting for clients to miss visits before intervening, resulting in tracing delays that reduce the likelihood of finding, returning, and retaining clients in care [ 5 , 6 ]. Addressing retention gaps is costly, often relying on healthcare workers (HCWs) to call or trace clients identified as LTFU in-person [ 7 ]. Recent reductions in global funding, chronic shortages of HCWs, and increasing client volumes exacerbate retention challenges. Proactive, lower-intensity, effective retention interventions, especially for ART clients in their first year in care, are needed in routine, low-resource SSA settings.

Numerous mobile health (mHealth) interventions show promise to significantly increase ART retention (alive in care, adherence to ART, visit compliance) among adults [ 8 – 14 ]. Effectiveness, however, is not guaranteed; not all mHealth interventions are associated with improved ART retention [ 15 – 19 ]. Previous mHealth intervention research suggests that several mHealth intervention characteristics raise the likelihood of ART retention impact. First, lower technology approaches that rely on short-message service (SMS) which requires only feature phones appear better suited to low- or middle-income country (LMIC) settings [ 20 , 21 ]. SMS-focused interventions show high acceptability [ 11 , 22 – 24 ] and reduce digital health equity concerns associated with apps that require smartphones [ 25 ]. Second, interactive interventions that enhance communication between clients and HCWs are more effective than one-way blast communication to engage clients in care [ 13 , 26 ]. Interaction potentially diminishes message fatigue and facilitates more personalized, intensive support when needed [ 27 ]. Third, engagement of diverse stakeholders throughout the design, testing, and evaluation process creates ownership and buy-in [ 28 ], helping tailor the right interventions to the local context [ 29 ]. Lastly, iterative monitoring and evaluation (M&E) of mHealth in accordance with digital health best practices suggested by the World Health Organization (WHO) helps to ensure these interventions complement, as opposed to conflict with, ongoing health system strengthening [ 30 , 31 ].

Malawi is an ideal location to assess mHealth to improve client retention. Malawi is a low-resource country with an adult HIV prevalence of ~7% [ 32 ]. Although progress towards 95-95-95 (the global targets stating that, by 2025, 95% of people living with HIV should be aware of their HIV status, 95% of those diagnosed with HIV should receive sustained ART, and 95% of those receiving ART should be virally suppressed) appears on track [ 4 ], LTFU is high, especially during the first year on ART [ 33 ]. Five years after ART initiation, only 54% of PLHIV are retained in ART care [ 34 ]. Given pervasive HCW shortages [ 35 ], alternatives to human resource-intensive solutions are needed. In Malawi, the Ministry of Health (MoH) has tested several ART-related innovations at Lighthouse Trust (LT), one of the largest ART providers and a WHO-recognized ART Centre of Excellence in Lilongwe, the capital [ 36 ]. LT provides integrated HIV care to 38,000 ART clients in its two flagship clinics in urban Lilongwe: 25,000 at Martin Preuss Centre (MPC) and 13,000 at Lighthouse clinic (LH). At LT, retention at twelve months post-ART initiation is estimated at 73%, falling to 63% by 24 months.

In 2006, LT established an intensive client tracing program, “Back-To-Care” (B2C), to trace and return clients to care after they miss ART dispensing visits. B2C does not provide visit reminders. As part of B2C, every week health workers manually generate a tracing list of clients who missed their ART visit by ≥14 days. Subsequently, up to three calls or home visits are attempted to encourage each client on this list who misses their scheduled appointment to return to care. B2C has been well-recognized for its success returning clients to care [ 37 – 41 ]. However, B2C, like other reactive tracing efforts, is highly resource intensive [ 38 ]. At MPC clinic from July-September, 2023, there were 18,842 scheduled ART visits; 1,798 clients (10%) missed visits by ≥14 days and were referred to B2C. With five, full-time B2C tracers, only 40% (719/1798) of potential LTFU clients during that period were successfully found. B2C tracing efforts are stretched, leading to delayed or missed tracing. LT needs proactive, effective, retention innovations that reflect the reality of routine, low-resource, public settings.

In 2021, LT and partners at the University of Washington’s International Training and Education Center for Health (I-TECH) and Medic developed a two-way texting (2wT) system to improve early retention at MPC clinic. 2wT is a hybrid (automated and interactive) intervention combining weekly non-HIV-related motivational messaging and response requested scheduled ART visit reminders, aiming to provide proactive retention support to prevent care gaps before they happen. Early 2wT usability assessment among new ART initiates (within 6 months of initiation) demonstrated high client acceptability and support for the 2wT approach [ 42 ].

We aimed to assess the impact of the 2wT intervention on our primary research objective, to improve 12-month retention among new ART initiates at MPC, using a quasi-experimental design. We employed an implementation science (IS) approach to enhance the quality, speed, and impact of translating 2wT research findings into routine practice [ 43 ]. We applied the RE-AIM framework (reach, effectiveness, adoption, implementation, maintenance) to guide our evaluation [ 44 ] and to further understand for whom, where, why and how 2wT works [ 45 ]. In this paper, we assess 2wT reach by describing the participant flow through 2wT screening, eligibility, and enrollment. With our primary objective to improve ART retention, we examine 2wT effectiveness by comparing ART retention at 12 months between 1) new ART clients who opted into 2wT (intervention) and 2) a historical cohort of routine MPC new initiates who received standard of care (SoC) (comparison). We hypothesized that 2wT would improve 12-month retention.

The study was conducted at MPC, LT's largest urban clinic in Lilongwe, Malawi. LT operates as part of MoH ART service delivery and employs the MoH electronic medical record system (EMRS) at MPC [ 46 ]. On average, MPC initiates 450 PLHIV on ART per quarter following the test and treat strategy. At ART registration, clients' demographics, phone number(s) and WHO HIV stage (a 4-level clinical staging system for HIV ranging from 'stage 1' (asymptomatic) to 'stage 4' (AIDS)) are captured in the EMRS. During the first three to six months after ART initiation, clients are seen monthly, after which, if clients are stable and adherent to ART, visit frequency is typically decreased to once every three or six months. The vast majority of clients are initiated on an oral first-line ART regimen of tenofovir/lamivudine/dolutegravir (TDF + 3TC + DTG), or if they weigh <30kg, on abacavir/lamivudine/dolutegravir (ABC + 3TC + DTG) [ 47 ]. ART is taken daily, and treatment is lifelong.

2wT behavior change theory

2wT design was informed by the theory that prompts can spur action to change [ 48 ]. By using SMS to target key individual-level constructs proven effective in previous HIV-related behavior change programs [ 49 – 52 ], it is expected that the 2wT intervention will improve early retention on ART ( Fig 1 ). To increase behavioral control over timely attendance at scheduled ART visits, participants are reminded by SMS in advance to provide time to arrange transport or free their schedules. 2wT helps improve participant motivation to make decisions for their own wellness, including adhering to ART, via weekly non-HIV (neutral) messages or educational content that support participant engagement in their health. Initial 2wT participant education and subsequent interaction with 2wT officers encourage self-efficacy by providing participants with an opportunity to request visit date changes, report transfers, or communicate about any issues related to their visits.


ART: Antiretroviral therapy; SMS: Short messaging service; 2wT: Two-way texting.

https://doi.org/10.1371/journal.pone.0298494.g001

Retention interventions

Standard of care (SoC) retention support.

The historical comparison cohort received SoC at MPC ( Box 1 ). As of mid-2019, as part of SoC, all new ART initiates at MPC were assigned an ART Buddy, a PLHIV on ART at LT who would serve as a peer "buddy" for their first 12 months on ART only. The role of the ART Buddy is to guide clients through early ART care by providing visit reminders, health education, disclosure support, and adherence counseling. ART Buddies are considered Expert Clients (a cadre of paid, trained peer-supporters). Each ART Buddy is paid a nominal fee to support ~15 new ART initiates. Communication between ART Buddies and clients both before and after missed visits occurs mainly through phone calls, which are favored for the ability to verify client identity, but SMS communication is also utilized.

The 2wT and historical SoC comparison cohort received different retention support ( Box 1 ).

Box 1. Retention support received by 2wT and SoC participants


https://doi.org/10.1371/journal.pone.0298494.t001

2wT intervention retention support.

The 2wT approach was developed in accordance with the Principles for Digital Development [ 53 ] and based on the open-source Community Health Toolkit (CHT) [ 54 ]. 2wT's easy-to-use CHT-based design resulted from an iterative human-centered design (HCD) process that incorporated feedback from LT clients, HCWs, and retention officers, described previously [ 55 ]. In brief, 2wT is a free, proactive, mHealth intervention that combines automated motivation messages with interactive individualized visit reminders ( Fig 2 ) [ 42 ]. 2wT does not use an app, nor does it require participants to download anything. ART clients who opt into 2wT are sent weekly one-way, "blast" motivation messages containing non-HIV-related content such as generic messages of encouragement (e.g., "You can do it", "You are making great choices every day") and general health advice (e.g., "Drink boiled and clean water", "Seek help when you do not feel well"). Additionally, 2wT participants receive response-requested individualized visit reminders 3 days and 1 day before their scheduled clinic visit. Participants who confirm visit reminders with a "yes" end the dialogue. A "no" triggers interactive SMS with a HCW in which participants may change their appointment dates, report transfers, or chat about other visit-related issues. Participants who miss a scheduled clinic visit are sent follow-up reminders 2, 5, and 11 days after the appointment (unless they return to care). Messages are stopped upon participant request, transfer or death. There are no costs to participants for sending or receiving 2wT messages. 2wT clients were not assigned ART Buddies.
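The reminder cadence described above (3 days and 1 day before a visit; 2, 5, and 11 days after a missed visit) can be sketched as a short date calculation. This is an illustrative sketch only: the function names are assumptions, and the real system is built on the Community Health Toolkit, not this code.

```python
from datetime import date, timedelta

PRE_VISIT_OFFSETS = (3, 1)      # days before the scheduled visit
POST_MISS_OFFSETS = (2, 5, 11)  # days after a missed visit

def reminder_dates(visit: date) -> list[date]:
    """Dates on which pre-visit reminders are sent."""
    return [visit - timedelta(days=d) for d in PRE_VISIT_OFFSETS]

def follow_up_dates(missed_visit: date) -> list[date]:
    """Dates of follow-up reminders after a missed visit."""
    return [missed_visit + timedelta(days=d) for d in POST_MISS_OFFSETS]

visit = date(2022, 3, 10)
assert reminder_dates(visit) == [date(2022, 3, 7), date(2022, 3, 9)]
```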


ART: Antiretroviral therapy; B2C: Back to care; EMRS: Electronic medical record system; SMS: Short messaging service; 2wT: Two-way texting.

https://doi.org/10.1371/journal.pone.0298494.g002

Study design, cohort creation and sample size

We hypothesized that 2wT would improve 12-month retention by at least 10%, improving from 73% at baseline to 83% at 12 months. A quasi-experimental design was used to assess the effect of 2wT on 12-month ART retention by comparing retention among a cohort of 2wT participants to that among a matched historical comparison cohort receiving SoC at MPC one year prior to 2wT implementation. The intervention cohort was matched 1:1 on age (bands of 5 years), sex, and WHO stage at initiation via random selection from an MPC dataset of 1,455 ART clients who had a phone number. Based on a baseline 12-month retention of 73% at MPC in March 2021, an expected 2wT effect size of ≥10%, and a power of 90%, 438 participants would be required in each arm to detect a difference in 12-month retention between 2wT and SoC. We aimed to recruit a 10–15% larger sample size to accommodate 2wT participant transfer-outs, withdrawals and deaths. 2wT participants initiated ART between May 2021 and April 2022 and were followed through 12 months post-ART initiation. The historical, matched, comparison SoC cohort initiated ART at MPC between November 2019 and November 2020, before the 2wT launch.
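The 1:1 matching step above (5-year age bands, sex, WHO stage, random selection without replacement from the historical pool) can be sketched in a few lines. Record field names here are assumptions, not the study's actual data schema.

```python
import random
from collections import defaultdict

def stratum(client: dict) -> tuple:
    """Matching key: 5-year age band, sex, WHO stage at initiation."""
    return (client["age"] // 5, client["sex"], client["who_stage"])

def match_cohorts(intervention, pool, seed=1):
    """Return {intervention id: matched historical id}, matching
    randomly without replacement within exact strata."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for client in pool:
        by_stratum[stratum(client)].append(client)
    matches = {}
    for client in intervention:
        candidates = by_stratum[stratum(client)]
        if candidates:  # clients with no same-stratum candidate stay unmatched
            pick = candidates.pop(rng.randrange(len(candidates)))
            matches[client["id"]] = pick["id"]
    return matches
```

In the study, the pool corresponds to the MPC dataset of 1,455 clients with a phone number; popping each pick enforces one historical match per intervention participant.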

2wT recruitment: Screening and enrollment

During the 2wT enrollment period (August 2021 to April 2022), all new ART clients at MPC were screened for study eligibility, including: 1) initiated ART <6 months prior; 2) aged ≥18 years; 3) possessed a phone at enrollment; 4) had basic literacy; 5) completed informed consent; and 6) received their 2wT enrollment text. Eligible participants enrolled to receive messages in either Chichewa or English, based on their preference, but could send SMS in any language. Participants received instructions on how to respond to 2wT messages.
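Criteria 1–4 above form a simple pre-screen that could be sketched as a predicate. This is a hypothetical illustration: field names are invented, and consent (5) and receipt of the enrollment text (6) happen after this check in the study flow.

```python
from datetime import date

def eligible_for_2wt(client: dict, today: date) -> bool:
    """Pre-screen against criteria 1-4 (illustrative field names)."""
    months_on_art = (today - client["art_start"]).days / 30.44  # approx. months
    return (
        months_on_art < 6          # initiated ART <6 months prior
        and client["age"] >= 18    # adults only
        and client["has_phone"]    # phone access at enrollment
        and client["literate"]     # basic literacy
    )

client = {"art_start": date(2021, 8, 1), "age": 33,
          "has_phone": True, "literate": True}
print(eligible_for_2wt(client, date(2021, 10, 1)))  # True
```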

Data collection

For both the 2wT and SoC cohorts, data on routine MoH ART outcomes were extracted from the EMRS for participants' first twelve months post-ART initiation (May 2021 to June 2023 for 2wT and November 2019 to January 2022 for SoC). These outcomes could be: 1) alive and on ART (alive and retained in ART care on the date of record review); 2) stopped ART treatment (alive but informed the clinic they stopped ART); 3) transferred (documented move to another facility); 4) dead (all-cause mortality); or 5) LTFU (no return to clinic within 60 days of a scheduled visit) [ 56 ]. To ensure correct ART outcome ascertainment, intervention and comparison group outcomes were updated for 60 days beyond the 12-month period. Participants who requested to stop visit reminder messages were considered to have withdrawn from the study and were classified as withdrew. SMS data were obtained from the 2wT database and the SMS aggregator, Africa's Talking.
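The LTFU rule above (no return within 60 days of a scheduled visit) is a date comparison at review time. A minimal sketch, assuming hypothetical field names rather than the study's EMRS schema:

```python
from datetime import date, timedelta

LTFU_WINDOW = timedelta(days=60)

def is_ltfu(scheduled_visit: date, last_seen: date, review_date: date) -> bool:
    """True if the client has not returned within 60 days of a scheduled visit."""
    if last_seen >= scheduled_visit:
        return False  # returned to care on or after the scheduled date
    return review_date - scheduled_visit > LTFU_WINDOW

print(is_ltfu(date(2022, 1, 10), date(2021, 12, 1), date(2022, 4, 1)))  # True
```

Note that a client only becomes classifiable as LTFU once the 60-day window has elapsed, which is why the study extended outcome ascertainment 60 days beyond the 12-month period.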

Study outcomes

Reach was defined as the proportion of screened PLHIV eligible and willing to participate in 2wT and was measured using screening data. As definitions of ART retention vary [ 57 ], we define 2wT effectiveness in improving 12-month ART retention using Malawi MoH ART outcome definitions. We consider 2wT to be effective if a greater proportion of 2wT clients compared to SoC clients is retained as "alive and on ART", in contrast to having dropped out of care, which we defined as having either "stopped ART treatment" or become "LTFU", 12 months post-ART initiation. We defined retained and dropout as mutually exclusive and opposite categories. We examined message response rates and 6-month retention in ART care as secondary analyses.
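The retained/dropout definitions above amount to a three-way mapping of the MoH outcomes: "alive and on ART" is retained; "stopped ART treatment" or "LTFU" is dropout; transfer, death, and withdrawal fall into neither category and are treated as censoring in the time-to-event analysis. A minimal sketch using the outcome labels quoted in the text:

```python
DROPOUT = {"stopped ART treatment", "LTFU"}
RETAINED = {"alive and on ART"}

def classify(outcome: str) -> str:
    """Map a 12-month MoH ART outcome to the analysis category."""
    if outcome in RETAINED:
        return "retained"
    if outcome in DROPOUT:
        return "dropout"
    return "censored"  # transferred, dead, withdrew

print(classify("LTFU"))  # dropout
```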

Statistical analysis

2wT participants were considered exposed to the intervention and included in analysis if at least one visit reminder SMS was successfully delivered to their phone within the 12-month follow-up period. Descriptive statistics were used to present matching success between the 2wT and SoC cohorts, to present MoH ART outcomes at 6- and 12-month follow-up for 2wT vs. SoC clients, and to describe 2wT reach. Counts and frequencies report the number of messages sent by the 2wT platform and by participants, as well as message response rates. Chi-square tests were performed to compare the distribution of ART outcomes between the 2wT and SoC cohorts. 2wT participants who withdrew from the study were excluded from the denominator.
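The chi-square test named above compares observed outcome counts against the counts expected under independence of arm and outcome. A didactic, pure-Python sketch of the statistic (a real analysis would use a statistics package, which also supplies the p-value); the example table is made-up counts, not study data:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a list-of-rows contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# rows = study arms, columns = ART outcomes (illustrative counts)
print(round(chi_square_stat([[20, 10], [10, 20]]), 2))  # 6.67
```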

Time-to-event analysis was conducted to compare retention probability between the 2wT and SoC arms and to assess the effect of the 2wT intervention on the primary outcome of interest: ART retention, defined as having the MoH ART outcome "alive and on ART". ART dropout (either "stopped ART treatment" or "LTFU") was considered a failure event. "Transferred", "withdrew", and "dead" were censoring events. Participants were censored twelve months after ART initiation. 2wT clients who enrolled in the study after ART initiation were considered to come under observation at the time of enrollment but entered the analysis accounting for their accumulated time on ART. A Kaplan-Meier survival plot, chosen for its widespread use, simplicity and ease of interpretation, was created to provide a visual representation of the probability of ART retention in both study arms over time [ 34 , 58 ]. Probabilities of being retained alive on ART six and twelve months post-ART initiation are reported, comparing 2wT and SoC groups using a log-rank test.
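The Kaplan-Meier estimate described here multiplies, at each dropout time, the running survival probability by (1 - events/at-risk), while censored records (transfer, death, withdrawal) simply leave the risk set. A minimal pure-Python sketch on toy data; this is a didactic illustration, not the study's analysis code:

```python
def kaplan_meier(durations, events):
    """Product-limit estimator: [(time, survival)] at each event time.
    events: 1 = dropout, 0 = censored."""
    records = sorted(zip(durations, events))
    n = len(records)  # number currently at risk
    surv, curve, i = 1.0, [], 0
    while i < len(records):
        t = records[i][0]
        d = exits = 0
        # gather all subjects tied at time t
        while i < len(records) and records[i][0] == t:
            exits += 1
            d += records[i][1]
            i += 1
        if d:
            surv *= 1 - d / n  # product-limit step at an event time
            curve.append((t, surv))
        n -= exits
    return curve

curve = kaplan_meier([2, 3, 3, 5, 7], [1, 1, 0, 1, 0])
print([(t, round(s, 2)) for t, s in curve])  # [(2, 0.8), (3, 0.6), (5, 0.3)]
```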

To address potential bias in the Kaplan-Meier results resulting from the presence of competing risks, we conducted a sensitivity analysis in which death was considered a competing risk. Cumulative incidence functions (CIFs) were used to assess dropout in the 2wT and SoC study arms [ 59 ] and findings were qualitatively compared with the Kaplan-Meier results. Using the Fine-Gray competing risks approach [ 60 ], the association between study arm (2wT or SoC) and ART dropout was modelled, adjusting for sex, age and WHO HIV stage at ART initiation to assess the ability of 2wT to reduce dropout, and thereby improve ART retention. We assessed the proportional sub-distribution hazards assumption using graphical methods and assessed interaction between the intervention and relevant covariates. The unadjusted sub-distribution hazard ratio (sHR) as well as the adjusted sHRs for all covariates were reported with 95% confidence intervals (CI).
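The CIF mentioned above differs from 1 minus the Kaplan-Meier estimate: each dropout is weighted by the probability of still being free of any event (dropout or death), so a competing death cannot inflate the apparent dropout risk. A minimal nonparametric CIF sketch on toy data (the Fine-Gray regression itself requires a statistics package; event codes and this standalone implementation are illustrative only):

```python
def cumulative_incidence(durations, events, cause=1):
    """Nonparametric CIF. events: 0 = censored, 1..k = event type codes.
    Returns [(time, CIF for the given cause)] at each event time."""
    records = sorted(zip(durations, events))
    n = len(records)  # currently at risk
    surv, cif, out, i = 1.0, 0.0, [], 0
    while i < len(records):
        t = records[i][0]
        exits = d_all = d_cause = 0
        while i < len(records) and records[i][0] == t:
            exits += 1
            if records[i][1] > 0:
                d_all += 1
            if records[i][1] == cause:
                d_cause += 1
            i += 1
        if d_cause:
            cif += surv * d_cause / n  # weight by all-event-free probability
        if d_all:
            out.append((t, cif))
        surv *= 1 - d_all / n          # all-cause event-free probability
        n -= exits
    return out

# dropout (1) with death (2) as a competing risk; the last subject is censored
cif = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0], cause=1)
print([(t, round(p, 2)) for t, p in cif])  # [(1, 0.25), (2, 0.25), (3, 0.5)]
```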

The study protocol was approved by the Malawi National Health Sciences Research Committee (#20/06/2565) and the University of Washington, Seattle, USA ethics review board (STUDY000101060). At enrolment, 2wT participants provided written informed consent in either Chichewa or English, according to their preference. SoC clients did not consent as only de-identified, routine monitoring and evaluation data was collected from EMRS.

2wT Reach: Enrollment

The study team screened 1,146 new ART clients at MPC to yield a cohort of 501 2wT participants who were eligible and enrolled in the intervention, an intervention reach of 44% ( Fig 3 ). The most common reasons for ineligibility were lack of phone access (393, 61%), illiteracy (149, 23%), and age under 18 years (48, 7%). Of the 501 enrolled participants, 468 (94%) were exposed to the intervention and included in the analysis. Most 2wT participants were female (56%), had an average age of 33 years, and presented with WHO HIV stage 1 or 2. Among 2wT participants, 373 (80%) were enrolled at the ART initiation visit while the remaining 95 (20%) were enrolled during a subsequent visit, on average 99 days (standard deviation (SD): 53 days) after ART initiation. The matching process successfully created similarity in matched demographic and clinical characteristics between the 2wT and SoC groups ( Table 1 ).


ART: Antiretroviral therapy; 2wT: Two-way texting.

https://doi.org/10.1371/journal.pone.0298494.g003


https://doi.org/10.1371/journal.pone.0298494.t002

2wT platform engagement

During the study period, the 2wT platform recorded a total of 31,861 messages ( Fig 4 ). The 2wT system sent 27,859 SMS (87%): 18,093 motivational messages (65%); 4,561 visit reminders (16%); 1,468 missed visit reminders (5%); and 3,737 other messages (13%) ( Fig 4 ). The delivery success rate was 76% for motivation messages; 79% for visit reminders; and 75% for missed appointment reminders. Of all 31,861 messages, 4,002 (13%) were sent by participants. Participants responded to 39% of successfully delivered pre-visit reminders (proactive) and 32% of successfully delivered post-missed-visit reminders (reactive). Of the 16 (3%) participants who requested to stop 2wT messaging, 5 (31%) lost interest, 5 (31%) noted confidentiality concerns, and 3 (19%) noted no longer needing texts to remember their appointments.


https://doi.org/10.1371/journal.pone.0298494.g004

MoH-defined ART outcomes at 6 and 12 months

The distribution of ART outcomes differed between the 2wT and SoC arms six months post-ART initiation (p<0.001) ( Table 2 ). Six months post-ART initiation, 88% were alive and in care in the 2wT arm as compared to 76% in the SoC arm. The 2wT arm had lower LTFU (5% vs. 11%) and lower stopped-treatment (1% vs. 5%) proportions than SoC. Differences persisted at 12 months post-ART initiation (p<0.001). In the 2wT arm, 81% of participants were alive and in care as compared to 66% in SoC. The 2wT arm had lower LTFU (7% vs. 18%) and stopped-treatment (2% vs. 6%) proportions compared to SoC. Roughly equal percentages of 2wT and SoC participants had transferred out (8% vs. 9%) or died (2% vs. 2%) at twelve months.


https://doi.org/10.1371/journal.pone.0298494.t003

Kaplan-Meier retention plots (alive and on ART) in the first 12 months on ART

Kaplan-Meier curves revealed differences in retention on ART over time between the 2wT and SoC arms (p<0.001) ( Fig 5A ). In survival analysis, the probability of being retained on ART six months post-ART initiation was 92% (95% CI: 90%–95%) in the 2wT group compared to 79% (95% CI: 75%–83%) in the SoC group, corresponding to dropout probabilities at six months of 8% and 21%, respectively. At 12 months, the probability of retention on ART was 91% among 2wT participants (95% CI: 88%–94%) and 76% among SoC participants (95% CI: 72%–80%), corresponding to dropout probabilities at 12 months of 9% and 24%, respectively.
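For readers unfamiliar with the method, the Kaplan-Meier estimator multiplies, at each observed dropout time, the fraction of at-risk clients who remain in care. A minimal pure-Python sketch on hypothetical follow-up data (not study data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of remaining in care.

    times:  follow-up time per subject (e.g., months since ART start).
    events: 1 if dropout observed at that time, 0 if censored
            (still in care at last contact).
    Returns (time, survival probability) at each dropout time.
    """
    data = sorted(zip(times, events))
    n = len(data)
    surv = 1.0
    curve = []
    idx = 0
    while idx < n:
        t = data[idx][0]
        same = [e for tt, e in data if tt == t]  # everyone leaving at t
        d = sum(same)                            # dropouts at time t
        at_risk = n - idx                        # still at risk before t
        if d > 0:
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        idx += len(same)
    return curve

# Hypothetical follow-up of five clients (months, dropout flag):
print(kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0]))
# -> approximately [(1, 0.8), (2, 0.6), (3, 0.3)]
```

Censored subjects still count toward the at-risk denominator up to their last contact, which is what distinguishes this estimator from a naive proportion.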


A: Kaplan-Meier curve of retention on ART among 2wT and SoC clients over time, displaying 95% confidence intervals and log-rank test p-value. B: Cumulative Incidence Function of ART dropout (LTFU and stopping ART treatment) among 2wT and SoC clients over time.

https://doi.org/10.1371/journal.pone.0298494.g005

2wT effectiveness: Competing risks analysis

After accounting for competing risks, cumulative incidence functions (CIFs) demonstrated probabilities of dropout consistent with the Kaplan-Meier results ( Fig 5B ): 8% for 2wT compared to 21% for SoC at six months, and 9% for 2wT compared to 24% for SoC at 12 months. The results indicate higher retention among 2wT clients as compared to SoC clients. In unadjusted Fine-Gray competing risk regression analysis, being in the 2wT arm was associated with a 65% lower hazard of ART dropout at any point during follow-up (sHR 0.35, 95% CI: 0.24–0.51) as compared to SoC. After controlling for sex, age, and WHO HIV stage in adjusted analysis, the effect of 2wT on dropout was essentially unchanged (sHR 0.35, 95% CI: 0.24–0.51). In adjusted analysis, being female was associated with a 29% lower hazard of dropout from ART care as compared to being male (sHR 0.71, 95% CI: 0.52–0.99), and each additional year of age was associated with a 4% reduction in the hazard of dropout (sHR 0.96, 95% CI: 0.94–0.98). There was no difference in the hazard of ART dropout between individuals at WHO HIV stage 1 or 2 at ART initiation and those at stage 3 (sHR 0.98, 95% CI: 0.60–1.58) or stage 4 (sHR 1.24, 95% CI: 0.66–2.31).
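The cumulative incidence of dropout in the presence of competing events (death, transfer out) can be computed nonparametrically with the Aalen-Johansen estimator; the naive Kaplan-Meier complement overestimates it when competing events occur. A minimal pure-Python sketch on hypothetical data, not the study's dataset:

```python
def cumulative_incidence(times, causes, cause_of_interest=1):
    """Aalen-Johansen cumulative incidence for one event type.

    times:  follow-up time per subject.
    causes: 0 = censored, 1 = dropout, 2 = competing event
            (e.g., death or transfer out).
    Returns (time, cumulative incidence of cause_of_interest).
    """
    data = sorted(zip(times, causes))
    n = len(data)
    surv = 1.0   # probability of being event-free just before t
    cif = 0.0
    out = []
    idx = 0
    while idx < n:
        t = data[idx][0]
        same = [c for tt, c in data if tt == t]
        at_risk = n - idx
        d_interest = sum(1 for c in same if c == cause_of_interest)
        d_any = sum(1 for c in same if c != 0)
        if d_any > 0:
            cif += surv * d_interest / at_risk
            surv *= 1 - d_any / at_risk
            out.append((t, cif))
        idx += len(same)
    return out

# Hypothetical data: one dropout, one death, one dropout, one censored.
print(cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0]))
# -> approximately [(1, 0.25), (2, 0.25), (3, 0.5)]
```

The Fine-Gray subdistribution hazard model reported above is the regression counterpart of this estimator; fitting it requires a statistics package rather than this hand-rolled sketch.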

As a result of the combined motivational messages and automated visit reminders of the 2wT intervention, retention in ART care at 12 months was 15 percentage points higher among 2wT participants than among SoC clients. At any point over the first year on ART, those with 2wT support were 65% less likely to drop out of ART care than their SoC peers. In both arms, women were less likely to drop out than men, and older clients were more likely to be retained. Improved retention among 2wT participants was most striking in the first months immediately after ART initiation, a high-risk period when care gaps are most likely [ 39 ]. Still, 2wT effectiveness should not eclipse limitations in 2wT reach: over 50% of those invited to participate were ineligible, largely due to lack of phone access or illiteracy. This evidence informs discussion of the strengths and weaknesses of the 2wT approach and provides guidance for optimizing 2wT as it transitions from research to routine practice.

Several aspects of the 2wT design likely contributed to its success. First, we used a human-centered co-design process with clients, HCWs, and stakeholders, informing iterative improvements that increased the likelihood of matching the right 2wT design to the expressed local need [ 61 ]. Participants perceived the 2wT approach as user-friendly, found responses easy, and appreciated both the motivational messages and visit reminders [ 42 ]. For HCWs, 2wT was highly usable and was perceived to improve their connection with participants, reflecting their needs [ 55 ]. Lighthouse HCWs currently maintain the day-to-day operations and management of the 2wT system independently, a factor that enhances the likelihood of sustaining the intervention [ 62 ]. The open-source 2wT technology itself appears to be the right fit for the low-resource setting. Most participants do not have access to a smartphone, and 2wT requires only a basic phone with SMS capability (no smartphone, no app download, no data plan); for HCWs, it has a web-based interface that runs on commonly available PCs and Android tablets. The workload also suits the low-resource setting: currently, one 2wT officer handles all client interaction for over 400 participants, whereas in SoC each ART Buddy is assigned up to 15 new ART clients. The hybrid automated-and-manual 2wT design relies heavily on automated reminders, both before visits and after missed appointments, keeping the workload of direct participant-HCW interaction at manageable levels.

The intervention messages and their scheduling also likely improved outcomes. First, 2wT messages were informed by health behavior theory, and message content was adapted alongside software optimization to strengthen both [ 63 ]. Over two decades of research demonstrates the importance of a strong theoretical model to provide a rationale for how interventions may influence behaviors [ 64 ]. We suggest that 2wT content helped improve participant motivation, behavioral control, and self-efficacy, aiming to create positive habits of ART adherence from initiation onward. Second, the cadence of 2wT messaging appears to suit participants. Although another mHealth intervention with weekly response-requested messaging found no effect on 12-month retention [ 65 ], 2wT included weekly motivational or educational messages that did not request a response, and only required participants to interact with the system or HCWs via a single “1 = yes” or “0 = no” reply to confirm visits a few days before appointments, with the option to interact more if needed or desired. The 2wT nudge and minimal participant effort likely lessened the fatigue that more demanding messaging could cause [ 13 ]. Furthermore, a previous mHealth qualitative study noted that fears of unintended disclosure from HIV-related message content could reduce SMS intervention participation or uptake [ 11 ]. Using suggestions from current 2wT users, 2wT may have found the right mix of educational and motivational content without HIV-related messaging.

Despite its effectiveness, effort will be needed to expand 2wT reach, as only 44% of those screened for participation met the 2wT eligibility criteria. Access to mobile phones in Malawi was estimated at 60% in 2022 [ 66 ], with access among females likely lower [ 67 ]. However, mobile phone ownership is expected to rise, potentially reducing these concerns in the future. Likewise, as more than 25% of adults in Malawi are unable to read or write [ 68 ], future voice functionality in 2wT or improved “flash” features (calling a number to trigger a voice call back) could expand reach for clients with low literacy [ 13 ]. As 2wT participants noted a preference for SMS over calls, given that messages are discreet and do not require a participant to pick up or attend to them at a specific time [ 42 ], voice should not replace SMS but augment existing options. Additive retention models, where clients can have more than one form of retention support, may also lead to gains [ 12 ], improving reach and impact as 2wT moves from research to routine practice. 2wT should complement rather than replace other retention support initiatives such as the SoC ART Buddies, whose firsthand experience navigating challenges related to HIV may provide a valuable source of peer support for newly initiated clients. Expanding 2wT enrolment to any client on ART, including clients ages 15 and older (the age of consent in Malawi), combined with efforts to improve 2wT awareness among LT clients who come on evenings and weekends (when the study team was not available), could improve uptake of 2wT retention support. A forthcoming costing study of 2wT versus SoC retention approaches will shed light on the feasibility and cost-effectiveness of expanding 2wT and ART Buddy support [ 69 ], providing Lighthouse with guidance to drive retention program decision-making.

Limitations

Our findings should be considered in light of several limitations. First, the 2wT effect could diminish over time [ 70 ], and future studies of 2wT adaptation, flexibility, or optimization should explore how to maintain the early retention gains. Second, despite successful matching, using a historical comparison group may pose a threat to internal validity, especially given the different temporal influences of COVID-19 on participants. Third, the extent to which the two study cohorts received the intended retention support is unknown: whether 2wT participants actually received 2wT (were SMS read?) and whether PLHIV buddies actually called SoC participants (did ART Buddies provide the intended support?), calling for future fidelity investigation of both interventions in practice. Moreover, 2wT was opt-in, allowing those who were eligible and interested to volunteer for the retention support; participants who choose an intervention are likely more open and responsive to it. Additionally, 20% of 2wT participants were enrolled during a subsequent clinic visit after their ART initiation, which may have introduced selection bias: by virtue of still being in care at that point, these participants likely represent a group with greater engagement than the general ART population, which includes clients who do not attend care beyond their initiation visit. Inclusion of CD4 count data was not possible due to a large quantity of missing data; however, CD4 data would be valuable in future analyses once CD4 machines are integrated with the EMRS and provide quality data. Lastly, we did not include the outcomes of those who transferred out, as we lacked resources to track clients at other clinics. Despite these limitations, the strengths of this quasi-experimental design in the routine Malawi ART setting suggest that this specific 2wT approach may improve early retention among the sizeable population who wish to opt in.

At a high-volume, routine ART clinic in Malawi, the proactive, low-intensity 2wT approach improved 12-month retention among new ART initiates who enrolled. 2wT should be scaled as part of, not a replacement for, complementary retention efforts in routine ART settings in Malawi. Even with sub-optimal reach, adoption of the 2wT approach as a component of routine retention efforts could benefit the ART client population as a whole by freeing existing HCWs to trace more clients presumed LTFU, returning more clients to care. More retention choices could also help cater to the diverse preferences and practicalities of retaining clients on ART over time. Given the large client volume of LT clinics, Lighthouse’s leadership, and continued MoH collaboration, expansion of the 2wT retention approach for both new and existing ART clients could positively impact overall ART program success at scale.

Supporting information

S1 File. Dataset.

Retention outcomes dataset in CSV format.

https://doi.org/10.1371/journal.pone.0298494.s001

Acknowledgments

We would like to acknowledge the study participants, the study team (Kondwani Masiye, Harrison Chirwa, Blessings Wandira, Daniel Mwakanema, Madalitso Chawanje, William Maziya, Isaac Nyirenda), colleagues from Medic (Maryanne Mureithi, Femi Oni, Beatrice Wasunna, Edwin Kagereki, Adinan Alhassan, Kawere Wagaba, Evelyn Waweru, and Mourice Barasa), the MPC clinic, M&E teams, B2C team, and MoH staff at Bwaila for their invaluable contribution to the study.

  • 1. UNAIDS. Global AIDS Strategy 2021–2026: End Inequalities, End AIDS. 2021. Available from: https://www.unaids.org/en/resources/documents/2021/2021-2026-global-AIDS-strategy .
  • 4. UNAIDS. In Danger: UNAIDS Global AIDS Update 2022. 2022. Available from: https://www.unaids.org/sites/default/files/media_asset/2022-global-aids-update_en.pdf .
  • 8. World Health Organization. WHO guideline: recommendations on digital interventions for health system strengthening: web supplement 2: summary of findings and GRADE tables. World Health Organization; 2019.
  • 31. World Health Organization. Technical update: considerations for developing a monitoring and evaluation framework for viral load testing: collecting and using data for scale-up and outcomes. World Health Organization; 2019.
  • 32. UNAIDS. Country overview: Malawi. Geneva: UNAIDS; 2022 [cited 2023 August 6]. Available from: https://www.unaids.org/en/regionscountries/countries/malawi .
  • 46. Malawi Ministry of Health. Government of Malawi Ministry of Health Integrated HIV Program Report 2022 Q1. 2022. Available from: https://dms.hiv.health.gov.mw/dataset/malawi-integrated-hiv-program-report-2022-q1/resource/44d5a99e-1745-4a4e-b973-37f45caad990 .
  • 48. Fogg BJ. A behavior model for persuasive design. In: Proceedings of the 4th International Conference on Persuasive Technology; 2009.
  • 50. Fishbein M, Ajzen I. Predicting and changing behavior: the reasoned action approach. Taylor & Francis; 2011.
  • 51. Glanz K, Rimer BK, Viswanath K. Health behavior and health education: theory, research, and practice. John Wiley & Sons; 2008.
  • 53. Principles for Digital Development. 2022 [cited 2022 August]. Available from: https://digitalprinciples.org/ .
  • 54. CHT Core Framework, version 4.5.0. Community Health Toolkit: Medic; 2023.
  • 56. Malawi Ministry of Health. Clinical management of HIV in children and adults: Malawi integrated guidelines and standard operating procedures for providing HIV services. Lilongwe: Ministry of Health, Malawi; 2022.
  • 66. Statista. Number of mobile cellular subscriptions per 100 inhabitants in Malawi from 2000 to 2022. 2023 [updated July 2023; cited January 10, 2024]. Available from: https://www.statista.com/statistics/509554/mobile-cellular-subscriptions-per-100-inhabitants-in-malawi/ .
  • 68. UNICEF. Malawi Education Fact Sheet. Geneva: UNICEF; 2022 [cited 2023 March 17]. Available from: https://data.unicef.org/wp-content/uploads/2022/12/2022Malawi_Factsheet_InDesign-FINAL-2.pdf .


Chapter 7: Nonexperimental Research

Quasi-Experimental Research

Learning Objectives

  • Explain what quasi-experimental research is and distinguish it clearly from both experimental and correlational research.
  • Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one.

The prefix  quasi  means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). [1] Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here.

Nonequivalent Groups Design

Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A  nonequivalent groups design , then, is a between-subjects design in which participants have not been randomly assigned to conditions.

Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students and a control group consisting of another class of third-grade students. This design would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams’s class. Or the principal might have assigned the “troublemakers” to Mr. Jones’s class because he is a stronger disciplinarian. Of course, the teachers’ styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes’ knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.

Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.
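One common way to formalize "as similar as possible" is to match units on an observed covariate, such as the standardized math test score mentioned above. Below is a minimal sketch of greedy 1:1 nearest-neighbor matching on a single covariate; the IDs and scores are hypothetical, and real studies typically match on several covariates or on a propensity score.

```python
def greedy_match(treated, controls):
    """Greedy 1:1 nearest-neighbor matching on one covariate.

    treated, controls: lists of (id, score) pairs. Each treated unit
    is matched to the closest still-unused control unit.
    Returns a list of (treated_id, control_id) pairs.
    """
    available = dict(controls)
    pairs = []
    for tid, score in treated:
        if not available:
            break  # ran out of controls
        cid = min(available, key=lambda c: abs(available[c] - score))
        pairs.append((tid, cid))
        del available[cid]
    return pairs

# Hypothetical pretest scores:
print(greedy_match([("t1", 50), ("t2", 80)],
                   [("c1", 79), ("c2", 52), ("c3", 10)]))
# -> [('t1', 'c2'), ('t2', 'c1')]
```

Matching reduces observed imbalance between nonequivalent groups, but as the text notes, it cannot rule out confounding by variables that were not measured.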

Pretest-Posttest Design

In a  pretest-posttest design , the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an antidrug education program on elementary school students’ attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the antidrug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.

If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of  history . Other things might have happened between the pretest and the posttest. Perhaps an antidrug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it. Another category of alternative explanations goes under the name of  maturation . Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might become less impulsive or better reasoners and this might be responsible for the change.

Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean . This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect.

A closely related concept—and an extremely important one in psychological research—is spontaneous remission . This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001) [2] . Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
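The bowling and fractions examples can be reproduced with a short simulation: select the extreme scorers on one occasion and watch their average "improve" on the next occasion with no treatment at all. The population parameters below are hypothetical.

```python
import random

def regression_to_mean_demo(n=10_000, seed=1):
    """Two testing occasions with no treatment in between.

    Observed score = stable ability + random noise. Selecting the
    worst scorers on test 1 guarantees they improve, on average,
    on test 2: pure regression to the mean.
    """
    rng = random.Random(seed)
    abilities = [rng.gauss(100, 10) for _ in range(n)]
    test1 = [(a + rng.gauss(0, 10), a) for a in abilities]
    low = sorted(test1)[: n // 10]            # bottom decile on test 1
    test2 = [a + rng.gauss(0, 10) for _, a in low]
    mean1 = sum(score for score, _ in low) / len(low)
    mean2 = sum(test2) / len(test2)
    return mean1, mean2

m1, m2 = regression_to_mean_demo()
# m2 is substantially higher than m1 even though nothing was done.
```

The selected group's retest mean rises simply because their unusually bad luck on test 1 does not repeat, which is exactly the confound a control group guards against.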

Does Psychotherapy Work?

Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952) [3] . But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate  without  receiving psychotherapy. This parallel suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here: Classics in the History of Psychology .

Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980) [4] . They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.

Interrupted Time Series Design

A variant of the pretest-posttest design is the interrupted time-series design . A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this one is “interrupted” by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979) [5] . Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.

Figure 7.3 shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of  Figure 7.3 shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of  Figure 7.3 shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.
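The logic of Figure 7.3 can be summarized numerically: compare the pre- and post-interruption means against normal week-to-week variation. A sketch using hypothetical absence counts in the spirit of the figure's top panel:

```python
def its_summary(series, interruption):
    """Summarize an interrupted time series.

    series: one measurement per period (e.g., absences per week).
    interruption: index of the first post-treatment period.
    Returns pre/post means and standard deviations, so a pre/post
    difference can be judged against normal period-to-period variation.
    """
    pre, post = series[:interruption], series[interruption:]

    def mean(xs):
        return sum(xs) / len(xs)

    def sd(xs):
        m = mean(xs)
        return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

    return {"pre_mean": mean(pre), "post_mean": mean(post),
            "pre_sd": sd(pre), "post_sd": sd(post)}

# Hypothetical absences per week, treatment starting in week 8:
absences = [6, 8, 5, 7, 6, 8, 7,   # weeks 1-7, before treatment
            2, 1, 3, 0, 2, 1, 2]   # weeks 8-14, after treatment
summary = its_summary(absences, interruption=7)
```

Here the drop in the mean is several times the week-to-week standard deviation, the pattern the top panel of Figure 7.3 illustrates; in the bottom-panel scenario the two means would differ by less than the normal variation.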

Figure 7.3 (image description below)

Combination Designs

A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does  not  receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve  more  than participants who do not receive the treatment.

Imagine, for example, that students in one school are given a pretest on their attitudes toward drugs, then are exposed to an antidrug program, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to an antidrug program, and finally are given a posttest. Again, if students in the treatment condition become more negative toward drugs, this change in attitude could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should become more negative than students in the control condition. But if it is a matter of history (e.g., news of a celebrity drug overdose) or maturation (e.g., improved reasoning), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a student drug overdose), so students at the first school would be affected by it while students at the other school would not.
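The comparison this combination design estimates, how much more the treatment group changed than the control group changed, is a difference-in-differences. A minimal sketch with hypothetical attitude scores (lower = more negative toward drugs):

```python
def difference_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences for a pretest-posttest design with a
    nonequivalent control group: (treatment change) - (control change).
    Each argument is a list of scores for one group at one time point.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    return ((mean(treat_post) - mean(treat_pre))
            - (mean(ctrl_post) - mean(ctrl_pre)))

# Hypothetical drug-attitude scores at two schools:
effect = difference_in_differences(
    treat_pre=[10, 12, 11], treat_post=[6, 8, 7],    # dropped by 4
    ctrl_pre=[10, 12, 11],  ctrl_post=[9, 11, 10],   # dropped by 1
)
# effect == -3.0: the treatment group became 3 points more negative
# than history or maturation alone (the control change) would predict.
```

Subtracting the control group's change removes whatever history and maturation contributed to both groups, though, as the text notes, it cannot remove events that affected only one school.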

Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi-experiment. In fact, it is the kind of experiment that Eysenck called for—and that has now been conducted many times—to demonstrate the effectiveness of psychotherapy.

Key Takeaways

  • Quasi-experimental research involves the manipulation of an independent variable without the random assignment of participants to conditions or orders of conditions. Among the important types are nonequivalent groups designs, pretest-posttest, and interrupted time-series designs.
  • Quasi-experimental research eliminates the directionality problem because it involves the manipulation of the independent variable. It does not eliminate the problem of confounding variables, however, because it does not involve random assignment to conditions. For these reasons, quasi-experimental research is generally higher in internal validity than correlational studies but lower than true experiments.
  • Practice: Imagine that two professors decide to test the effect of giving daily quizzes on student performance in a statistics course. They decide that Professor A will give quizzes but Professor B will not. They will then compare the performance of students in their two sections on a common final exam. List five other variables that might differ between the two sections that could affect the results.

Image Descriptions

Figure 7.3 image description: Two line graphs charting the number of absences per week over 14 weeks. The first 7 weeks are without treatment and the last 7 weeks are with treatment. In the first line graph, there are between 4 to 8 absences each week. After the treatment, the absences drop to 0 to 3 each week, which suggests the treatment worked. In the second line graph, there is no noticeable change in the number of absences per week after the treatment, which suggests the treatment did not work. [Return to Figure 7.3]

  • Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin. ↵
  • Posternak, M. A., & Miller, I. (2001). Untreated short-term course of major depression: A meta-analysis of studies using outcomes from studies using wait-list control groups. Journal of Affective Disorders, 66 , 139–146. ↵
  • Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16 , 319–324. ↵
  • Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy . Baltimore, MD: Johns Hopkins University Press. ↵

Glossary

Nonequivalent groups design: A between-subjects design in which participants have not been randomly assigned to conditions.

Pretest-posttest design: A design in which the dependent variable is measured once before the treatment is implemented and once after it is implemented.

History: A category of alternative explanations for differences between scores, such as events that happened between the pretest and posttest, unrelated to the study.

Maturation: An alternative explanation that refers to how the participants might have changed between the pretest and posttest in ways that they were going to anyway because they are growing and learning.

Regression to the mean: The statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion.

Spontaneous remission: The tendency for many medical and psychological problems to improve over time without any form of treatment.

Interrupted time series: A set of measurements taken at intervals over a period of time that are interrupted by a treatment.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Nutritional support clinical efficacy in tuberculosis: quasi-experimental study

  • Hong Zhou 1 ,
  • Chuan Zhao 1 ,
  • Min Tan 1 ,
  • Li Shu 2 and
  • Feng Yang 1 (ORCID: 0009-0003-9857-9574)
  • 1 Department of Infectious Diseases, Suining Central Hospital, Suining, China
  • 2 Suining Central Hospital, Suining, Sichuan, China
  • Correspondence to Feng Yang; cnlyf520@sina.com

Objective This study aimed to investigate the impact of nutritional support on the clinical efficacy in hospitalised tuberculosis patients with nutritional risk.

Methods We selected a total of 266 eligible patients with tuberculosis for the experimental group and 190 patients for the control group. Patients in the intervention group received an adjusted dietary structure, enteral nutrition via oral intake or gastric tube, total parenteral nutrition, or combined enteral and parenteral nutrition. We recorded various factors, including age, sex, underlying disease, tuberculosis type, nutritional risk at admission, serum albumin (ALB), body mass index, complications during hospitalisation, nutritional support status, serum ALB before discharge and length of hospital stay.

Results The incidences of nutritional risk in the control and experimental groups were 64.41% and 64.72%, respectively, with no statistically significant differences in baseline characteristics. The occurrence rates of complications and secondary infections in the experimental group were 57.89% and 51.5%, respectively, which were significantly lower than the control group’s rates of 70.00% and 56.31%. These differences were statistically significant. The experimental group had a significantly shorter hospital stay (16.5±7.54 days) compared with the control group (19.55±7.33 days). Furthermore, the serum ALB levels of patients in the experimental group were higher on discharge than at admission.

Conclusion Hospitalised patients with tuberculosis often face a high incidence of nutritional risk. However, the implementation of standardised nutritional support treatment has shown promising results in improving the nutritional status of tuberculosis patients with nutritional risk. This approach not only helps reduce the occurrence of complications but also enhances short-term prognosis and improves overall clinical efficacy.
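As a rough sanity check on the complication-rate comparison reported above, a 2x2 chi-square statistic can be computed from counts back-calculated from the reported percentages (57.89% of 266 ≈ 154 patients; 70.00% of 190 = 133). This is only an illustrative sketch, not a reanalysis of the study data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for
    the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Experimental group: 154 of 266 with complications; control: 133 of 190.
chi2 = chi_square_2x2(154, 266 - 154, 133, 190 - 133)
print(round(chi2, 2))  # exceeds 3.84, the 5% critical value for 1 df
```

A statistic above 3.84 is consistent with the authors' report that the difference in complication rates is statistically significant at the 5% level.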

  • Chronic conditions
  • Clinical assessment
  • Hospital care
  • Respiratory conditions
  • Supportive care

Data availability statement

Data are available upon reasonable request.

https://doi.org/10.1136/spcare-2023-004608


Contributors All authors contributed significantly to designing and writing the manuscript. YL acted as guarantor.

Funding Sichuan Medical Research Youth Innovation Project: Clinical Application of Nutrition Risk Screening in tuberculosis Patients, Q17077

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer-reviewed.


J Am Med Inform Assoc. v.13(1); Jan-Feb 2006

The Use and Interpretation of Quasi-Experimental Studies in Medical Informatics


Quasi-experimental study designs, often described as nonrandomized, pre-post intervention studies, are common in the medical informatics literature. Yet little has been written about the benefits and limitations of the quasi-experimental approach as applied to informatics studies. This paper outlines a relative hierarchy and nomenclature of quasi-experimental study designs that is applicable to medical informatics intervention studies. In addition, the authors performed a systematic review of two medical informatics journals, the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI), to determine the number of quasi-experimental studies published and how the studies are classified on the above-mentioned relative hierarchy. They hope that future medical informatics studies will implement higher level quasi-experimental study designs that yield more convincing evidence for causal links between medical informatics interventions and outcomes.

Quasi-experimental studies encompass a broad range of nonrandomized intervention studies. These designs are frequently used when it is not logistically feasible or ethical to conduct a randomized controlled trial. Two examples illustrate the approach. In the first, a hospital introduces a new order-entry system and wishes to study the impact of this intervention on the number of medication-related adverse events before and after the intervention. In the second, an informatics technology group introduces a pharmacy order-entry system aimed at decreasing pharmacy costs; the intervention is implemented, and pharmacy costs before and after the intervention are measured.

In medical informatics, the quasi-experimental, sometimes called the pre-post intervention, design often is used to evaluate the benefits of specific interventions. The increasing capacity of health care institutions to collect routine clinical data has led to the growing use of quasi-experimental study designs in the field of medical informatics as well as in other medical disciplines. However, little is written about these study designs in the medical literature or in traditional epidemiology textbooks. 1 , 2 , 3 In contrast, the social sciences literature is replete with examples of ways to implement and improve quasi-experimental studies. 4 , 5 , 6

In this paper, we review the different pretest-posttest quasi-experimental study designs, their nomenclature, and the relative hierarchy of these designs with respect to their ability to establish causal associations between an intervention and an outcome. The example of a pharmacy order-entry system aimed at decreasing pharmacy costs will be used throughout this article to illustrate the different quasi-experimental designs. We discuss limitations of quasi-experimental designs and offer methods to improve them. We also perform a systematic review of four years of publications from two informatics journals to determine the number of quasi-experimental studies, classify these studies into their application domains, determine whether the potential limitations of quasi-experimental studies were acknowledged by the authors, and place these studies into the above-mentioned relative hierarchy.

The authors reviewed articles and book chapters on the design of quasi-experimental studies. 4 , 5 , 6 , 7 , 8 , 9 , 10 Most of the reviewed articles referenced two textbooks that were then reviewed in depth. 4 , 6

Key advantages and disadvantages of quasi-experimental studies, as they pertain to the study of medical informatics, were identified. The potential methodological flaws of quasi-experimental medical informatics studies, which have the potential to introduce bias, were also identified. In addition, a summary table outlining a relative hierarchy and nomenclature of quasi-experimental study designs is described. In general, the higher the design is in the hierarchy, the greater the internal validity that the study traditionally possesses because the evidence of the potential causation between the intervention and the outcome is strengthened. 4

We then performed a systematic review of four years of publications from two informatics journals. First, we determined the number of quasi-experimental studies. We then classified these studies on the above-mentioned hierarchy. We also classified the quasi-experimental studies according to their application domain. The categories of application domains employed were based on categorization used by Yearbooks of Medical Informatics 1992–2005 and were similar to the categories of application domains employed by Annual Symposiums of the American Medical Informatics Association. 11 The categories were (1) health and clinical management; (2) patient records; (3) health information systems; (4) medical signal processing and biomedical imaging; (5) decision support, knowledge representation, and management; (6) education and consumer informatics; and (7) bioinformatics. Because the quasi-experimental study design has recognized limitations, we sought to determine whether authors acknowledged the potential limitations of this design. Examples of acknowledgment included mention of lack of randomization, the potential for regression to the mean, the presence of temporal confounders and the mention of another design that would have more internal validity.

All original scientific manuscripts published between January 2000 and December 2003 in the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI) were reviewed. One author (ADH) reviewed all the papers to identify the number of quasi-experimental studies. Other authors (ADH, JCM, JF) then independently reviewed all the studies identified as quasi-experimental. The three authors then convened as a group to resolve any disagreements in study classification, application domain, and acknowledgment of limitations.

Results and Discussion

What Is a Quasi-experiment?

Quasi-experiments are studies that aim to evaluate interventions but that do not use randomization. Similar to randomized trials, quasi-experiments aim to demonstrate causality between an intervention and an outcome. Quasi-experimental studies can use both preintervention and postintervention measurements as well as nonrandomly selected control groups.

Using this basic definition, it is evident that many published studies in medical informatics utilize the quasi-experimental design. Although the randomized controlled trial is generally considered to have the highest level of credibility with regard to assessing causality, in medical informatics, researchers often choose not to randomize the intervention for one or more reasons: (1) ethical considerations, (2) difficulty of randomizing subjects, (3) difficulty of randomizing by location (e.g., by ward), and (4) a small available sample size. Each of these reasons is discussed below.

Ethical considerations typically will not allow random withholding of an intervention with known efficacy. Thus, if the efficacy of an intervention has not been established, a randomized controlled trial is the design of choice to determine efficacy. But if the intervention under study incorporates an accepted, well-established therapeutic intervention, or if the intervention has either questionable efficacy or safety based on previously conducted studies, then the ethical issues of randomizing patients are sometimes raised. In the area of medical informatics, it is often believed prior to an implementation that an informatics intervention will likely be beneficial and thus medical informaticians and hospital administrators are often reluctant to randomize medical informatics interventions. In addition, there is often pressure to implement the intervention quickly because of its believed efficacy, thus not allowing researchers sufficient time to plan a randomized trial.

For medical informatics interventions, it is often difficult to randomize the intervention to individual patients or to individual informatics users. So while this randomization is technically possible, it is underused and thus compromises the eventual strength of concluding that an informatics intervention resulted in an outcome. For example, randomly allowing only half of medical residents to use pharmacy order-entry software at a tertiary care hospital is a scenario that hospital administrators and informatics users may not agree to for numerous reasons.

Similarly, informatics interventions often cannot be randomized to individual locations. Using the pharmacy order-entry system example, it may be difficult to randomize use of the system to only certain locations in a hospital or portions of certain locations. For example, if the pharmacy order-entry system involves an educational component, then people may apply the knowledge learned to nonintervention wards, thereby potentially masking the true effect of the intervention. When a design using randomized locations is employed successfully, the locations may be different in other respects (confounding variables), and this further complicates the analysis and interpretation.

In situations where it is known that only a small sample size will be available to test the efficacy of an intervention, randomization may not be a viable option. Randomization is beneficial because on average it tends to evenly distribute both known and unknown confounding variables between the intervention and control group. However, when the sample size is small, randomization may not adequately accomplish this balance. Thus, alternative design and analytical methods are often used in place of randomization when only small sample sizes are available.
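The small-sample point can be made concrete with a quick simulation (purely illustrative; the arm sizes, confounder prevalence, and trial counts below are assumptions, not from the paper): with 10 subjects per arm, random assignment routinely leaves a binary confounder noticeably imbalanced between arms, while with 500 per arm the arms are nearly identical on average.

```python
import random

random.seed(42)

def mean_imbalance(n, trials=2000):
    """Average absolute difference in the prevalence of a binary
    confounder between two randomly assigned arms of size n each."""
    total = 0.0
    for _ in range(trials):
        # Each subject carries the confounder with probability 0.5.
        subjects = [random.random() < 0.5 for _ in range(2 * n)]
        random.shuffle(subjects)
        arm_a, arm_b = subjects[:n], subjects[n:]
        total += abs(sum(arm_a) / n - sum(arm_b) / n)
    return total / trials

small = mean_imbalance(10)    # two arms of 10 subjects each
large = mean_imbalance(500)   # two arms of 500 subjects each
print(f"n=10: {small:.3f}, n=500: {large:.3f}")  # small-n imbalance is much larger
```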

What Are the Threats to Establishing Causality When Using Quasi-experimental Designs in Medical Informatics?

The lack of random assignment is the major weakness of the quasi-experimental study design. Associations identified in quasi-experiments meet one important requirement of causality since the intervention precedes the measurement of the outcome. Another requirement is that the outcome can be demonstrated to vary statistically with the intervention. Unfortunately, statistical association does not imply causality, especially if the study is poorly designed. Thus, in many quasi-experiments, one is most often left with the question: “Are there alternative explanations for the apparent causal association?” If these alternative explanations are credible, then the evidence of causation is less convincing. These rival hypotheses, or alternative explanations, arise from principles of epidemiologic study design.

Shadish et al. 4 outline nine threats to internal validity, listed below. Internal validity is defined as the degree to which observed changes in outcomes can be correctly inferred to be caused by an exposure or an intervention. In quasi-experimental studies of medical informatics, we believe that the methodological principles that most often result in alternative explanations for the apparent causal effect include (a) difficulty in measuring or controlling for important confounding variables, particularly unmeasured confounding variables, which can be viewed as a subset of the selection threat listed below; and (b) results being explained by the statistical principle of regression to the mean. Each of these latter two principles is discussed in turn.

Threats to Internal Validity

1. Ambiguous temporal precedence: Lack of clarity about whether intervention occurred before outcome
2. Selection: Systematic differences over conditions in respondent characteristics that could also cause the observed effect
3. History: Events occurring concurrently with intervention could cause the observed effect
4. Maturation: Naturally occurring changes over time could be confused with a treatment effect
5. Regression: When units are selected for their extreme scores, they will often have less extreme subsequent scores, an occurrence that can be confused with an intervention effect
6. Attrition: Loss of respondents can produce artifactual effects if that loss is correlated with intervention
7. Testing: Exposure to a test can affect scores on subsequent exposures to that test
8. Instrumentation: The nature of a measurement may change over time or conditions
9. Interactive effects: The impact of an intervention may depend on the level of another intervention

Adapted from Shadish et al. 4

An inability to sufficiently control for important confounding variables arises from the lack of randomization. A variable is a confounding variable if it is associated with the exposure of interest and is also associated with the outcome of interest; the confounding variable leads to a situation where a causal association between a given exposure and an outcome is observed as a result of the influence of the confounding variable. For example, in a study aiming to demonstrate that the introduction of a pharmacy order-entry system led to lower pharmacy costs, there are a number of important potential confounding variables (e.g., severity of illness of the patients, knowledge and experience of the software users, other changes in hospital policy) that may have differed in the preintervention and postintervention time periods (see the figure below). In a multivariable regression, the first confounding variable could be addressed with severity of illness measures, but the second confounding variable would be difficult if not nearly impossible to measure and control. In addition, potential confounding variables that are unmeasured or immeasurable cannot be controlled for in nonrandomized quasi-experimental study designs and can only be properly controlled by the randomization process in randomized controlled trials.

[Figure: Example of confounding. To get the true effect of the intervention of interest, we need to control for the confounding variable.]
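The pharmacy-cost example can be sketched numerically. In the simulation below (all parameters are invented for illustration), the case mix is sicker before the intervention than after it, so the naive pre-post difference overstates the system's true effect; stratifying the comparison on severity approximately recovers it:

```python
import random

random.seed(0)

TRUE_EFFECT = -50.0  # hypothetical cost reduction per patient from the system

def simulate_period(n, post, p_high_severity):
    """Simulated per-patient pharmacy costs. High severity adds cost, and its
    prevalence differs between periods, confounding the naive comparison."""
    rows = []
    for _ in range(n):
        high = random.random() < p_high_severity
        cost = 500.0 + (200.0 if high else 0.0) + random.gauss(0, 20)
        if post:
            cost += TRUE_EFFECT
        rows.append((high, cost))
    return rows

# Sicker case mix before the intervention than after.
pre = simulate_period(5000, post=False, p_high_severity=0.6)
post = simulate_period(5000, post=True, p_high_severity=0.3)

def mean(xs):
    return sum(xs) / len(xs)

naive = mean([c for _, c in post]) - mean([c for _, c in pre])

# Severity-stratified estimate: compare within strata, then average.
strata = []
for high in (False, True):
    d = (mean([c for h, c in post if h == high])
         - mean([c for h, c in pre if h == high]))
    strata.append(d)
adjusted = mean(strata)

print(f"naive: {naive:.1f}, adjusted: {adjusted:.1f}")  # naive overstates the saving
```

The naive estimate attributes the shift in case mix to the intervention; the stratified estimate lands near the built-in effect of -50.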

Another important threat to establishing causality is regression to the mean. 12 , 13 , 14 This widespread statistical phenomenon can result in wrongly concluding that an effect is due to the intervention when in reality it is due to chance. The phenomenon was first described in 1886 by Francis Galton who measured the adult height of children and their parents. He noted that when the average height of the parents was greater than the mean of the population, the children tended to be shorter than their parents, and conversely, when the average height of the parents was shorter than the population mean, the children tended to be taller than their parents.

In medical informatics, what often triggers the development and implementation of an intervention is a rise in the rate above the mean or norm. For example, increasing pharmacy costs and adverse events may prompt hospital informatics personnel to design and implement pharmacy order-entry systems. If this rise in costs or adverse events is really just an extreme observation that is still within the normal range of the hospital's pharmaceutical costs (i.e., the mean pharmaceutical cost for the hospital has not shifted), then the statistical principle of regression to the mean predicts that these elevated rates will tend to decline even without intervention. However, often informatics personnel and hospital administrators cannot wait passively for this decline to occur. Therefore, hospital personnel often implement one or more interventions, and if a decline in the rate occurs, they may mistakenly conclude that the decline is causally related to the intervention. In fact, an alternative explanation for the finding could be regression to the mean.
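Regression to the mean is easy to demonstrate with a simulation (illustrative only; the normal distributions and the 1.5 selection threshold are assumptions): among units selected because their first measurement was extreme, the second measurement falls substantially even though no intervention occurred.

```python
import random

random.seed(1)

# A quantity (e.g., a cost overrun) measured twice for many units; the two
# measurements share a stable component but each has independent noise.
n = 20000
first, second = [], []
for _ in range(n):
    stable = random.gauss(0, 1)
    first.append(stable + random.gauss(0, 1))
    second.append(stable + random.gauss(0, 1))

# "Intervene" only where the first measurement looked alarmingly high.
selected = [i for i in range(n) if first[i] > 1.5]
m1 = sum(first[i] for i in selected) / len(selected)
m2 = sum(second[i] for i in selected) / len(selected)
print(f"first: {m1:.2f}, second: {m2:.2f}")  # second mean falls with no intervention
```

An intervention rolled out between the two measurements would appear to have caused this decline, which is exactly the trap described above.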

What Are the Different Quasi-experimental Study Designs?

In the social sciences literature, quasi-experimental studies are divided into four study design groups 4 , 6 :

  • Quasi-experimental designs without control groups
  • Quasi-experimental designs that use control groups but no pretest
  • Quasi-experimental designs that use control groups and pretests
  • Interrupted time-series designs

There is a relative hierarchy within these categories of study designs, with category D studies being sounder than categories C, B, or A in terms of establishing causality. Thus, if feasible from a design and implementation point of view, investigators should aim to design studies that fall into the higher rated categories. Shadish et al. 4 discuss 17 possible designs, with seven falling into category A, three into category B, six into category C, and one major design into category D. In our review, we determined that most medical informatics quasi-experiments could be characterized by 11 of these 17 designs (six in category A, one in category B, three in category C, and one in category D), because the remaining designs were not used or were not feasible in the medical informatics literature. Thus, for simplicity, we have summarized the 11 study designs most relevant to medical informatics research in the table below.

Relative Hierarchy of Quasi-experimental Designs

Quasi-experimental Study Design | Design Notation

A. Quasi-experimental designs without control groups
    1. The one-group posttest-only design: X O1
    2. The one-group pretest-posttest design: O1 X O2
    3. The one-group pretest-posttest design using a double pretest: O1 O2 X O3
    4. The one-group pretest-posttest design using a nonequivalent dependent variable: (O1a, O1b) X (O2a, O2b)
    5. The removed-treatment design: O1 X O2 O3 (remove X) O4
    6. The repeated-treatment design: O1 X O2 (remove X) O3 X O4
B. Quasi-experimental designs that use a control group but no pretest
    1. Posttest-only design with nonequivalent groups:
        Intervention group: X O1
        Control group: O2
C. Quasi-experimental designs that use control groups and pretests
    1. Untreated control group with dependent pretest and posttest samples:
        Intervention group: O1a X O2a
        Control group: O1b O2b
    2. Untreated control group design with dependent pretest and posttest samples using a double pretest:
        Intervention group: O1a O2a X O3a
        Control group: O1b O2b O3b
    3. Untreated control group design with dependent pretest and posttest samples using switching replications:
        Intervention group: O1a X O2a O3a
        Control group: O1b O2b X O3b
D. Interrupted time-series design
    1. Multiple pretest and posttest observations spaced at equal intervals of time: O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

O = Observational Measurement; X = Intervention Under Study. Time moves from left to right.

The nomenclature and relative hierarchy were used in the systematic review of four years of JAMIA and the IJMI. Similar to the relative hierarchy that exists in the evidence-based literature that assigns a hierarchy to randomized controlled trials, cohort studies, case-control studies, and case series, the hierarchy in the table above is not absolute in that in some cases, it may be infeasible to perform a higher level study. For example, there may be instances where an A6 design established stronger causality than a B1 design. 15 , 16 , 17

Quasi-experimental Designs without Control Groups

The One-Group Posttest-Only Design

X O1

Here, X is the intervention and O is the outcome variable (this notation is continued throughout the article). In this study design, an intervention (X) is implemented and a posttest observation (O1) is taken. For example, X could be the introduction of a pharmacy order-entry intervention and O1 could be the pharmacy costs following the intervention. This design is the weakest of the quasi-experimental designs that are discussed in this article. Without any pretest observations or a control group, there are multiple threats to internal validity. Unfortunately, this study design is often used in medical informatics when new software is introduced since it may be difficult to have pretest measurements due to time, technical, or cost constraints.

The One-Group Pretest-Posttest Design

O1 X O2

This is a commonly used study design. A single pretest measurement is taken (O1), an intervention (X) is implemented, and a posttest measurement is taken (O2). In this instance, period O1 frequently serves as the “control” period. For example, O1 could be pharmacy costs prior to the intervention, X could be the introduction of a pharmacy order-entry system, and O2 could be the pharmacy costs following the intervention. Including a pretest provides some information about what the pharmacy costs would have been had the intervention not occurred.

The One-Group Pretest-Posttest Design Using a Double Pretest

O1 O2 X O3

The advantage of this study design over A2 is that adding a second pretest prior to the intervention helps provide evidence that can be used to refute the phenomenon of regression to the mean and confounding as alternative explanations for any observed association between the intervention and the posttest outcome. For example, in a study where a pharmacy order-entry system led to lower pharmacy costs (O3 < O2 and O1), if one had two preintervention measurements of pharmacy costs (O1 and O2) and they were both elevated, this would suggest that there was a decreased likelihood that O3 is lower due to confounding and regression to the mean. Similarly, extending this study design by increasing the number of measurements postintervention could also help to provide evidence against confounding and regression to the mean as alternate explanations for observed associations.

The One-Group Pretest-Posttest Design Using a Nonequivalent Dependent Variable

(O1a, O1b) X (O2a, O2b)

This design involves the inclusion of a nonequivalent dependent variable ( b ) in addition to the primary dependent variable ( a ). Variables a and b should assess similar constructs; that is, the two measures should be affected by similar factors and confounding variables except for the effect of the intervention. Variable a is expected to change because of the intervention X, whereas variable b is not. Taking our example, variable a could be pharmacy costs and variable b could be the length of stay of patients. If our informatics intervention is aimed at decreasing pharmacy costs, we would expect to observe a decrease in pharmacy costs but not in the average length of stay of patients. However, a number of important confounding variables, such as severity of illness and knowledge of software users, might affect both outcome measures. Thus, if the average length of stay did not change following the intervention but pharmacy costs did, then the data are more convincing than if just pharmacy costs were measured.

The Removed-Treatment Design

O1 X O2 O3 (remove X) O4

This design adds a third posttest measurement (O3) to the one-group pretest-posttest design and then removes the intervention before a final measure (O4) is made. The advantage of this design is that it allows one to test hypotheses about the outcome in the presence of the intervention and in the absence of the intervention. Thus, if one predicts a decrease in the outcome between O1 and O2 (after implementation of the intervention), then one would predict an increase in the outcome between O3 and O4 (after removal of the intervention). One caveat is that if the intervention is thought to have persistent effects, then O4 needs to be measured after these effects are likely to have disappeared. For example, a study would be more convincing if it demonstrated that pharmacy costs decreased after pharmacy order-entry system introduction (O2 and O3 less than O1) and that when the order-entry system was removed or disabled, the costs increased (O4 greater than O2 and O3 and closer to O1). In addition, there are often ethical issues in this design in terms of removing an intervention that may be providing benefit.

The Repeated-Treatment Design

O1 X O2 (remove X) O3 X O4

The advantage of this design is that it demonstrates reproducibility of the association between the intervention and the outcome. For example, the association is more likely to be causal if one demonstrates that a pharmacy order-entry system results in decreased pharmacy costs when it is first introduced and again when it is reintroduced following an interruption of the intervention. As for design A5, the assumption must be made that the effect of the intervention is transient, which is most often applicable to medical informatics interventions. Because subjects may serve as their own controls in this design, it may yield greater statistical efficiency with fewer subjects.

Quasi-experimental Designs That Use a Control Group but No Pretest

Posttest-Only Design with Nonequivalent Groups

Intervention group: X O1
Control group: O2

An intervention X is implemented for one group and compared to a second group. The use of a comparison group helps guard against certain threats to validity and allows one to statistically adjust for measured confounding variables. Because the two groups in this design may not be equivalent (assignment to the groups is not by randomization), confounding may exist. For example, suppose that a pharmacy order-entry intervention was instituted in the medical intensive care unit (MICU) and not the surgical intensive care unit (SICU). O1 would be pharmacy costs in the MICU after the intervention and O2 would be pharmacy costs in the SICU after the intervention. The absence of a pretest makes it difficult to know whether a change has occurred in the MICU. Also, the absence of pretest measurements comparing the SICU to the MICU makes it difficult to know whether differences in O1 and O2 are due to the intervention or to other differences between the two units (confounding variables).

Quasi-experimental Designs That Use Control Groups and Pretests

The reader should note that with all the studies in this category, the intervention is not randomized. The control groups chosen are comparison groups. Obtaining pretest measurements on both the intervention and control groups allows one to assess the initial comparability of the groups. The assumption is that the more similar the intervention and control groups are at pretest, the smaller the likelihood that important confounding variables differ between the two groups.

Untreated Control Group with Dependent Pretest and Posttest Samples

Intervention group: O1a X O2a
Control group: O1b O2b

The use of both a pretest and a comparison group makes it easier to avoid certain threats to validity. However, because the two groups are nonequivalent (assignment to the groups is not by randomization), selection bias may exist. Selection bias exists when selection results in differences in unit characteristics between conditions that may be related to outcome differences. For example, suppose that a pharmacy order-entry intervention was instituted in the MICU and not the SICU. If preintervention pharmacy costs in the MICU (O1a) and SICU (O1b) are similar, it suggests that it is less likely that there are differences in the important confounding variables between the two units. If MICU postintervention costs (O2a) are less than preintervention MICU costs (O1a), but SICU costs (O1b) and (O2b) are similar, this suggests that the observed outcome may be causally related to the intervention.
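The comparison logic of this design is essentially a difference-in-differences calculation: the untreated unit's change estimates the shared secular trend, which is subtracted from the intervention unit's change. The sketch below uses invented cost figures for the MICU/SICU example:

```python
# Hypothetical monthly pharmacy costs (in $1000s) for design C1.
micu = {"pre": 520.0, "post": 430.0}   # intervention unit (O1a, O2a)
sicu = {"pre": 515.0, "post": 505.0}   # untreated comparison unit (O1b, O2b)

# Change within each unit, then the difference between those changes.
micu_change = micu["post"] - micu["pre"]   # change in the intervention unit
sicu_change = sicu["post"] - sicu["pre"]   # shared secular trend
did = micu_change - sicu_change            # change attributed to the system
print(did)
```

Here the similar pretest values (520 vs. 515) support the comparability of the two units, and the estimate attributes only the excess MICU decline, beyond the SICU's drift, to the intervention.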

Untreated Control Group Design with Dependent Pretest and Posttest Samples Using a Double Pretest

Intervention group: O1a O2a X O3a
Control group: O1b O2b O3b

In this design, the pretests are administered at two different times. The main advantage of this design is that it controls for potentially different time-varying confounding effects in the intervention group and the comparison group. In our example, measuring points O1 and O2 would allow for the assessment of preintervention, time-dependent changes in pharmacy costs (e.g., due to differences in the experience of residents) in both the intervention and control groups, and whether these changes were similar or different.

Design C3:

O1a X O2a   O3a
O1b   O2b X O3b

With this study design, the researcher administers an intervention at a later time to a group that initially served as a nonintervention control. The advantage of this design over design C2 is that it demonstrates reproducibility in two different settings. This study design is not limited to two groups; in fact, the study results have greater validity if the intervention effect is replicated in different groups at multiple times. In the example of a pharmacy order-entry system, one could intervene in the MICU and then, at a later time, intervene in the SICU. This latter design is often very applicable to medical informatics, where new technology and new software are often introduced or made available gradually.

Interrupted Time-Series Designs

O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

An interrupted time-series design is one in which a string of consecutive observations equally spaced in time is interrupted by the imposition of a treatment or intervention. The advantage of this design is that, with multiple measurements both pre- and postintervention, it is easier to address and control for confounding and regression to the mean. In addition, the analysis is statistically more robust: one can detect changes in the slope or intercept as a result of the intervention, in addition to a change in mean values. 18 A change in intercept could represent an immediate effect, while a change in slope could represent a gradual effect of the intervention on the outcome. In the example of a pharmacy order-entry system, O1 through O5 could represent monthly pharmacy costs preintervention and O6 through O10 monthly pharmacy costs after the introduction of the pharmacy order-entry system. Interrupted time-series designs also can be further strengthened by incorporating many of the design features previously mentioned in other categories (such as removal of the treatment, inclusion of a nondependent outcome variable, or the addition of a control group).
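The change-in-intercept and change-in-slope idea can be made concrete with segmented regression. The sketch below fits a pre/post model to synthetic monthly costs (all numbers are invented, and the model form shown is one common choice, not the only one):

```python
import numpy as np

# Segmented regression sketch for an interrupted time series.
# Synthetic monthly pharmacy costs: O1-O5 preintervention, O6-O10 post;
# a level drop of 40 (and no slope change) is built into the data.
months = np.arange(1, 11, dtype=float)
post = (months >= 6).astype(float)                  # 1 after the intervention
since = np.where(months >= 6, months - 5.0, 0.0)    # time since intervention
costs = 500.0 + 2.0 * months - 40.0 * post + 0.0 * since

# Model: cost = b0 + b1*month + b2*post + b3*since
X = np.column_stack([np.ones_like(months), months, post, since])
beta, *_ = np.linalg.lstsq(X, costs, rcond=None)
b0, trend, level_change, slope_change = beta
# level_change recovers the immediate effect (about -40);
# slope_change recovers the gradual effect (about 0).
```

Here `level_change` corresponds to the change in intercept (immediate effect) and `slope_change` to the change in slope (gradual effect) described above.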

Systematic Review Results

The results of the systematic review are shown in the table below. In the four-year period of JAMIA publications that the authors reviewed, 25 quasi-experimental studies among 22 articles were published. Of these 25, 15 studies were of category A, five studies were of category B, two studies were of category C, and no studies were of category D. Although there were no studies of category D (interrupted time-series analyses), three of the studies classified as category A had data collected that could have been analyzed as an interrupted time series. Nine of the 25 studies (36%) mentioned at least one of the potential limitations of the quasi-experimental study design. In the four-year period of IJMI publications reviewed by the authors, nine quasi-experimental studies among eight manuscripts were published. Of these nine, five studies were of category A, one of category B, one of category C, and two of category D. Two of the nine studies (22%) mentioned at least one of the potential limitations of the quasi-experimental study design.

Systematic Review of Four Years of Quasi-designs in JAMIA and IJMI

Study | Journal | Informatics Topic Category | Quasi-experimental Design | Limitation of Quasi-design Mentioned in Article
Staggers and Kobus | JAMIA | 1 | Counterbalanced study design | Yes
Schriger et al. | JAMIA | 1 | A5 | Yes
Patel et al. | JAMIA | 2 | A5 (study 1, phase 1) | No
Patel et al. | JAMIA | 2 | A2 (study 1, phase 2) | No
Borowitz | JAMIA | 1 | A2 | No
Patterson and Harasym | JAMIA | 6 | C1 | Yes
Rocha et al. | JAMIA | 5 | A2 | Yes
Lovis et al. | JAMIA | 1 | Counterbalanced study design | No
Hersh et al. | JAMIA | 6 | B1 | No
Makoul et al. | JAMIA | 2 | B1 | Yes
Ruland | JAMIA | 3 | B1 | No
DeLusignan et al. | JAMIA | 1 | A1 | No
Mekhjian et al. | JAMIA | 1 | A2 (study design 1) | Yes
Mekhjian et al. | JAMIA | 1 | B1 (study design 2) | Yes
Ammenwerth et al. | JAMIA | 1 | A2 | No
Oniki et al. | JAMIA | 5 | C1 | Yes
Liederman and Morefield | JAMIA | 1 | A1 (study 1) | No
Liederman and Morefield | JAMIA | 1 | A2 (study 2) | No
Rotich et al. | JAMIA | 2 | A2 | No
Payne et al. | JAMIA | 1 | A1 | No
Hoch et al. | JAMIA | 3 | A2 | No
Laerum et al. | JAMIA | 1 | B1 | Yes
Devine et al. | JAMIA | 1 | Counterbalanced study design |
Dunbar et al. | JAMIA | 6 | A1 |
Lenert et al. | JAMIA | 6 | A2 |
Koide et al. | IJMI | 5 | D4 | No
Gonzalez-Hendrich et al. | IJMI | 2 | A1 | No
Anantharaman and Swee Han | IJMI | 3 | B1 | No
Chae et al. | IJMI | 6 | A2 | No
Lin et al. | IJMI | 3 | A1 | No
Mikulich et al. | IJMI | 1 | A2 | Yes
Hwang et al. | IJMI | 1 | A2 | Yes
Park et al. | IJMI | 1 | C2 | No
Park et al. | IJMI | 1 | D4 | No

JAMIA = Journal of the American Medical Informatics Association; IJMI = International Journal of Medical Informatics.

In addition, three studies from JAMIA were based on a counterbalanced design. A counterbalanced design is a higher order study design than other studies in category A. The counterbalanced design is sometimes referred to as a Latin-square arrangement. In this design, all subjects receive all the different interventions but the order of intervention assignment is not random. 19 This design can only be used when the intervention is compared against some existing standard, for example, if a new PDA-based order entry system is to be compared to a computer terminal–based order entry system. In this design, all subjects receive the new PDA-based order entry system and the old computer terminal-based order entry system. The counterbalanced design is a within-participants design, where the order of the intervention is varied (e.g., one group is given software A followed by software B and another group is given software B followed by software A). The counterbalanced design is typically used when the available sample size is small, thus preventing the use of randomization. This design also allows investigators to study the potential effect of ordering of the informatics intervention.
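The order assignments in a counterbalanced design can be generated mechanically as a Latin square. A short sketch, with illustrative system names (the names are not from any study):

```python
# Generate a Latin-square order assignment for a counterbalanced design.
# Each group receives every intervention, each in a different order.
def latin_square(conditions):
    n = len(conditions)
    # Row i is the condition list rotated by i positions.
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

systems = ["PDA order entry", "terminal order entry"]  # illustrative names
for group, order in enumerate(latin_square(systems), start=1):
    print(f"Group {group}: {' -> '.join(order)}")
# Group 1: PDA order entry -> terminal order entry
# Group 2: terminal order entry -> PDA order entry
```

With two conditions this reduces to the familiar A-then-B / B-then-A arrangement; with more conditions the rotation ensures every condition appears in every ordinal position exactly once.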

Although quasi-experimental study designs are ubiquitous in the medical informatics literature, as evidenced by 34 studies in the past four years of the two informatics journals, little has been written about the benefits and limitations of the quasi-experimental approach. As we have outlined in this paper, a relative hierarchy and nomenclature of quasi-experimental study designs exist, with some designs being more likely than others to permit causal interpretations of observed associations. Strengths and limitations of a particular study design should be discussed when presenting data collected in the setting of a quasi-experimental study. Future medical informatics investigators should choose the strongest design that is feasible given the particular circumstances.

Supplementary Material

Dr. Harris was supported by NIH grants K23 AI01752-01A1 and R01 AI60859-01A1. Dr. Perencevich was supported by a VA Health Services Research and Development Service (HSR&D) Research Career Development Award (RCD-02026-1). Dr. Finkelstein was supported by NIH grant RO1 HL71690.

Experimental vs Quasi-Experimental Design: Which to Choose?

Here’s a table that summarizes the similarities and differences between an experimental and a quasi-experimental study design:

 | Experimental Study (a.k.a. Randomized Controlled Trial) | Quasi-Experimental Study
Objective | Evaluate the effect of an intervention or a treatment | Evaluate the effect of an intervention or a treatment
How are participants assigned to groups? | Random assignment | Non-random assignment (participants are assigned according to their own choosing or that of the researcher)
Is there a control group? | Yes | Not always (although, if present, a control group provides better evidence for the study results)
Is there any room for confounding? | No (though post-randomization confounding can still arise in randomized controlled trials) | Yes (however, statistical techniques can be used to study causal relationships in quasi-experiments)
Level of evidence | A randomized trial sits at the highest level in the hierarchy of evidence | A quasi-experiment sits one level below the experimental study in the hierarchy of evidence
Advantages | Minimizes bias and confounding | Can be used in situations where an experiment is not ethically or practically feasible; can work with smaller sample sizes than randomized trials
Limitations | High cost (as it generally requires a large sample size); ethical limitations; generalizability issues; sometimes practically infeasible | Lower ranking in the hierarchy of evidence, since losing the power of randomization makes the study more susceptible to bias and confounding

What is a quasi-experimental design?

A quasi-experimental design is a non-randomized study design used to evaluate the effect of an intervention. The intervention can be a training program, a policy change or a medical treatment.

Unlike a true experiment, in a quasi-experimental study the choice of who gets the intervention and who doesn’t is not randomized. Instead, the intervention can be assigned to participants according to their choosing or that of the researcher, or by using any method other than randomness.

Having a control group is not required, but if present, it provides a higher level of evidence for the relationship between the intervention and the outcome.

(For more information, I recommend my other article: Understand Quasi-Experimental Design Through an Example.)

Examples of quasi-experimental designs include:

  • One-Group Posttest Only Design
  • Static-Group Comparison Design
  • One-Group Pretest-Posttest Design
  • Separate-Sample Pretest-Posttest Design

What is an experimental design?

An experimental design is a randomized study design used to evaluate the effect of an intervention. In its simplest form, the participants will be randomly divided into 2 groups:

  • A treatment group: where participants receive the new intervention whose effect we want to study.
  • A control or comparison group: where participants do not receive any intervention at all (or receive some standard intervention).

Randomization ensures that each participant has the same chance of receiving the intervention. Its objective is to equalize the 2 groups, so that any observed difference in the study outcome afterwards can be attributed to the intervention alone; in other words, it removes confounding.

(For more information, I recommend my other article: Purpose and Limitations of Random Assignment.)
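The equalizing effect of randomization is easy to demonstrate by simulation. Below is a sketch using synthetic ages as the confounder (the sample sizes and distribution parameters are arbitrary choices for illustration):

```python
import random

# Simulation: random assignment tends to equalize a confounder (here, age)
# between treatment and control groups. All numbers are synthetic.
random.seed(42)
ages = [random.gauss(50, 10) for _ in range(10_000)]

random.shuffle(ages)                        # random assignment
treatment, control = ages[:5_000], ages[5_000:]

def mean(xs):
    return sum(xs) / len(xs)

diff = abs(mean(treatment) - mean(control))
# With 5,000 participants per arm, the difference in mean age is
# typically a small fraction of a year.
print(f"difference in mean age between groups: {diff:.2f} years")
```

The same logic applies to confounders the researcher never measured, which is what makes randomization so powerful.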

Examples of experimental designs include:

  • Posttest-Only Control Group Design
  • Pretest-Posttest Control Group Design
  • Solomon Four-Group Design
  • Matched Pairs Design
  • Randomized Block Design

When to choose an experimental design over a quasi-experimental design?

Although many statistical techniques can be used to deal with confounding in a quasi-experimental study, in practice, randomization is still the best tool we have to study causal relationships.

Another problem with quasi-experiments is the natural progression of the disease or condition under study: when studying the effect of an intervention over time, one should account for natural changes, because these can be mistaken for changes in outcome caused by the intervention. Having a well-chosen control group helps deal with this issue.

So, if losing the element of randomness seems like an unwise step down in the hierarchy of evidence, why would we ever want to do it?

This is what we’re going to discuss next.

When to choose a quasi-experimental design over a true experiment?

The issue with randomization is that it is not always achievable.

So here are some cases where using a quasi-experimental design makes more sense than using an experimental one:

  • If being in one group is believed to be harmful for the participants , either because the intervention itself is harmful (ex. randomizing people to smoking), or because the intervention has questionable efficacy, or, on the contrary, because it is believed to be so beneficial that it would be unethical to withhold it from the control group (ex. randomizing people to receiving an operation).
  • In cases where interventions act on a group of people in a given location , it becomes difficult to adequately randomize subjects (ex. an intervention that reduces pollution in a given area).
  • When working with small sample sizes , as randomized controlled trials require a large sample size to account for heterogeneity among subjects (i.e. to evenly distribute confounding variables between the intervention and control groups).

Further reading

  • Statistical Software Popularity in 40,582 Research Papers
  • Checking the Popularity of 125 Statistical Tests and Models
  • Objectives of Epidemiology (With Examples)
  • 12 Famous Epidemiologists and Why

Shalini, Shalini, and Sundari Apte. "A Quasi-experimental Study to Assess the Effectiveness of Planned Teaching Programme on Knowledge Regarding the Electroconvulsive Therapy Among Family Members of Patients with Mental Illness." International journal of health sciences , vol. 6, no. S8, 2022, pp. 4235-4243, doi: 10.53730/ijhs.v6nS8.13142 .


Effectiveness of a planned teaching programme on knowledge regarding electroconvulsive therapy among family members of patients with mental illness. The objectives were to assess the pre-test knowledge level about electroconvulsive therapy among family members of patients with mental illness in both the control and experimental groups; to assess the post-test knowledge level in both groups; to determine the effectiveness of the planned teaching programme; and to associate the pre-test knowledge level with selected demographic variables. Methodology: The study was quantitative in approach, and the design was quasi-experimental with a pre-test/post-test control group. The sample size was 60 each in the experimental and control groups. Family members of mentally ill patients were the population for this study. A non-probability purposive sampling technique was used to select the samples. Data were collected using tools consisting of a demographic data sheet and a 30-item knowledge questionnaire on electroconvulsive therapy.


7.3 Quasi-Experimental Research

Learning Objectives

  • Explain what quasi-experimental research is and distinguish it clearly from both experimental and correlational research.
  • Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one.

The prefix quasi means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here.

Nonequivalent Groups Design

Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A nonequivalent groups design, then, is a between-subjects design in which participants have not been randomly assigned to conditions.

Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students (say, Ms. Williams's class) and a control group consisting of another class (Mr. Jones's class). This would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams's class. Or the principal might have assigned the "troublemakers" to Mr. Jones's class because he is a stronger disciplinarian. Of course, the teachers' styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes' knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.

Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.
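The group-similarity steps described above can be pushed one level further by matching individual students across the two classes on pretest scores. A sketch of greedy nearest-neighbor matching (all scores are invented for illustration):

```python
# Greedy nearest-neighbor matching of treatment to control students
# on a standardized pretest score. Scores are invented for illustration.
treatment = [72, 85, 90, 64]          # class taught with the new method
control = [60, 70, 74, 83, 88, 95]    # comparison class

pairs = []
available = list(control)
for t in sorted(treatment):
    # Pick the closest still-unmatched control score.
    c = min(available, key=lambda x: abs(x - t))
    available.remove(c)
    pairs.append((t, c))
print(pairs)  # [(64, 60), (72, 70), (85, 83), (90, 88)]
```

Matching improves comparability on the matched variable, but, as the text notes, it cannot rule out confounding from variables that were not measured or matched on.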

Pretest-Posttest Design

In a pretest-posttest design, the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an antidrug education program on elementary school students' attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the antidrug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an "untreated" control condition.

If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of history. Other things might have happened between the pretest and the posttest. Perhaps an antidrug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it. Another category of alternative explanations goes under the name of maturation. Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might become less impulsive or better reasoners and this might be responsible for the change.

Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean. This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will "regress" toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect.

A closely related concept—and an extremely important one in psychological research—is spontaneous remission. This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001). Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
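Regression to the mean is easy to reproduce by simulation: model each observed score as true ability plus noise, select the extreme low scorers, and retest. A sketch with synthetic numbers (all parameters are arbitrary choices for illustration):

```python
import random

# Simulation of regression to the mean. Observed score = true ability + noise;
# all numbers are synthetic.
random.seed(0)
true_ability = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [a + random.gauss(0, 8) for a in true_ability]
test2 = [a + random.gauss(0, 8) for a in true_ability]

# Select the students with extremely low scores on the first test (bottom ~10%).
cutoff = sorted(test1)[999]
low = [i for i, s in enumerate(test1) if s <= cutoff]

def mean(xs):
    return sum(xs) / len(xs)

m1 = mean([test1[i] for i in low])
m2 = mean([test2[i] for i in low])
# With no intervention at all, the selected group's retest mean is
# noticeably higher than its first-test mean.
print(f"selected group: test 1 mean {m1:.1f}, retest mean {m2:.1f}")
```

The selected group improves purely because its first scores were partly bad luck (noise), which does not repeat on the retest — exactly the trap described for the fractions training program.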

Does Psychotherapy Work?

Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952). But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate without receiving psychotherapy. This suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here:

http://psychclassics.yorku.ca/Eysenck/psychotherapy.htm

Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980). They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.

Hans Eysenck

In a classic 1952 article, researcher Hans Eysenck pointed out the shortcomings of the simple pretest-posttest design for evaluating the effectiveness of psychotherapy.

Wikimedia Commons – CC BY-SA 3.0.

Interrupted Time Series Design

A variant of the pretest-posttest design is the interrupted time-series design. A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers' productivity each week for a year. In an interrupted time-series design, a time series like this is "interrupted" by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979). Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.

Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.

Figure 7.5 A Hypothetical Interrupted Time-Series Design

The top panel shows data that suggest that the treatment caused a reduction in absences. The bottom panel shows data that suggest that it did not.

Combination Designs

A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does not receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve more than participants who do not receive the treatment.

Imagine, for example, that students in one school are given a pretest on their attitudes toward drugs, then are exposed to an antidrug program, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to an antidrug program, and finally are given a posttest. Again, if students in the treatment condition become more negative toward drugs, this could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should become more negative than students in the control condition. But if it is a matter of history (e.g., news of a celebrity drug overdose) or maturation (e.g., improved reasoning), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a student drug overdose), so students at the first school would be affected by it while students at the other school would not.

Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi-experiment. In fact, it is the kind of experiment that Eysenck called for—and that has now been conducted many times—to demonstrate the effectiveness of psychotherapy.

Key Takeaways

  • Quasi-experimental research involves the manipulation of an independent variable without the random assignment of participants to conditions or orders of conditions. Among the important types are nonequivalent groups, pretest-posttest, and interrupted time-series designs.
  • Quasi-experimental research eliminates the directionality problem because it involves the manipulation of the independent variable. It does not eliminate the problem of confounding variables, however, because it does not involve random assignment to conditions. For these reasons, quasi-experimental research is generally higher in internal validity than correlational studies but lower than true experiments.
  • Practice: Imagine that two college professors decide to test the effect of giving daily quizzes on student performance in a statistics course. They decide that Professor A will give quizzes but Professor B will not. They will then compare the performance of students in their two sections on a common final exam. List five other variables that might differ between the two sections that could affect the results.

Discussion: Imagine that a group of obese children is recruited for a study in which their weight is measured, then they participate for 3 months in a program that encourages them to be more active, and finally their weight is measured again. Explain how each of the following might affect the results:

  • regression to the mean
  • spontaneous remission

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin.

Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16 , 319–324.

Posternak, M. A., & Miller, I. (2001). Untreated short-term course of major depression: A meta-analysis of outcomes from studies using wait-list control groups. Journal of Affective Disorders, 66 , 139–146.

Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy . Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


We use cookies to distinguish you from other users and to provide you with a better experience on our websites. Close this message to accept cookies or find out how to manage your cookie settings .

Login Alert

quasi experimental tagalog

  • > Journals
  • > Prehospital and Disaster Medicine
  • > Volume 34 Issue 6
  • > Quasi-Experimental Design (Pre-Test and Post-Test Studies)...

quasi experimental tagalog

Article contents

Quasi-Experimental Design (Pre-Test and Post-Test Studies) in Prehospital and Disaster Research

Published online by Cambridge University Press:  26 November 2019

This article is another in a series that discusses research methods frequently used in prehospital and disaster research. A common type of submission to Prehospital and Disaster Medicine is research based on a pre-test and post-test evaluation of an education curriculum, triage scheme, or simulation training method. This is particularly true of studies comparing or proposing validation of mass-casualty triage algorithms.

Pre-test and post-test research is one of many forms of quasi-experimental design. The prefix "quasi" means "resembling": quasi-experimental research resembles true experimental research but is not true experimental research. An example of a quasi-experimental design is the evaluation of a new mass-casualty triage system: a group of Emergency Medical Services (EMS) personnel first completes a pre-test based on triage scenarios, then participates in training for the new triage method, and finally completes a post-test, whose results are compared with the pre-test scores. If post-test scores exceed pre-test scores, one assumes the triage training was successful.

Pre-test and post-test designs are also used to evaluate participants' attitudes or perceptions relative to an event, or to assess their comfort in applying information presented in a training session or with the introduction of a new concept (an acceptance and efficacy study). One assumes that an increase in knowledge or in positive attitude, evident in better scores on a post-test compared with a pre-test, reflects improved knowledge or perception attributable to the intervention applied after the pre-test.

An advantage of a pre-test and post-test study design is that the research has directionality: a dependent variable (knowledge or attitude) is measured before and after intervention with an independent variable (a training or information-presentation session). This resembles classic experimental design, yet because participants are usually not randomly assigned, quasi-experimental design is better regarded as a correlational (non-experimental) design. Because quasi-experimental research is not truly experimental, outcome causality cannot be determined; rather, associations between interventions and outcomes are identified.

As far back as the 18th century, pre-test and post-test research methods have been used in many fields, including medicine, nursing, health, mental health, and education. The method has remained in common use because it is a rapid, convenient way to assess a target group to which an intervention has been applied. The literature base is rich with pre-test/post-test studies, which allows for comparison across studies and for meta-analysis of previously published work of this form. Pre-test and post-test evaluation also allows for immediate assessment of an intervention (such as a simulation session) and provides a means for rapid refinement of instructor teaching or simulation technique. In addition to being convenient, the design permits statistical analysis of the data using established methods.

Pre-test and post-test design based on purposeful sampling allows for assessment of specific representatives of a population of interest, but not of the population as a whole. For example, if one wishes to evaluate the effect of a simulation session on the knowledge of a disaster Emergency Medical Team, that team can participate in a simulation exercise in which a pre-test and a post-test are used to evaluate results. But the results of such an evaluation are valid only for the team tested, not for other Emergency Medical Teams.

In the 1960s, the validity of quasi-experimental design came into question, and a number of papers were published evaluating the various forms of this type of research. Since that time, several limitations of pre-test and post-test study designs have been identified. As noted above, participants in these studies are rarely selected by random sampling and instead represent a convenience or purposeful sample. This non-probabilistic sampling means that the results of such a study can be applied only to the participants, not to a general target population.

The use of testing, in itself, may also bias a study. A pre-test will likely sensitize participants to the test, alerting them to the limited material required to score better on a post-test rather than encouraging adequate general knowledge of the subject of interest. This is a particular problem when the pre-test and post-test are the same or similar. Pre-testing also familiarizes participants with the terminology, making it easier to take, and score higher on, the post-test.

Another limitation of pre-test and post-test design is statistical regression: the tendency of a group to move toward a common mean as an artifact of repeated testing. In other words, those who score poorly on the pre-test have nowhere to go but up, and those who score high have nowhere to go but down on the post-test.

Other limitations include knowledge or attitude "decay," that is, changes in retained information or skills over time. A knowledge-based pre-test and post-test study may show good initial results, but without application the concepts gained will be forgotten unless used frequently. This is an even greater problem for attitude-assessment studies, in which attitudes can change rapidly based on personal experience and external stimuli (media, social interactions), with loss of an intervention's positive results over time.

There are a number of methods that can improve the validity of pre-test and post-test study designs. One obvious strategy is to select a target group (for example, paramedics in a system) and randomly assign a group of study participants and a group of controls. Both groups then take the pre-test and post-test at the same interval, with only the study group receiving the intervention (for example, a simulation session). Comparing test scores for the study and control groups addresses some limitations inherent in testing validity. Another method is to design a study with a pre-test, an immediate post-test, and a later post-test (usually six months after the intervention) to account for learning or attitude decay and ongoing external stimulation. Using different questions on the pre-test and post-test to assess the same general knowledge or attitude also improves validity. It is important that both tests be validated as accurate measures of the outcomes of interest before being used in the study. Tests should be scored consistently, preferably by an unbiased scorer (grader) who is blinded to the participants and who did not design or organize the intervention session.
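The randomized study-and-control strategy described above can be sketched with gain scores (a minimal illustration; the test scores below are hypothetical). Because both groups take the same pre-test and post-test over the same interval, shared influences such as testing effects are subtracted out when gains are compared:

```python
import statistics

# Hypothetical test scores (0-100) for a randomly assigned study group
# that received a simulation session and a control group that did not.
study_pre    = [55, 60, 48, 62, 58, 65, 52, 59]
study_post   = [72, 78, 66, 80, 74, 82, 70, 75]
control_pre  = [56, 61, 50, 60, 57, 63, 54, 58]
control_post = [60, 64, 53, 64, 59, 66, 57, 61]

# Gain scores: change per participant over the same interval.
study_gain   = [post - pre for pre, post in zip(study_pre, study_post)]
control_gain = [post - pre for pre, post in zip(control_pre, control_post)]

mean_study_gain = statistics.mean(study_gain)
mean_control_gain = statistics.mean(control_gain)

# The intervention-effect estimate is the difference in mean gains;
# the control group's gain captures testing and history effects alone.
effect = mean_study_gain - mean_control_gain
print(mean_study_gain, mean_control_gain, round(effect, 1))
```

Here the control group still improves slightly (a testing effect), so the difference in gains is a fairer estimate of the intervention's contribution than the study group's raw pre-post change.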

Finally, the statistical tests applied to pre-test and post-test results should be appropriate. It is essential to report 25% and 75% quartiles with medians for ordinal data (such as Likert-scale data) and 95% confidence intervals for means and proportions. While probability statistics such as t-tests and chi-square analyses may show statistical significance, overlap of these ranges around the mean or median (confidence intervals or quartiles) indicates a lack of clinical significance and poor practical applicability of the research results.
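The reporting practice above can be sketched in a few lines (a hypothetical example with invented scores; the normal-approximation interval is one simple choice, not the only valid one):

```python
import math
import statistics

def mean_ci95(data):
    """Normal-approximation 95% confidence interval for a mean."""
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(len(data))
    return m - 1.96 * se, m + 1.96 * se

# Hypothetical pre/post knowledge scores for one group of participants.
pre  = [52, 58, 61, 55, 49, 63, 57, 60, 54, 59]
post = [70, 74, 68, 72, 77, 69, 73, 75, 71, 76]

pre_lo, pre_hi = mean_ci95(pre)
post_lo, post_hi = mean_ci95(post)

# Overlapping intervals would suggest the change lacks practical meaning.
overlap = pre_hi >= post_lo

# For ordinal Likert-scale responses, report the median with quartiles.
likert = [3, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 3]
q1, median, q3 = statistics.quantiles(likert, n=4)
print(overlap, (q1, median, q3))
```

When the two 95% confidence intervals do not overlap, the pre-post difference is unlikely to be an artifact of sampling variability, which is the "clinical significance" check the text describes.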

In summary, quasi-experimental design has been a common research method for centuries. Pre-test and post-test design is a form of quasi-experimental research that allows uncomplicated assessment of an intervention applied to a group of study participants. Validity is difficult to achieve because the design has inherent flaws, but strategies such as randomization, limiting internal and external bias, and appropriate application of basic statistics allow a researcher to identify associations in outcome measures with this popular study design.


  • Prehospital and Disaster Medicine, Volume 34, Issue 6
  • Samuel J. Stratton
  • DOI: https://doi.org/10.1017/S1049023X19005053

