
An introduction to different types of study design

Posted on 6th April 2021 by Hadi Abbas

""

Study designs are the set of methods and procedures used to collect and analyze data in a study.

Broadly speaking, there are 2 types of study designs: descriptive studies and analytical studies.

Descriptive studies

  • Describe specific characteristics in a population of interest
  • The most common forms are case reports and case series
  • In a case report, we discuss our experience with a patient’s symptoms, signs, diagnosis, and treatment
  • In a case series, several patients with similar experiences are grouped together

Analytical Studies

Analytical studies are of 2 types: observational and experimental.

Observational studies are studies that we conduct without any intervention or experiment. In those studies, we purely observe the outcomes.  On the other hand, in experimental studies, we conduct experiments and interventions.

Observational studies

Observational studies include many subtypes. Below, I will discuss the most common designs.

Cross-sectional study:

  • This is a transverse design: we take a specific sample at a specific point in time, without any follow-up
  • It allows us to calculate the frequency of a disease (prevalence) or the frequency of a risk factor
  • This design is easy to conduct
  • For example – if we want to know the prevalence of migraine in a population, we can conduct a cross-sectional study whereby we take a sample from the population and count the number of patients with migraine headaches (a minimal calculation is sketched below).
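As a rough illustration of the prevalence calculation described in the example above, the short Python sketch below uses made-up numbers (they are not taken from any real survey):

```python
# Point prevalence = number of existing cases / number of people examined.
# Illustrative numbers only; not taken from a real study.
sample_size = 1200        # people examined at a single point in time
migraine_cases = 174      # how many of them meet the migraine criteria

prevalence = migraine_cases / sample_size
print(f"Prevalence of migraine: {prevalence:.1%}")   # -> 14.5%
```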

Cohort study:

  • We conduct this study by comparing two samples from the population: one sample with a risk factor while the other lacks this risk factor
  • It shows us the risk of developing the disease in individuals with the risk factor compared to those without it (RR = relative risk)
  • Prospective: we follow the individuals into the future to find out who develops the disease
  • Retrospective: we look to the past to find out who developed the disease (e.g. using medical records)
  • This design is the strongest among the observational studies
  • For example – to find out the relative risk of developing chronic obstructive pulmonary disease (COPD) among smokers, we take a sample including smokers and non-smokers. Then, we calculate the number of individuals with COPD in both groups (see the sketch after this list).
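The relative risk mentioned above is simply the risk in the exposed group divided by the risk in the unexposed group. A minimal Python sketch, using hypothetical counts for the smoking/COPD example:

```python
# Relative risk (RR) = risk in exposed / risk in unexposed.
# Hypothetical 2x2 counts from a cohort of smokers (exposed) and non-smokers (unexposed).
copd_smokers, total_smokers = 60, 400
copd_nonsmokers, total_nonsmokers = 15, 600

risk_exposed = copd_smokers / total_smokers            # 0.150
risk_unexposed = copd_nonsmokers / total_nonsmokers    # 0.025
relative_risk = risk_exposed / risk_unexposed
print(f"RR = {relative_risk:.1f}")                     # -> RR = 6.0
```

An RR above 1 suggests the outcome is more frequent in the exposed group; with these invented counts, smokers are six times as likely to have COPD as non-smokers.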

Case-Control Study:

  • We conduct this study by comparing 2 groups: one group with the disease (cases) and another group without the disease (controls)
  • This design is always retrospective
  • We aim to find out the odds of having a risk factor or an exposure if an individual has a specific disease (odds ratio)
  • Relatively easy to conduct
  • For example – we want to study the odds of being a smoker among hypertensive patients compared to normotensive ones. To do so, we choose a group of patients diagnosed with hypertension and another group that serves as the control (normal blood pressure). Then we study their smoking history to find out whether there is an association (a minimal calculation is sketched below).
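The odds ratio from a case-control table can be sketched in the same way; the 2x2 counts below are invented purely for illustration:

```python
# Odds ratio (OR) = (a/b) / (c/d) for a 2x2 table:
#                    smoker    non-smoker
#   hypertensive      a = 90     b = 110
#   normotensive      c = 60     d = 140
# Hypothetical counts for illustration only.
a, b, c, d = 90, 110, 60, 140

odds_cases = a / b          # odds of smoking among hypertensive patients
odds_controls = c / d       # odds of smoking among normotensive controls
odds_ratio = odds_cases / odds_controls
print(f"OR = {odds_ratio:.2f}")   # -> OR = 1.91
```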

Experimental Studies

  • Also known as interventional studies
  • Can involve animals and humans
  • Pre-clinical trials involve animals
  • Clinical trials are experimental studies involving humans
  • In clinical trials, we study the effect of an intervention compared to another intervention or placebo. As an example, I have listed the four phases of a drug trial:

I: We aim to assess the safety of the drug (is it safe?)

II: We aim to assess the efficacy of the drug (does it work?)

III: We want to know if this drug is better than the old treatment (is it better?)

IV: We follow up to detect long-term side effects (can it stay on the market?)

  • In randomized controlled trials, one group of participants receives the control, while the other receives the tested drug/intervention. Those studies are the best way to evaluate the efficacy of a treatment.

Finally, the figure below will help you with your understanding of different types of study designs.

A visual diagram describing the following. Two types of epidemiological studies are descriptive and analytical. Types of descriptive studies are case reports, case series, descriptive surveys. Types of analytical studies are observational or experimental. Observational studies can be cross-sectional, case-control or cohort studies. Types of experimental studies can be lab trials or field trials.

References (pdf)

You may also be interested in the following blogs for further reading:

An introduction to randomized controlled trials

Case-control and cohort studies: a brief overview

Cohort studies: prospective and retrospective designs

Prevalence vs Incidence: what is the difference?





Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results; this minimizes several types of research bias, particularly sampling bias, survivorship bias, and attrition bias. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Frequently asked questions about experiments

Step 1: Define your variables

You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

Phone use and sleep
  • Independent variable: Minutes of phone use before sleep
  • Dependent variable: Hours of sleep per night

Temperature and soil respiration
  • Independent variable: Air temperature just above the soil surface
  • Dependent variable: CO2 respired from soil

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Phone use and sleep
  • Extraneous variable: Natural variation in sleep patterns among individuals
  • How to control: Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group

Temperature and soil respiration
  • Extraneous variable: Soil moisture, which also affects respiration and can decrease with increasing temperature
  • How to control: Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Step 2: Write your hypothesis

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Phone use and sleep
  • Null hypothesis (H0): Phone use before sleep does not correlate with the amount of sleep a person gets.
  • Alternate hypothesis (Ha): Increasing phone use before sleep leads to a decrease in sleep.

Temperature and soil respiration
  • Null hypothesis (H0): Air temperature does not correlate with soil respiration.
  • Alternate hypothesis (Ha): Increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

Step 3: Design your experimental treatments

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

For the temperature and soil respiration example, you could increase air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

For the phone use example, you could treat phone use as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

Step 4: Assign your subjects to treatment groups

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.
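As a rough illustration of the link between study size and statistical power (this sketch is not part of the original guide; it assumes the Python statsmodels package is installed and uses an illustrative effect size of 0.5):

```python
# A priori sample-size calculation for a two-group comparison.
# The effect size, alpha and power targets below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # assumed standardized difference (Cohen's d)
    alpha=0.05,               # significance level
    power=0.80,               # desired statistical power
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")   # roughly 64
```

Smaller assumed effects or higher power targets push the required sample size up quickly.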

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design.
  • A between-subjects design vs a within-subjects design.

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design, every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

Phone use and sleep
  • Completely randomized design: Subjects are all randomly assigned a level of phone use using a random number generator.
  • Randomized block design: Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.

Temperature and soil respiration
  • Completely randomized design: Warming treatments are assigned to soil plots at random, using a number generator to generate map coordinates within the study area.
  • Randomized block design: Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
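A minimal Python sketch of the two assignment schemes just described, using the standard library's random module and a made-up list of subjects (the age groups and group sizes are invented for the example):

```python
import random

random.seed(42)   # reproducible illustration only
treatments = ["no phone use", "low phone use", "high phone use"]
subjects = [{"id": i, "age_group": "18-30" if i % 2 else "31-60"} for i in range(12)]

# Completely randomized design: every subject is assigned a treatment at random
# (simple randomization, so group sizes may end up unequal).
completely_randomized = {s["id"]: random.choice(treatments) for s in subjects}

# Randomized block design: shuffle a balanced set of treatments *within* each
# age block, so every block contains roughly equal numbers of each treatment.
blocks = {}
for s in subjects:
    blocks.setdefault(s["age_group"], []).append(s["id"])

block_randomized = {}
for ids in blocks.values():
    balanced = (treatments * (len(ids) // len(treatments) + 1))[: len(ids)]
    random.shuffle(balanced)
    block_randomized.update(dict(zip(ids, balanced)))

print(completely_randomized)
print(block_randomized)
```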

Sometimes randomization isn’t practical or ethical, so researchers create partially random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Phone use and sleep
  • Between-subjects (independent measures) design: Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment.
  • Within-subjects (repeated measures) design: Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomized.

Temperature and soil respiration
  • Between-subjects design: Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment.
  • Within-subjects design: Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomized.
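Counterbalancing, as described above, can be as simple as shuffling the treatment order independently for each participant. A small Python sketch with hypothetical participant IDs:

```python
import random

random.seed(7)   # reproducible illustration only
treatments = ["none", "low", "high"]        # levels of phone use before sleep

# Within-subjects design with counterbalancing: every participant receives all
# treatments, but in an independently randomized order.
orders = {}
for participant in ["P01", "P02", "P03", "P04"]:
    order = treatments[:]                   # copy, so each participant is shuffled separately
    random.shuffle(order)
    orders[participant] = order

for participant, order in orders.items():
    print(participant, "->", " then ".join(order))
```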


Step 5: Measure your dependent variable

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

For the sleep study, for example, you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.


Frequently asked questions about experiments

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment.

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.


Design of Experimental Studies in Biomedical Sciences

  • First Online: 06 February 2020


  • Bagher Larijani   ORCID: orcid.org/0000-0001-5386-7597 4 ,
  • Akram Tayanloo-Beik   ORCID: orcid.org/0000-0001-8370-9557 5 ,
  • Moloud Payab   ORCID: orcid.org/0000-0002-9311-8395 6 ,
  • Mahdi Gholami 7 ,
  • Motahareh Sheikh-Hosseini 8 &
  • Mehran Nematizadeh 8  

Part of the book series: Learning Materials in Biosciences (LMB)


Proposing, investigating, and testing new theories leads to considerable progress in science, and appropriate experimental design is of fundamental importance in this process. Well-designed experimental studies that take key points into consideration, and that are appropriately analyzed and reported, can maximize scientific gains. However, there are still some problems in this process that should be addressed. Accordingly, using adequate methods based on the aim of the experiments, qualifying the laboratory tests, ensuring the reproducibility of results, applying appropriate statistics for data analysis, reporting data transparently, and conducting pilot studies are fundamental considerations that will be discussed in this chapter. The chapter begins with a brief definition of experimental studies and the different types of research in the biosciences. It then reviews the general principles of experimental study design and conduct. Finally, it describes some limitations and challenges in the field of experimental studies, as well as validation and standardization.



Author information

Authors and Affiliations

Endocrinology and Metabolism Research Center, Endocrinology and Metabolism Clinical Sciences Institute, Tehran University of Medical Sciences, Tehran, Iran

Bagher Larijani

Cell Therapy and Regenerative Medicine Research Center, Endocrinology and Metabolism Molecular-Cellular Sciences Institute, Tehran University of Medical Sciences, Tehran, Iran

Akram Tayanloo-Beik

Obesity and Eating Habits Research Center, Endocrinology and Metabolism Molecular-Cellular Sciences Institute, Tehran University of Medical Sciences, Tehran, Iran

Moloud Payab

Department of Toxicology & Pharmacology, Faculty of Pharmacy, Toxicology and Poisoning Research Center, Tehran University of Medical Sciences, Tehran, Iran

Mahdi Gholami

Metabolomics and Genomics Research Center, Endocrinology and Metabolism Molecular-Cellular Sciences Institute, Tehran University of Medical Sciences, Tehran, Iran

Motahareh Sheikh-Hosseini & Mehran Nematizadeh


Editor information

Editors and Affiliations

Babak Arjmand

Brain and Spinal Cord Injury Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran

Parisa Goodarzi


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Larijani, B., Tayanloo-Beik, A., Payab, M., Gholami, M., Sheikh-Hosseini, M., Nematizadeh, M. (2020). Design of Experimental Studies in Biomedical Sciences. In: Arjmand, B., Payab, M., Goodarzi, P. (eds) Biomedical Product Development: Bench to Bedside. Learning Materials in Biosciences. Springer, Cham. https://doi.org/10.1007/978-3-030-35626-2_4


DOI : https://doi.org/10.1007/978-3-030-35626-2_4

Published : 06 February 2020

Publisher Name : Springer, Cham

Print ISBN : 978-3-030-35625-5

Online ISBN : 978-3-030-35626-2

eBook Packages: Biomedical and Life Sciences; Biomedical and Life Sciences (R0)


Pharmacoepidemiology: Principles and Practice

Chapter 4. Experimental Study Designs


Experimental study designs are the primary method for testing the effectiveness of new therapies and other interventions, including innovative drugs. By the 1930s, the pharmaceutical industry had adopted experimental methods and other research designs to develop and screen new compounds, improve production outputs, and test drugs for therapeutic benefits. The full potential of experimental methods in drug research was realized in the 1940s and 1950s with the growth in scientific knowledge and industrial technology. 1

In the 1960s, the controlled clinical trial, in which a group of patients receiving an experimental drug is compared with another group receiving a control drug or no treatment, became the standard for doing pharmaceutical research and measuring the therapeutic benefits of new drugs. 1 By that time, the double-blind strategy of drug testing, in which both the patients and the researcher are unaware of which treatment is being taken by whom, had been adopted to limit the effect of external influences on the true pharmacological action of the drug. The drug regulations of the 1960s also reinforced the importance of controlled clinical trials by requiring that proof of effectiveness for new drugs be made through use of these research methods. 2,3

In pharmacoepidemiology, the primary use of experimental design is in performing clinical trials, most notably randomized, controlled clinical trials. 4 These studies involve people as the units of analysis. A variation on this experimental design is the community intervention study, in which groups of people, such as whole communities, are the unit of analysis. Key aspects of the clinical and community intervention trial designs are randomization, blinding, intention-to-treat analysis, and sample size determination.

An experiment is a study designed to compare the benefits of an intervention, such as a new drug therapy or prevention program, with standard treatments or no treatment, or to show cause and effect (see Figure 3-2). This type of study is performed prospectively. Subjects are selected from a study population, assigned to the various study groups, and monitored over time to determine the outcomes that occur and are produced by the new drug therapy, treatment, or intervention.

Experimental designs have numerous advantages compared with other epidemiological methods. Randomization, when used, tends to balance confounding variables across the various study groups, especially variables that might be associated with changes in the disease state or the outcome of the intervention under study. Detailed information and data are collected at the beginning of an experimental study to develop a baseline; the same type of information is also collected at specified follow-up periods throughout the study. The investigators have control over variables such as the dose or degree of intervention. The blinding process reduces distortion in assessment. Also of great value, and not possible with other methods, is the testing of hypotheses. Most important, this design is the only real test of cause–effect relationships.



Experimental Design – Types, Methods, Guide


Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
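A minimal sketch of these descriptive statistics using Python's standard library (the ages below are invented for the example):

```python
# Descriptive statistics with the standard library; illustrative data only.
import statistics

ages = [34, 36, 36, 41, 45, 47, 52, 52, 52, 60]

print("mean:", statistics.mean(ages))
print("median:", statistics.median(ages))
print("mode:", statistics.mode(ages))
print("range:", max(ages) - min(ages))
print("standard deviation:", round(statistics.stdev(ages), 2))
```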

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
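A hedged one-way ANOVA sketch, assuming SciPy is available and using made-up scores for three independent groups:

```python
# One-way ANOVA comparing the means of three independent groups.
from scipy import stats

group_a = [23, 25, 28, 30, 27]
group_b = [31, 33, 29, 35, 34]
group_c = [22, 20, 25, 24, 23]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```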

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
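A simple linear regression sketch in the same spirit, again assuming SciPy and using invented dose–response values:

```python
# Simple linear regression: how does response change with dose?
from scipy import stats

dose = [0, 5, 10, 15, 20, 25]
response = [2.1, 3.8, 6.2, 7.9, 10.3, 12.0]    # illustrative measurements

result = stats.linregress(dose, response)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}, "
      f"r = {result.rvalue:.3f}, p = {result.pvalue:.4g}")
```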

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.
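A minimal clustering sketch, assuming scikit-learn is installed; the two-dimensional observations are invented so that two groups are easy to see:

```python
# K-means clustering of a small two-dimensional dataset.
from sklearn.cluster import KMeans

observations = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.2],    # one apparent group
                [8.0, 9.0], [8.3, 8.7], [7.9, 9.1]]    # another apparent group

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)
print("cluster labels: ", model.labels_.tolist())
print("cluster centres:", model.cluster_centers_.round(2).tolist())
```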

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture: Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology: Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering: Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education: Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing: Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research: A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question: Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment: Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment: Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, then it is accepted. If the results do not support the hypothesis, then it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication: Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision: Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability: If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality: Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias: Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time: Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias: Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility: Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Research Study Design

First things first: determining your research design.

Medical research studies have a number of possible designs. A strong research project closely ties the research questions/hypotheses to the methodology to be used, the variables to be measured or manipulated, and the planned analysis of collected data. CRCL personnel can help you determine the proper research design and data analyses to adequately address your research question. CRCL can also assist you in performing the proper statistical analysis on your collected data. Our staff are experts in research methodology and statistical analysis and are proficient with multiple statistical methods and statistical software packages. The type of research study you conduct determines what the proper data analyses are and what conclusions you can draw from your data.

Identify the type of research study you are planning on conducting:

Descriptive:

A descriptive analysis provides basic information about a sample drawn from a population of interest. In a descriptive analysis, information is reported about the frequencies and/or percentages of the qualities of interest in the sample (for example, the number of men or women with a disorder, or the percentage of people for whom a type of cancer progresses into stage 4 after a given amount of time). For variables that measure a quantity, measures of central tendency (mean, median, mode) and measures of dispersion (standard deviation, variance, range) may be calculated (for example, the average age at which cardiac arrhythmia symptoms first appeared, the median amount of time until a cancer metastasizes, or the most common (modal) rating of chronic pain). Descriptive studies simply describe; they do not address the relationship between variables, nor do they provide any information about how changes in one variable may cause changes in another.
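As a rough illustration, the sketch below computes these descriptive summaries in Python for a small, invented sample; the variable names and values are hypothetical and not taken from any actual study.

```python
# Minimal sketch of a descriptive analysis on fabricated data (standard library only).
import statistics

# Hypothetical: age (years) at which cardiac arrhythmia symptoms first appeared
onset_ages = [54, 61, 58, 70, 66, 58, 73, 49, 58, 62]

# Hypothetical categorical quality of interest: patient sex
sex = ["M", "F", "F", "M", "F", "M", "M", "F", "F", "M"]
n = len(sex)

# Frequencies / percentages
print(f"n = {n}, % female = {100 * sex.count('F') / n:.1f}")

# Measures of central tendency and dispersion
print(f"mean = {statistics.mean(onset_ages):.1f}")
print(f"median = {statistics.median(onset_ages)}")
print(f"mode = {statistics.mode(onset_ages)}")
print(f"standard deviation = {statistics.stdev(onset_ages):.1f}")
print(f"range = {max(onset_ages) - min(onset_ages)}")
```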

Correlational:

A correlational study examines the relationship between two or more variables and asks the question: are changes in one variable associated with changes in a second variable? The first variable is often referred to as the independent variable, the predictor variable, or the exogenous variable, while the second variable is referred to as the dependent variable, the criterion variable, or the endogenous variable. Examples of correlational studies would be studies that examine the relationship between age and cholesterol level or between dose of Lisinopril and blood pressure. Common statistical methods used in this type of study are Pearson correlation, chi-square, and regression. It is important to remember that correlation does not imply causation, only the existence of a relationship. That relationship may reflect a causal mechanism, but it may also be due to other factors that influence both variables.
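The brief sketch below shows what the Lisinopril example might look like as a Pearson correlation and a simple linear regression; the dose and blood-pressure values are invented for illustration, and SciPy is assumed to be available.

```python
# Hypothetical correlational analysis: drug dose vs. systolic blood pressure.
from scipy import stats

dose_mg = [5, 5, 10, 10, 20, 20, 40, 40]                 # predictor (exogenous) variable
systolic_bp = [148, 144, 140, 139, 133, 130, 124, 126]   # criterion (endogenous) variable

# Pearson correlation: strength and direction of the linear association
r, p_value = stats.pearsonr(dose_mg, systolic_bp)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

# Simple linear regression: estimated change in blood pressure per mg of dose
fit = stats.linregress(dose_mg, systolic_bp)
print(f"slope = {fit.slope:.2f} mmHg per mg, intercept = {fit.intercept:.1f} mmHg")

# Even a strong correlation here would not, by itself, establish causation.
```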

Quasi-experimental:

Quasi-experimental studies examine the question of whether groups differ, but the groups must be naturally occurring groups, not groups created by the researcher. The design is quasi-experimental because of the lack of random assignment of participants (patients, rats, bacterial cultures) into the groups of interest; the groups themselves are predetermined. For example, an examination of the relationship between the amount of vitamin D in the diet and chemotherapy outcomes for patients with prostate cancer versus colorectal cancer would be a quasi-experimental study, because the experimenter has not controlled who has which type of cancer or their intake of vitamin D. The comparison group may also be a naturally occurring control group; for example, an experimenter may study the frequency of cardiac arrhythmias in an elderly population by comparing one group who regularly consume alcohol with another group of elderly patients who normally abstain from alcohol. Common statistical analyses in non-experimental studies like these are t-tests, Analysis of Variance (ANOVA), regression, multiple regression, and moderated multiple regression. While non-experimental studies cannot fully prove causation, they can point to the need for more controlled experimental tests.
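A minimal sketch of the alcohol example is shown below, using an independent-samples t-test on two naturally occurring groups; all counts are fabricated and SciPy is assumed to be available.

```python
# Hypothetical quasi-experimental comparison: arrhythmia episodes per month in
# elderly patients who regularly drink alcohol vs. those who abstain.
from scipy import stats

episodes_drinkers = [4, 6, 5, 7, 3, 8, 6, 5]
episodes_abstainers = [3, 2, 4, 3, 5, 2, 4, 3]

# Welch's independent-samples t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(episodes_drinkers, episodes_abstainers, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Group membership was not assigned by the researcher, so even a significant
# difference indicates an association rather than proof of causation.
```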

Experimental:

The most stringent test of a scientific hypothesis is an experimental study. The hallmark of an experimental design is that rather than simply measuring an independent variable or selecting a preexisting group that differs on that variable, the experimenter manipulates that variable to create experimental and control groups. Random assignment is used to create the groups and, if the sample is large enough, helps to equate the experimental and control groups on all variables except for the variables of interest. An example of an experimental design would be randomly assigning patients with congestive heart failure into one of three groups (two doses of a new beta-blocker or a placebo condition) and examining ejection fraction after three months to determine if heart function differs between the three groups. Common statistical analyses in experimental studies like these are also t-tests, Analysis of Variance (ANOVA), regression, multiple regression, and moderated multiple regression. The stronger the controls in an experimental design, the more justified one is in concluding that the manipulation of the independent variable caused the changes in the dependent variable.
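The following sketch mirrors the beta-blocker example at a toy scale: patients are randomly assigned to three conditions and the ejection-fraction outcomes are compared with a one-way ANOVA. All identifiers and measurements are invented, and SciPy is assumed to be available.

```python
# Hypothetical experimental design: random assignment to three conditions,
# then a one-way ANOVA on ejection fraction (%) measured after three months.
import random
from scipy import stats

patients = [f"patient_{i:02d}" for i in range(1, 31)]
random.shuffle(patients)                      # random assignment step
assignment = {
    "low_dose": patients[0:10],
    "high_dose": patients[10:20],
    "placebo": patients[20:30],
}

# Fabricated outcomes, one list per condition
# (in a real study these would be recorded per assigned patient)
ef_low_dose = [42, 45, 44, 47, 43, 46, 44, 45, 48, 43]
ef_high_dose = [48, 50, 49, 52, 47, 51, 50, 49, 53, 48]
ef_placebo = [40, 41, 39, 42, 40, 38, 41, 40, 39, 42]

f_stat, p_value = stats.f_oneway(ef_low_dose, ef_high_dose, ef_placebo)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A significant F statistic would typically be followed by post-hoc pairwise tests.
```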

Enago Academy

Experimental Research Design — 6 mistakes you should never make!


From their school days, students perform scientific experiments whose results demonstrate and test the laws and theorems of science. These experiments rest on a strong foundation of experimental research design.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables is held constant and used as a baseline to measure differences in the second set. The best example of an experimental research method is quantitative research.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable (never-changing) relationship between the cause and the effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which to build the research study. Moreover, an effective research design helps establish quality decision-making procedures, structures the research to lead to easier data analysis, and addresses the main research question. Therefore, it is essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A pre-experimental research design is used when one or more groups are observed after the factors of cause and effect under study have been applied. The pre-experimental design will help researchers understand whether further investigation is necessary for the groups under observation.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution, i.e. random assignment of participants to the groups

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word “quasi” means “resembling” or “as if”. A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either not feasible or not practical.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject area does not limit the effectiveness of experimental research; it can be applied in any field.
  • The results are specific.
  • After the results are analyzed, findings from the same dataset can be repurposed for similar research questions.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis is logically testable. If your research design does not rest on basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review, it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or by challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis could affect the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear, and to achieve that you must set up a framework for developing research questions that address the core problems.

5. Research Limitations

Every study has limitations. You should anticipate those limitations and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing your conclusions.

6. Ethical Implications

The most important yet least discussed topic is ethics. Your research design must include ways to minimize any risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms alongside your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
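To make the logic of this example concrete, the sketch below randomly assigns invented plant samples to the two conditions and uses a simple permutation test to ask whether the observed difference in a biochemical outcome could plausibly arise by chance; every value is fabricated and only the Python standard library is used.

```python
# Hypothetical version of the plant experiment: random assignment to sunlight
# vs. dark, then a permutation test on an invented biochemical outcome
# (e.g. chlorophyll content, arbitrary units).
import random

samples = [f"plant_{i:02d}" for i in range(1, 21)]
random.shuffle(samples)                    # randomly assign half to each condition
sunlight_group, dark_group = samples[:10], samples[10:]

outcome_sunlight = [8.1, 7.9, 8.4, 8.0, 8.3, 7.8, 8.2, 8.5, 7.9, 8.1]
outcome_dark = [5.2, 5.6, 5.1, 5.4, 5.0, 5.3, 5.5, 5.2, 5.4, 5.1]

observed_diff = sum(outcome_sunlight) / 10 - sum(outcome_dark) / 10

# Permutation test: how often does randomly relabelling the 20 measurements
# produce a mean difference at least as extreme as the observed one?
pooled = outcome_sunlight + outcome_dark
n_extreme, n_perm = 0, 10_000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[:10]) / 10 - sum(pooled[10:]) / 10
    if abs(diff) >= abs(observed_diff):
        n_extreme += 1

print(f"observed difference = {observed_diff:.2f}")
print(f"permutation p-value ≈ {n_extreme / n_perm:.4f}")
```

Because the assignment was random and the other variables were held constant, a difference this unlikely to occur by chance supports attributing the change in the plants to sunlight rather than to other factors.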

Experimental research is often the final stage of the research process and is considered to provide conclusive and specific results. However, it is not suited to every research question: it demands considerable resources, time, and money, and it is not easy to conduct unless a foundation of prior research has been built. Even so, it is widely used in research institutes and commercial industries because it yields the most conclusive results within the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it reduces bias in the results of the experiment and supports valid estimation of the cause-and-effect relationship in the group of interest.

Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

There are 3 types of experimental research designs. These are pre-experimental research design, true experimental research design, and quasi-experimental research design.

The differences between an experimental and a quasi-experimental design are: 1. In quasi-experimental research, assignment to groups is non-random, unlike in a true experimental design, where it is random. 2. Experimental research always has a control group; a control group may not always be present in quasi-experimental research.

Experimental research establishes a cause-and-effect relationship by testing a theory or hypothesis using experimental and control groups. In contrast, descriptive research describes a study or a topic by defining its variables and answering the questions related to them.




Reflections on experimental research in medical education

Affiliation.

  • 1 Mayo Clinic College of Medicine, Rochester, Minnesota 55905, USA. [email protected]
  • PMID: 18427941
  • DOI: 10.1007/s10459-008-9117-3

As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently stronger than the pretest-posttest design, provided the study is randomized and the sample is sufficiently large. Third, demonstrating the superiority of an educational intervention in comparison to no intervention does little to advance the art and science of education. Fourth, comparisons involving multifactorial interventions are hopelessly confounded, have limited application to new settings, and do little to advance our understanding of education. Fifth, single-group pretest-posttest studies are susceptible to numerous validity threats. Finally, educational interventions (including the comparison group) must be described in detail sufficient to allow replication.



The World Medical Association

WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects


Adopted by the 18th WMA General Assembly, Helsinki, Finland, June 1964 and amended by the: 29th WMA General Assembly, Tokyo, Japan, October 1975; 35th WMA General Assembly, Venice, Italy, October 1983; 41st WMA General Assembly, Hong Kong, September 1989; 48th WMA General Assembly, Somerset West, Republic of South Africa, October 1996; 52nd WMA General Assembly, Edinburgh, Scotland, October 2000; 53rd WMA General Assembly, Washington DC, USA, October 2002 (Note of Clarification added); 55th WMA General Assembly, Tokyo, Japan, October 2004 (Note of Clarification added); 59th WMA General Assembly, Seoul, Republic of Korea, October 2008; 64th WMA General Assembly, Fortaleza, Brazil, October 2013

1.         The World Medical Association (WMA) has developed the Declaration of Helsinki as a statement of ethical principles for medical research involving human subjects, including research on identifiable human material and data.

The Declaration is intended to be read as a whole and each of its constituent paragraphs should be applied with consideration of all other relevant paragraphs.

2.         Consistent with the mandate of the WMA, the Declaration is addressed primarily to physicians. The WMA encourages others who are involved in medical research involving human subjects to adopt these principles.

General Principles

3.         The Declaration of Geneva of the WMA binds the physician with the words, “The health of my patient will be my first consideration,” and the International Code of Medical Ethics declares that, “A physician shall act in the patient’s best interest when providing medical care.”

4.         It is the duty of the physician to promote and safeguard the health, well-being and rights of patients, including those who are involved in medical research. The physician’s knowledge and conscience are dedicated to the fulfilment of this duty.

5.         Medical progress is based on research that ultimately must include studies involving human subjects.

6.         The primary purpose of medical research involving human subjects is to understand the causes, development and effects of diseases and improve preventive, diagnostic and therapeutic interventions (methods, procedures and treatments). Even the best proven interventions must be evaluated continually through research for their safety, effectiveness, efficiency, accessibility and quality.

7.         Medical research is subject to ethical standards that promote and ensure respect for all human subjects and protect their health and rights.

8.         While the primary purpose of medical research is to generate new knowledge, this goal can never take precedence over the rights and interests of individual research subjects.

9.         It is the duty of physicians who are involved in medical research to protect the life, health, dignity, integrity, right to self-determination, privacy, and confidentiality of personal information of research subjects. The responsibility for the protection of research subjects must always rest with the physician or other health care professionals and never with the research subjects, even though they have given consent.

10.       Physicians must consider the ethical, legal and regulatory norms and standards for research involving human subjects in their own countries as well as applicable international norms and standards. No national or international ethical, legal or regulatory requirement should reduce or eliminate any of the protections for research subjects set forth in this Declaration.

11.       Medical research should be conducted in a manner that minimises possible harm to the environment.

12.       Medical research involving human subjects must be conducted only by individuals with the appropriate ethics and scientific education, training and qualifications. Research on patients or healthy volunteers requires the supervision of a competent and appropriately qualified physician or other health care professional.

13.       Groups that are underrepresented in medical research should be provided appropriate access to participation in research.

14.       Physicians who combine medical research with medical care should involve their patients in research only to the extent that this is justified by its potential preventive, diagnostic or therapeutic value and if the physician has good reason to believe that participation in the research study will not adversely affect the health of the patients who serve as research subjects.

15.       Appropriate compensation and treatment for subjects who are harmed as a result of participating in research must be ensured.

Risks, Burdens and Benefits

16.       In medical practice and in medical research, most interventions involve risks and burdens.

Medical research involving human subjects may only be conducted if the importance of the objective outweighs the risks and burdens to the research subjects.

17.       All medical research involving human subjects must be preceded by careful assessment of predictable risks and burdens to the individuals and groups involved in the research in comparison with foreseeable benefits to them and to other individuals or groups affected by the condition under investigation.

Measures to minimise the risks must be implemented. The risks must be continuously monitored, assessed and documented by the researcher.

18.       Physicians may not be involved in a research study involving human subjects unless they are confident that the risks have been adequately assessed and can be satisfactorily managed.

When the risks are found to outweigh the potential benefits or when there is conclusive proof of definitive outcomes, physicians must assess whether to continue, modify or immediately stop the study.

Vulnerable Groups and Individuals

19.       Some groups and individuals are particularly vulnerable and may have an increased likelihood of being wronged or of incurring additional harm.

All vulnerable groups and individuals should receive specifically considered protection.

20.       Medical research with a vulnerable group is only justified if the research is responsive to the health needs or priorities of this group and the research cannot be carried out in a non-vulnerable group. In addition, this group should stand to benefit from the knowledge, practices or interventions that result from the research.

Scientific Requirements and Research Protocols

21.       Medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and adequate laboratory and, as appropriate, animal experimentation. The welfare of animals used for research must be respected.

22.       The design and performance of each research study involving human subjects must be clearly described and justified in a research protocol.

The protocol should contain a statement of the ethical considerations involved and should indicate how the principles in this Declaration have been addressed. The protocol should include information regarding funding, sponsors, institutional affiliations, potential conflicts of interest, incentives for subjects and information regarding provisions for treating and/or compensating subjects who are harmed as a consequence of participation in the research study.

In clinical trials, the protocol must also describe appropriate arrangements for post-trial provisions.

Research Ethics Committees

23.       The research protocol must be submitted for consideration, comment, guidance and approval to the concerned research ethics committee before the study begins. This committee must be transparent in its functioning, must be independent of the researcher, the sponsor and any other undue influence and must be duly qualified. It must take into consideration the laws and regulations of the country or countries in which the research is to be performed as well as applicable international norms and standards but these must not be allowed to reduce or eliminate any of the protections for research subjects set forth in this Declaration.

The committee must have the right to monitor ongoing studies. The researcher must provide monitoring information to the committee, especially information about any serious adverse events. No amendment to the protocol may be made without consideration and approval by the committee. After the end of the study, the researchers must submit a final report to the committee containing a summary of the study’s findings and conclusions.

Privacy and Confidentiality

24.       Every precaution must be taken to protect the privacy of research subjects and the confidentiality of their personal information.

Informed Consent

25.       Participation by individuals capable of giving informed consent as subjects in medical research must be voluntary. Although it may be appropriate to consult family members or community leaders, no individual capable of giving informed consent may be enrolled in a research study unless he or she freely agrees.

26.       In medical research involving human subjects capable of giving informed consent, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, post-study provisions and any other relevant aspects of the study. The potential subject must be informed of the right to refuse to participate in the study or to withdraw consent to participate at any time without reprisal. Special attention should be given to the specific information needs of individual potential subjects as well as to the methods used to deliver the information.

After ensuring that the potential subject has understood the information, the physician or another appropriately qualified individual must then seek the potential subject’s freely-given informed consent, preferably in writing. If the consent cannot be expressed in writing, the non-written consent must be formally documented and witnessed.

All medical research subjects should be given the option of being informed about the general outcome and results of the study.

27.       When seeking informed consent for participation in a research study the physician must be particularly cautious if the potential subject is in a dependent relationship with the physician or may consent under duress. In such situations the informed consent must be sought by an appropriately qualified individual who is completely independent of this relationship.

28.       For a potential research subject who is incapable of giving informed consent, the physician must seek informed consent from the legally authorised representative. These individuals must not be included in a research study that has no likelihood of benefit for them unless it is intended to promote the health of the group represented by the potential subject, the research cannot instead be performed with persons capable of providing informed consent, and the research entails only minimal risk and minimal burden.

29.       When a potential research subject who is deemed incapable of giving informed consent is able to give assent to decisions about participation in research, the physician must seek that assent in addition to the consent of the legally authorised representative. The potential subject’s dissent should be respected.

30.       Research involving subjects who are physically or mentally incapable of giving consent, for example, unconscious patients, may be done only if the physical or mental condition that prevents giving informed consent is a necessary characteristic of the research  group. In such circumstances the physician must seek informed consent from the legally authorised representative. If no such representative is available and if the research cannot be delayed, the study may proceed without informed consent provided that the specific reasons for involving subjects with a condition that renders them unable to give informed consent have been stated in the research protocol and the study has been approved by a research ethics committee. Consent to remain in the research must be obtained as soon as possible from the subject or a legally authorised representative.

31.       The physician must fully inform the patient which aspects of their care are related to the research. The refusal of a patient to participate in a study or the patient’s decision to withdraw from the study must never adversely affect the patient-physician relationship.

32.       For medical research using identifiable human material or data, such as research on material or data contained in biobanks or similar repositories, physicians must seek informed consent for its collection, storage and/or reuse. There may be exceptional situations where consent would be impossible or impracticable to obtain for such research. In such situations the research may be done only after consideration and approval of a research ethics committee.

Use of Placebo

33.       The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best proven intervention(s), except in the following circumstances:

Where no proven intervention exists, the use of placebo, or no intervention, is acceptable; or

Where for compelling and scientifically sound methodological reasons the use of any intervention less effective than the best proven one, the use of placebo, or no intervention is necessary to determine the efficacy or safety of an intervention

and the patients who receive any intervention less effective than the best proven one, placebo, or no intervention will not be subject to additional risks of serious or irreversible harm as a result of not receiving the best proven intervention.

Extreme care must be taken to avoid abuse of this option.

Post-Trial Provisions

34.       In advance of a clinical trial, sponsors, researchers and host country governments should make provisions for post-trial access for all participants who still need an intervention identified as beneficial in the trial. This information must also be disclosed to participants during the informed consent process.

Research Registration and Publication and Dissemination of Results

35.       Every research study involving human subjects must be registered in a publicly accessible database before recruitment of the first subject.

36.       Researchers, authors, sponsors, editors and publishers all have ethical obligations with regard to the publication and dissemination of the results of research. Researchers have a duty to make publicly available the results of their research on human subjects and are accountable for the completeness and accuracy of their reports. All parties should adhere to accepted guidelines for ethical reporting. Negative and inconclusive as well as positive results must be published or otherwise made publicly available. Sources of funding, institutional affiliations and conflicts of interest must be declared in the publication. Reports of research not in accordance with the principles of this Declaration should not be accepted for publication.

Unproven Interventions in Clinical Practice

37.       In the treatment of an individual patient, where proven interventions do not exist or other known interventions have been ineffective, the physician, after seeking expert advice, with informed consent from the patient or a legally authorised representative, may use an unproven intervention if in the physician’s judgement it offers hope of saving life, re-establishing health or alleviating suffering. This intervention should subsequently be made the object of research, designed to evaluate its safety and efficacy. In all cases, new information must be recorded and, where appropriate, made publicly available.


Science and the scientific method: Definitions and examples

Here's a look at the foundation of doing science — the scientific method.

Kids follow the scientific method to carry out an experiment.

The scientific method


Science is a systematic and logical approach to discovering how things in the universe work. It is also the body of knowledge accumulated through the discoveries about all the things in the universe. 

The word "science" is derived from the Latin word "scientia," which means knowledge based on demonstrable and reproducible data, according to the Merriam-Webster dictionary . True to this definition, science aims for measurable results through testing and analysis, a process known as the scientific method. Science is based on fact, not opinion or preferences. The process of science is designed to challenge ideas through research. One important aspect of the scientific process is that it focuses only on the natural world, according to the University of California, Berkeley . Anything that is considered supernatural, or beyond physical reality, does not fit into the definition of science.

When conducting research, scientists use the scientific method to collect measurable, empirical evidence in an experiment related to a hypothesis (often in the form of an if/then statement) that is designed to support or contradict a scientific theory .

"As a field biologist, my favorite part of the scientific method is being in the field collecting the data," Jaime Tanner, a professor of biology at Marlboro College, told Live Science. "But what really makes that fun is knowing that you are trying to answer an interesting question. So the first step in identifying questions and generating possible answers (hypotheses) is also very important and is a creative process. Then once you collect the data you analyze it to see if your hypothesis is supported or not."

Here's an illustration showing the steps in the scientific method.

The steps of the scientific method go something like this, according to Highline College :

  • Make an observation or observations.
  • Form a hypothesis — a tentative description of what's been observed, and make predictions based on that hypothesis.
  • Test the hypothesis and predictions in an experiment that can be reproduced.
  • Analyze the data and draw conclusions; accept or reject the hypothesis or modify the hypothesis if necessary.
  • Reproduce the experiment until there are no discrepancies between observations and theory. "Replication of methods and results is my favorite step in the scientific method," Moshe Pritsker, a former post-doctoral researcher at Harvard Medical School and CEO of JoVE, told Live Science. "The reproducibility of published experiments is the foundation of science. No reproducibility — no science."

Some key underpinnings to the scientific method:

  • The hypothesis must be testable and falsifiable, according to North Carolina State University . Falsifiable means that there must be a possible negative answer to the hypothesis.
  • Research must involve deductive reasoning and inductive reasoning . Deductive reasoning is the process of using true premises to reach a logical true conclusion while inductive reasoning uses observations to infer an explanation for those observations.
  • An experiment should include an independent variable (which the researcher changes) and a dependent variable (which is measured and may change in response), according to the University of California, Santa Barbara.
  • An experiment should include an experimental group and a control group. The control group is what the experimental group is compared against, according to Britannica .

The process of generating and testing a hypothesis forms the backbone of the scientific method. When an idea has been confirmed over many experiments, it can be called a scientific theory. While a theory provides an explanation for a phenomenon, a scientific law provides a description of a phenomenon, according to The University of Waikato . One example would be the law of conservation of energy, which is the first law of thermodynamics that says that energy can neither be created nor destroyed. 

A law describes an observed phenomenon, but it doesn't explain why the phenomenon exists or what causes it. "In science, laws are a starting place," said Peter Coppinger, an associate professor of biology and biomedical engineering at the Rose-Hulman Institute of Technology. "From there, scientists can then ask the questions, 'Why and how?'"

Laws are generally considered to be without exception, though some laws have been modified over time after further testing found discrepancies. For instance, Newton's laws of motion describe everything we've observed in the macroscopic world, but they break down at the subatomic level.

This does not mean theories are not meaningful. For a hypothesis to become a theory, scientists must conduct rigorous testing, typically across multiple disciplines by separate groups of scientists. Saying something is "just a theory" confuses the scientific definition of "theory" with the layperson's definition. To most people a theory is a hunch. In science, a theory is the framework for observations and facts, Tanner told Live Science.

This Copernican heliocentric solar system, from 1708, shows the orbit of the moon around the Earth, and the orbits of the Earth and planets round the sun, including Jupiter and its moons, all surrounded by the 12 signs of the zodiac.

The earliest evidence of science can be found as far back as records exist. Early tablets contain numerals and information about the solar system , which were derived by using careful observation, prediction and testing of those predictions. Science became decidedly more "scientific" over time, however.

1200s: Robert Grosseteste developed the framework for the proper methods of modern scientific experimentation, according to the Stanford Encyclopedia of Philosophy. His works included the principle that an inquiry must be based on measurable evidence that is confirmed through testing.

1400s: Leonardo da Vinci began his notebooks in pursuit of evidence that the human body is microcosmic. The artist, scientist and mathematician also gathered information about optics and hydrodynamics.

1500s: Nicolaus Copernicus advanced the understanding of the solar system with his discovery of heliocentrism. This is a model in which Earth and the other planets revolve around the sun, which is the center of the solar system.

1600s: Johannes Kepler built upon those observations with his laws of planetary motion. Galileo Galilei improved on a new invention, the telescope, and used it to study the sun and planets. The 1600s also saw advancements in the study of physics as Isaac Newton developed his laws of motion.

1700s: Benjamin Franklin discovered that lightning is electrical. He also contributed to the study of oceanography and meteorology. The understanding of chemistry also evolved during this century as Antoine Lavoisier, dubbed the father of modern chemistry , developed the law of conservation of mass.

1800s: Milestones included Alessandro Volta's discoveries regarding the electrochemical series, which led to the invention of the battery. John Dalton also introduced atomic theory, which stated that all matter is composed of atoms that combine to form molecules. The basis of the modern study of genetics advanced as Gregor Mendel unveiled his laws of inheritance. Later in the century, Wilhelm Conrad Röntgen discovered X-rays, while Georg Ohm's law provided the basis for understanding how to harness electrical charges.

1900s: The discoveries of Albert Einstein, who is best known for his theory of relativity, dominated the beginning of the 20th century. Einstein's theory of relativity is actually two separate theories. His special theory of relativity, which he outlined in a 1905 paper, "On the Electrodynamics of Moving Bodies," concluded that time must change according to the speed of a moving object relative to the frame of reference of an observer. His second theory of general relativity, which he published as "The Foundation of the General Theory of Relativity," advanced the idea that matter causes space to curve.

In 1952, Jonas Salk developed the polio vaccine , which reduced the incidence of polio in the United States by nearly 90%, according to Britannica . The following year, James D. Watson and Francis Crick discovered the structure of DNA , which is a double helix formed by base pairs attached to a sugar-phosphate backbone, according to the National Human Genome Research Institute .

2000s: The 21st century saw the first draft of the human genome completed, leading to a greater understanding of DNA. This advanced the study of genetics, its role in human biology and its use as a predictor of diseases and other disorders, according to the National Human Genome Research Institute .

  • This video from City University of New York delves into the basics of what defines science.
  • Learn about what makes science science in this book excerpt from Washington State University .
  • This resource from the University of Michigan — Flint explains how to design your own scientific study.

Merriam-Webster Dictionary, Scientia. 2022. https://www.merriam-webster.com/dictionary/scientia

University of California, Berkeley, "Understanding Science: An Overview." 2022. ​​ https://undsci.berkeley.edu/article/0_0_0/intro_01  

Highline College, "Scientific method." July 12, 2015. https://people.highline.edu/iglozman/classes/astronotes/scimeth.htm  

North Carolina State University, "Science Scripts." https://projects.ncsu.edu/project/bio183de/Black/science/science_scripts.html  

University of California, Santa Barbara. "What is an Independent variable?" October 31, 2017. http://scienceline.ucsb.edu/getkey.php?key=6045  

Encyclopedia Britannica, "Control group." May 14, 2020. https://www.britannica.com/science/control-group  

The University of Waikato, "Scientific Hypothesis, Theories and Laws." https://sci.waikato.ac.nz/evolution/Theories.shtml  

Stanford Encyclopedia of Philosophy, Robert Grosseteste. May 3, 2019. https://plato.stanford.edu/entries/grosseteste/  

Encyclopedia Britannica, "Jonas Salk." October 21, 2021. https://www.britannica.com/biography/Jonas-Salk

National Human Genome Research Institute, "​Phosphate Backbone." https://www.genome.gov/genetics-glossary/Phosphate-Backbone  

National Human Genome Research Institute, "What is the Human Genome Project?" https://www.genome.gov/human-genome-project/What  

Live Science contributor Ashley Hamer updated this article on Jan. 16, 2022.



  • Open access
  • Published: 03 June 2024

Assessing rates and predictors of cannabis-associated psychotic symptoms across observational, experimental and medical research

  • Tabea Schoeler   ORCID: orcid.org/0000-0003-4846-2741 1 , 2 ,
  • Jessie R. Baldwin 2 , 3 ,
  • Ellen Martin 2 ,
  • Wikus Barkhuizen 2 &
  • Jean-Baptiste Pingault   ORCID: orcid.org/0000-0003-2557-4716 2 , 3  

Nature Mental Health (2024)


  • Outcomes research
  • Risk factors

Cannabis, one of the most widely used psychoactive substances worldwide, can give rise to acute cannabis-associated psychotic symptoms (CAPS). While distinct study designs have been used to examine CAPS, an overarching synthesis of the existing findings has not yet been carried forward. To that end, we quantitatively pooled the evidence on rates and predictors of CAPS ( k  = 162 studies, n  = 210,283 cannabis-exposed individuals) as studied in (1) observational research, (2) experimental tetrahydrocannabinol (THC) studies, and (3) medicinal cannabis research. We found that rates of CAPS varied substantially across the study designs, given the high rates reported by observational and experimental research (19% and 21%, respectively) but not medicinal cannabis studies (2%). CAPS was predicted by THC administration (for example, single dose, Cohen’s d  = 0.7), mental health liabilities (for example, bipolar disorder, d  = 0.8), dopamine activity ( d  = 0.4), younger age ( d  = −0.2), and female gender ( d  = −0.09). Neither candidate genes (for example, COMT , AKT1 ) nor other demographic variables (for example, education) predicted CAPS in meta-analytical models. The results reinforce the need to more closely monitor adverse cannabis-related outcomes in vulnerable individuals as these individuals may benefit most from harm-reduction efforts.


Cannabis, one of the most widely used psychoactive substances in the world, 1 is commonly used as a recreational substance and is increasingly taken for medicinal purposes. 2 , 3 As a recreational substance, cannabis use is particularly prevalent among young people 1 who seek its rewarding acute effects such as relaxation, euphoria, or sociability. 4 When used as a medicinal product, cannabis is typically prescribed to alleviate clinical symptoms in individuals with pre-existing health conditions (for example, epilepsy, multiple sclerosis, chronic pain, nausea. 5 )

Given the widespread use of cannabis, alongside the shifts toward legalization of cannabis for medicinal and recreational purposes, momentum is growing to scrutinize both the potential therapeutic and adverse effects of cannabis on health. From a public health perspective, of particular concern are the increasing rates of cannabis-associated emergency department presentations, 6 the rising levels of THC (tetrahydrocannabinol, the main psychoactive ingredient in cannabis) in street cannabis, 7 the adverse events associated with medicinal cannabis use, 8 and the long-term health hazards associated with cannabis use. 9 In this context, risk of psychosis as a major adverse health outcome related to cannabis use has been studied extensively, suggesting that early-onset and heavy cannabis use constitutes a contributory cause of psychosis. 10 , 11 , 12

More recent research has started to examine the more acute cannabis-associated psychotic symptoms (CAPS) to understand better how individual vulnerabilities and the pharmacological properties of cannabis elicit adverse reactions in individuals exposed to cannabis. Indeed, transient psychosis-like symptoms, including hallucinations or paranoia during cannabis intoxication, are well documented. 5 , 13 , 14 In more rare cases, recreational cannabis users experience severe forms of CAPS, 15 requiring emergency medical treatment as a result of acute CAPS. 16 In addition, acute psychosis following THC administration has been documented in medicinal cannabis trials and experimental studies, 17 , 18 , 19 suggesting that CAPS can also occur in more-controlled environments.

While numerous studies have provided evidence on CAPS in humans, no research has yet synthesized and compared the findings obtained from different study designs and populations. More specifically, three distinct study types have focused on CAPS: (1) observational studies assessing the subjective experiences of cannabis intoxication in recreational cannabis users, (2) experimental challenge studies administering THC in healthy volunteers, and (3) medicinal cannabis studies documenting adverse events when testing medicinal cannabis products in individuals with pre-existing health conditions. As such, the availability of these three distinct lines of evidence provides a unique research opportunity as their findings can be synthesized, be inspected for convergence, and ultimately, contribute to more evidence-based harm-reduction initiatives.

In this work, we therefore aim to perform a quantitative synthesis of all existing evidence examining CAPS to advance our understanding concerning the rates and predictors of CAPS: First, it is currently unknown how common CAPS are among individuals exposed to cannabis. While rates of CAPS are reported by numerous studies, estimates vary substantially (for example, from <1% (ref. 20 ) to 70% (ref. 21 )) and may differ depending on the assessed symptom profile (for example, cannabis-associated hallucinations versus cannabis-associated paranoia), the study design (for example, observational versus experimental research), and the population (for example, healthy volunteers versus medicinal cannabis users). Second, distinct study designs have scrutinized similar questions concerning the risks involved in CAPS. As such, comparisons of the results from one study design (for example, observational studies, assessing self-reported cannabis use in recreational users 22 , 23 ) with another study design (for example, experimental studies administering varying doses of THC 24 , 25 ) can be used to triangulate findings on a given risk factor of interest (for example, potency of cannabis). Finally, studies focusing on predictors of CAPS typically assess hypothesized risk factors in isolation. Pooling all existing evidence across different risk factors therefore provides a more complete picture of the relative magnitude of the individual risk factors involved in CAPS.

In summary, this work sets out to synthesize all of the available evidence on CAPS across three lines of research. In light of increasingly liberal cannabis policies around the world, alongside the rising levels of THC in cannabis, such efforts are key to informing harm-reduction strategies and future research avenues for public health. Considering that individuals presenting with acute cannabis-induced psychosis are at high risk of converting to a psychotic disorder (for example, rates ranging between 18% (ref. 26 ) and 45% (ref. 27 )), a deeper understanding of factors predicting CAPS would contribute to our understanding of the risk of long-term psychosis in the context of cannabis use.

Of 20,428 published studies identified by the systematic search, 162 were included in this work. The reasons for exclusion are detailed in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram (Fig. 1; see Supplementary Fig. 1 for a breakdown of the number of independent participants included in the different analytical models). The PRISMA reporting checklist is included in the Supplementary Results. At the full-text screening stage, the majority of studies were excluded because they did not report data on CAPS (83.88% of all excluded studies). Figure 2 displays the number of published studies included (k) and the number of (non-overlapping) study participants (n) per study design, highlighting that, of all participants included in this meta-analysis (n = 210,283), most took part in observational research (n = 174,300; 82.89%), followed by studies assessing medicinal cannabis products (n = 33,502; 15.93%), experimental studies administering THC (n = 2,009; 0.96%), and quasi-experimental studies (n = 472; 0.22%). Screening of 10% of the studies at the full-text stage by an independent researcher (E.M.) did not identify any missed studies.

Figure 1. Flow chart as adapted from the PRISMA flow chart ( http://www.prisma-statement.org/ ). Independent study participants are defined as the maximum number of participants available for an underlying study sample assessed in one or more of the included studies.

Figure 2. Number of included studies per year of publication and study design, including observational research assessing recreational cannabis users, experimental studies administering THC in healthy volunteers, and medicinal studies assessing adverse events in individuals taking cannabis products for medicinal use. Quasi-experimental research involved research testing the effects of THC administration in a naturalistic setting. 23 , 62 k, number of studies; n, number of (non-overlapping) study participants.

Rates of CAPS across the three study designs

A total of 99 studies published between 1971 and 2023 reported data on rates of CAPS and were included in the analysis, comprising 126,430 individuals from independent samples. Convergence of the data extracted by the two researchers (T.S. and W.B.) was high for the pooled rates of CAPS from observational studies (rate_DIFF = −0.01%, where rate_DIFF = rate_TS − rate_WB), experimental studies (rate_DIFF = 0%), and medicinal cannabis studies (rate_DIFF = 0%). More specifically, we included data from 41 observational studies (n = 92,888 cannabis users), 19 experimental studies administering THC (n = 754), and 79 studies assessing efficacy and tolerability of medicinal cannabis products containing THC (n = 32,821). In medicinal trials, the most common conditions treated with THC were pain (k = 19 (23.75%)) and cancer (k = 16 (20%)) (see Supplementary Table 1 for an overview). The age distribution of the included participants was similar in observational studies (mean age = 24.47 years, ranging from 16.6 to 34.34 years) and experimental studies (mean age = 25.1 years, ranging from 22.47 to 27.3 years). Individuals taking part in medicinal trials were substantially older (mean age = 48.16 years, ranging from 8 to 74.5 years).

As summarized in Fig. 3 and Supplementary Table 3 , substantial rates of CAPS were reported by observational studies (19.4%, 95% confidence interval (CI): 14.2%, 24.6%) and THC-challenge studies (21%, 95% CI: 11.3%, 30.7%), but not medicinal cannabis studies (1.5%, 95% CI: 1.1%, 1.9%). The pooled rates estimated for different symptom profiles of CAPS (CAPS – paranoia, CAPS – hallucinations, CAPS – delusions) are displayed in Supplementary Fig. 2 . All individual study estimates are listed in Supplementary Table 2 .

Figure 3. Pooled rates of CAPS across the three different study designs. Estimates on the y axis are the rates (in %, 95% confidence interval) obtained from models pooling together estimates of rates of CAPS (including psychosis-like symptoms, paranoia, hallucinations, and delusions) per study design.

Most models showed significant levels of heterogeneity (Supplementary Table 3), highlighting that rates of CAPS differed as a function of study-specific features. Risk of publication bias was indicated (P_Peters < 0.05) for one of the meta-analytical models combining all rates of CAPS (see funnel plots, Supplementary Fig. 2). Applying the trim-and-fill method slightly reduced the pooled rate of CAPS obtained from medicinal cannabis studies (rate_unadjusted = 1.53%; rate_adjusted = 1.18%). Finally, Fig. 4 summarizes rates of CAPS for the subset of studies in which CAPS was defined as the occurrence of a full-blown cannabis-associated psychotic episode (as described in Table 1). When combined, the rate of CAPS (full episode) was 0.52% (0.42–0.62%) across the three study designs, highlighting that around one in 200 individuals experienced a severe episode of psychosis when exposed to cannabis/THC. Rates of CAPS (full episode) as reported by the individual studies showed high levels of consistency (I² = 8%, P(I²) = 0.45; Fig. 4).

Figure 4. Studies reporting rates of cannabis-associated psychosis (full episode). Depicted in violet are the individual study estimates (in %, 95% confidence interval) from studies reporting rates of (full-blown) cannabis-associated psychotic episodes. Included are studies using medicinal cannabis, observational, or experimental samples. The pooled meta-analyzed estimate is colored in blue. The I² statistic (scale of 0 to 100) indexes the level of heterogeneity across the estimates included in the meta-analysis.

Predictors of cannabis-associated psychotic symptoms

Assessing predictors of CAPS, we included 103 studies published between 1976 and 2023, corresponding to 80 independent samples (n = 170,158 non-overlapping individuals). In total, we extracted 381 Cohen's d estimates, which were pooled in 44 separate meta-analytical models. A summary of all extracted study estimates is provided in Supplementary Table 4. Comparing the P values of the individual Cohen's d estimates with the original P values as reported in the studies revealed a high level of concordance (r = 0.96, P = 1.1 × 10⁻⁷⁹), indicating that the conversion of the raw study estimates to a common metric did not result in a substantial loss of information. Comparing the results obtained from the data extracted by two researchers (T.S. and W.B.) identified virtually no inconsistencies when inspecting estimates of Cohen's d, as obtained for severity of cannabis use on CAPS (d_DIFF = 0, where d_DIFF = d_TS − d_WB), gender (d_DIFF = 0), administration of (placebo-controlled) medicinal cannabis (d_DIFF = 0.003), psychosis liability (d_DIFF = 0), and administration of a single dose of THC (d_DIFF = 0).

Figure 5 summarizes the results obtained from the meta-analytical models. We examined whether CAPS was predicted by the pharmacodynamic properties of cannabis, a person's cannabis use history, demographic factors, mental health/personality traits, neurotransmitters, genetics, and use of other drugs. With respect to the pharmacodynamic properties of cannabis, the largest effect on CAPS severity was present for a single dose of THC (d = 0.7, 95% CI: 0.52, 0.87) as administered in experimental studies, followed by a significant dose–response effect of THC on CAPS (d = 0.42, 95% CI: 0.25, 0.59, that is, tested as moderation effects of THC dose in experimental studies). When tested in medicinal randomized controlled trials, cannabis products significantly increased symptoms of CAPS (d = 0.14, 95% CI: 0.05, 0.23), albeit by a smaller magnitude. Protective effects were present for low THC-COOH levels (d = −0.22, 95% CI: −0.39, −0.05; THC-COOH is an inactive THC metabolite), but not for the THC/CBD (cannabidiol) ratio (d = −0.19, 95% CI: −0.43, 0.05, P = 0.13).

Figure 5. Summary of pooled Cohen's d, the corresponding 95% confidence intervals, and P values (two-sided, uncorrected for multiple testing). Positive estimates of Cohen's d indicate increases in CAPS in response to the assessed predictor. Details regarding the classification and interpretation of each predictor are provided in the Supplementary Information. The reference list of all studies included in this figure is provided in Supplementary Table 4. NS, neurotransmission.

Less clear were the findings with respect to the cannabis use history of the participants and its effect on CAPS. Here, neither young age of onset of cannabis use, high-frequency cannabis use, nor the preferred type of cannabis (strains high in THC, strains high in CBD) was associated with CAPS. The only demographic factors that significantly predicted CAPS were age (d = −0.17, 95% CI: −0.292, −0.050) and gender (d = −0.09, 95% CI: −0.180, −0.001), indicating that younger and female cannabis users report higher levels of CAPS compared with older and male users. With respect to mental health and personality, the strongest predictors of CAPS were diagnosis of bipolar disorder (d = 0.8, 95% CI: 0.54, 1.06) and psychosis liability (d = 0.49, 95% CI: 0.21, 0.77), followed by mood problems (anxiety d = 0.44, 95% CI: 0.03, 0.84; depression d = 0.37, 95% CI: 0.003, 0.740) and addiction liability (d = 0.26, 95% CI: 0.14, 0.38). Summarizing the evidence from studies looking at neurotransmitter functioning showed that increased dopamine activity significantly predicted CAPS (d = 0.4, 95% CI: 0.16, 0.64) (for example, reduced CAPS following administration of D2 blockers such as olanzapine 28 or haloperidol 29 ). By contrast, alterations in the opioid system did not reduce risk of CAPS. Similarly, none of the assessed candidate genes showed evidence of altering response to cannabis. Finally, out of 11 psychoactive substances with available data, only use histories of MDMA (3,4-methylenedioxymethamphetamine) (d = 0.2, 95% CI: 0.03, 0.36), crack (d = 0.13, 95% CI: 0.03, 0.23), inhalants (d = 0.12, 95% CI: 0.03, 0.22), and sedatives (d = 0.12, 95% CI: 0.02, 0.22) were linked to increases in CAPS.

Most of the meta-analytical models showed considerable levels of heterogeneity (I² > 80%; Supplementary Table 5), notably when summarizing findings from observational studies (for example, severity of cannabis use: I² = 98%, age of onset of cannabis use: I² = 98%), highlighting that the individual effect estimates varied substantially across studies. By contrast, lower levels of heterogeneity were present when pooling evidence from experimental and medicinal cannabis studies (for example, effects of medicinal cannabis: I² = 18%; THC dose–response effects: I² = 37%). While risk of publication bias was indicated for four of the meta-analytical models (Egger's test P < 0.05) (Supplementary Fig. 3), an inspection of trim-and-fill adjusted estimates did not alter the conclusions for (1) administration of a single dose of THC (P_Egger < 0.0001, d_unadjusted = 0.7, d_trim-and-fill = 0.49), (2) CBD administration (P_Egger = 0.0001, d_unadjusted = −0.19, d_trim-and-fill = −0.14, both P < 0.05), (3) psychosis liability (P_Egger = 0.025, d_unadjusted = 0.49, d_trim-and-fill = 0.49), and (4) diagnosis of depression (P_Egger = 0.019, d_unadjusted = 0.37, d_trim-and-fill = 0.54). Outliers were identified for seven meta-analytical models (Supplementary Fig. 4). Removing outliers from the models did not substantially alter the conclusions drawn from them, as indicated for age (d = −0.18, d_corr = −0.14, both P < 0.05), anxiety (d = 0.61, d_corr = 0.47, both P < 0.05), severity of cannabis use (d = 0.19, d_corr = 0.25, both P > 0.05), depression (d = 0.41, d_corr = 0.25, both P > 0.05), gender (d = −0.09, d_corr = −0.12, both P < 0.05), psychosis liability (d = 0.49, d_corr = 0.43, both P < 0.05), and administration of a single dose of THC (d = 0.6, d_corr = 0.56, both P < 0.05). Sensitivity checks assessing whether Cohen's d changed as a function of the within-subject correlation coefficient highlighted that the results were highly concordant (Supplementary Fig. 6). Minor deviations from the main analysis were present for the effects of a single dose of THC (d(r = 0.3) = 0.64 versus d(r = 0.5) = 0.69 versus d(r = 0.7) = 0.77) and dose–response effects of THC (d(r = 0.3) = 0.45 versus d(r = 0.5) = 0.42 versus d(r = 0.7) = 0.39), but this did not alter the interpretation of the findings.

Finally, we assessed consistency of findings for predictors examined in more than one of the different study designs (observational, experimental, and medicinal cannabis studies), as illustrated for four meta-analytical models in Fig. 6 (see Supplementary Fig. 7 for the complete set of results). Triangulating the results highlighted that consistency with respect to the direction of effects was particularly high for the effects of age (d_Experiments = −0.14 versus d_Observational = −0.19 versus d_Quasi-Experimental = −0.16) and gender (d_Experiments = −0.09 versus d_Observational = −0.07 versus d_Quasi-Experimental = −0.25) on CAPS. By contrast, little consistency across the different study designs was present with respect to cannabis use histories, notably age of onset of cannabis use (d_Observational = −0.3 versus d_Quasi-Experimental = 0.24) and use of high-THC cannabis (d_Observational = 0.12 versus d_Quasi-Experimental = −0.13).

Figure 6. Pooled estimates of Cohen's d when estimated separately for each of the different study designs. The I² statistic (scale of 0 to 100) indexes the level of heterogeneity across the estimates included in the meta-analysis.

In this work, we examined rates and predictors of acute CAPS by synthesizing evidence from three distinct study designs: observational research, experimental studies administering THC, and studies testing medicinal cannabis products. Our results led to a number of key findings regarding the risk of CAPS in individuals exposed to cannabis. First, significant rates of CAPS were reported by all three study designs. This indicates that a risk of acute psychosis-like symptoms exists after exposure to cannabis, irrespective of whether it is used recreationally, administered in controlled experiments, or prescribed as a medicinal product. Second, rates of CAPS vary across the different study designs, with substantially higher rates of CAPS in observational and experimental samples than in medicinal cannabis samples. Third, not every individual exposed to cannabis is equally at risk of CAPS, as the interplay between individual differences and the pharmacological properties of the cannabis likely plays an important role in modulating risk. In particular, risk appears most amplified in vulnerable individuals (for example, young age, pre-existing mental health problems) and increases with higher doses of THC (as shown in experimental studies).

Rates of cannabis-associated psychotic symptoms

Summarizing the existing evidence on rates of CAPS, we find that cannabis can acutely induce CAPS in a subset of cannabis-exposed individuals, irrespective of whether it is used recreationally, administered in controlled experiments, or prescribed as a medicinal product. Importantly, rates of CAPS varied substantially across the designs. More specifically, similar rates of CAPS were reported by observational and experimental evidence (around 19% and 21% in cannabis-exposed individuals, respectively), while considerably lower rates of CAPS were documented in medicinal cannabis samples (between 1% and 2%).

A number of factors likely contribute to the apparently different rates of CAPS across the three study designs. First, rates of CAPS are not directly comparable as different, design-specific measures were used: in observational/experimental research, CAPS is typically defined as the occurrence of transient cannabis-induced psychosis-like symptoms, whereas medicinal trials screen for CAPS as the occurrence of first-rank psychotic symptoms, often resulting in treatment discontinuation. 20 , 30 , 31 As such, transient CAPS may indeed occur commonly in cannabis-exposed individuals (as evident in the higher rates in observational/experimental research), while risk of severe CAPS requiring medical attention is less frequently reported (resulting in lower reported rates in medicinal cannabis samples). This converges with our meta-analytic results, showing that severe CAPS (full psychotic episode) may occur in about 1 in 200 (0.5%) cannabis users. Another key difference between medicinal trials and experimental/observational research lies in the demographic profile of participants recruited into the studies. For example, individuals taking part in medicinal trials were substantially older (mean age: 48 years) compared with subjects taking part in observational or experimental studies (mean age: 24 and 25 years, respectively). As such, older age may have buffered some of the adverse effects reported by adolescent individuals. Finally, cannabis products used in medicinal trials contain noticeable levels of CBD (for example, Sativex, with a THC/CBD ratio of approximately 1:1), a ratio different from that typically found in street cannabis (for example, >15% THC and <1% CBD 32 ) and in the experimental studies included in our meta-analyses (pure THC). As such, the use of medicinal cannabis (as opposed to street cannabis) may constitute a somewhat safer option. However, the potentially protective effects of CBD in this context require further investigation as we did not find a consistent effect of CBD co-administration on THC-induced psychosis-like symptoms. While earlier experimental studies included in our work were suggestive of protective effects of CBD, 33 , 34 , 35 two recent studies did not replicate these findings. 36 , 37

Interestingly, lower but significant rates of CAPS were also observed in the placebo groups of THC-challenge studies (25% after THC versus 11% after placebo) and medicinal cannabis trials (3% with active cannabis products versus 1% with placebo), highlighting that psychotic symptoms occur not only in the context of cannabis exposure. This is in line with the notion that cannabis use can increase risk of psychosis but appears to be neither a sufficient nor a necessary cause for the emergence of psychotic symptoms. 38

Predictors of CAPS

Summarizing evidence on predictors of CAPS, we found that individual vulnerabilities and the pharmacological properties of cannabis both appear to play an important role in modulating risk. Regarding the pharmacological properties of cannabis, evidence from experimental studies showed that the administration of THC increases risk of CAPS, both after a single dose and in a dose-dependent manner. Given the nature of the experimental design, these effects are independent of potential confounders that bias estimates obtained from observational studies. More challenging to interpret are therefore findings on individual cannabis use histories (for example, frequency/severity of cannabis use, age of onset of use, preferred cannabis strain) as assessed in observational studies. Contrary to evidence linking high-frequency and early-onset cannabis use to long-term risk of psychosis, 39 none of these factors was associated with CAPS in our study. This discrepancy may indicate that cumulative effects of THC exposure are expressed differently for long-term risk of psychosis and acute CAPS: while users accustomed to cannabis may show a more blunted acute response as a result of tolerance, they are nevertheless at a higher risk of developing the clinical manifestation of psychosis in the long run. 38

We also tested a number of meta-analytical models for predictors tapping into demographic and mental health dimensions. Interestingly, among the assessed demographic factors, only age and gender were associated with CAPS, with younger and female individuals reporting increased levels of CAPS. Other factors often linked to mental health, such as education or socioeconomic status, were not related to CAPS. Concerning predictors indexing mental health, we found converging evidence showing that a predisposition to psychosis increased the risk of experiencing CAPS. In addition, individuals with other pre-existing mental health vulnerabilities (for example, bipolar disorder, depression, anxiety, addiction liability) also showed a higher risk of CAPS, indicating that risk may stem partly from a common vulnerability to mental health problems.

These findings align with results from studies focusing on the biological correlates of CAPS, showing that increased activity of dopamine, a neurotransmitter implicated in the etiology of psychosis, 40 altered sensitivity to cannabis. By contrast, none of the a priori selected candidate genes (chosen mostly to index schizophrenia liability) modulated risk of CAPS. This meta-analytic finding is consistent with results from the largest available genome-wide association study on schizophrenia, 41 in which none of the candidate genes reached genome-wide significance (P < 5 × 10⁻⁸) (Supplementary Information). Instead, as for any complex trait, the genetic risk underlying CAPS is likely to be more polygenic in nature, possibly converging on pathways yet to be identified. As such, genetic testing companies that screen for the aforementioned genetic variants to provide their customers with an individualized risk profile (such as the Cannabis Genetic Test offered by Lobo Genetics ( https://www.lobogene.com )) are unlikely to fully capture the genetic risk underlying CAPS. Similarly, genetic counseling programs targeting specifically AKT1 allele carriers in the context of cannabis use 42 may be of only limited use when trying to reduce cannabis-associated harms.

Implications for research on cannabis use and psychosis

This work has a number of implications for future research avenues. First, experimental studies administering THC constitute the most stringent available causal inference method when studying risk of CAPS. Future studies should therefore capitalize on experimental designs, building on recent work, to advance our understanding of the acute pharmacological effects of cannabis in terms of standard cannabis units, 43 dose–response risk profiles, 44 and the interplay of different cannabinoids. 44 , 45

Despite the value of experimental studies for causal inference, observational studies are essential to identify predictors of CAPS that cannot be experimentally manipulated (for example, age, long-term/chronic exposure to cannabis) and to strengthen external validity. However, a particular challenge for inference from observational studies results from bias due to confounding and reverse causation. Triangulating and comparing findings across study designs can therefore help to identify potential sources of bias that are specific to the different study designs. 46 For example, although THC dosing was robustly associated with CAPS in experimental studies, we did not find an association between cannabis use patterns (for example, use of high-THC cannabis strains) and CAPS in observational and quasi-experimental studies. This apparent inconsistency may result from THC effects that are blunted by long-term, early-onset, and heavy cannabis use. For other designs, reverse causation may bias the association between cannabis use patterns and CAPS: as individuals may reduce cannabis consumption as a result of adverse acute effects, 47 the interpretation of cross-sectional estimates concerning different cannabis exposures and risk of CAPS is particularly challenging. Future observational studies should therefore exploit more robust causal inference methods (for example, THC administration in naturalistic settings 48 or within-subject comparisons controlling for time-invariant confounds 49 ) to better approximate the experimental design. In particular, innovative designs that provide a higher temporal resolution on cannabis exposures and related experiences (for example, experience sampling, 50 assessing daily reactivity to cannabis 51 ) are a valuable addition to the causal inference toolbox for cannabis research. Applying genetically informed causal inference methods such as Mendelian randomization 52 can further help to triangulate findings, which will become possible once genome-wide summary results for both different cannabis use patterns and CAPS are available.

With respect to medicinal trials, it is important to note that the assessment of CAPS has not been a primary research focus. Although psychotic events are recognized as a potential adverse reaction to medicinal cannabis, 53 data on CAPS are rarely reported by medicinal trials: only about 20% of medicinal cannabis randomized controlled trials screen for psychosis as a potential adverse effect. 5 As such, trials should systematically monitor CAPS, in addition to conducting longer-term follow-ups assessing the risk of psychosis as a result of medicinal cannabis use. In particular, validated instruments designed to capture more-subtle changes in CAPS should be included in trials to more adequately assess adverse reactions associated with medicinal cannabis products.

Second, with respect to factors associated with risk of CAPS, we find that these are similar to factors associated with onset of psychosis, notably pre-existing mental health vulnerabilities, 54 dose–response effects of cannabis, 55 and young age. 12 The key question deserving further attention is therefore whether CAPS constitutes, per se, a risk marker for long-term psychosis. Preliminary evidence found that, among individuals with recent-onset psychosis, 37% reported having experienced their first psychotic symptoms during cannabis intoxication. 56 Future longitudinal evidence building on this is required to determine whether subclinical cannabis-associated psychotic symptoms can help to identify users at high risk of developing psychosis in the long run. Follow-up research should also examine longitudinal trajectories of adverse cannabis-induced experiences and the distress associated with these experiences, given research suggesting that high levels of distress/persistence may constitute a marker of the clinical relevance of psychotic-like experiences. 57 While few studies have explored this question in the context of CAPS, there is, for example, evidence suggesting that the level of distress caused by acute adverse reactions to cannabis may depend on the specific symptom dimension. 58 Here, the highest levels of distress resulted from cannabis-associated paranoia and anxiety, rather than cannabis-associated hallucinations or experiences tapping into physical sensations (for example, body humming, numbness). In addition, some evidence highlights the re-occurring nature of CAPS in cannabis-exposed individuals. 22 , 58 Further research focusing on individuals with persisting symptoms of CAPS may therefore help to advance our knowledge concerning individual vulnerabilities underlying the development of long-term psychosis in the context of cannabis use.

Importantly, our synthesis is not immune to the sources of bias that exist for the different study designs, and our findings should therefore be considered in light of the aforementioned limitations (for example, residual confounding or reverse causation in observational studies, limited external validity in experimental studies). Nevertheless, comparing findings across the different study designs allowed us to pin down areas of inconsistency, which existed mostly with regard to cannabis-related parameters (for example, age of onset, frequency of use) and CAPS. In addition, we observed high levels of heterogeneity in most meta-analytical models, highlighting that study-specific findings may vary as a result of different sample characteristics and study methodologies. Future studies aiming to further discern potential sources of variation, such as study design features (for example, treatment length in medicinal trials, route of THC administration in experimental studies), statistical modeling (for example, the type of confounding factors considered in observational research), and sample demographics (for example, age of the participants, previous experience with cannabis), are therefore essential when studying CAPS.

Conclusions

Our results demonstrate that cannabis can induce acute psychotic symptoms in individuals using cannabis for recreational or medicinal purposes. Some individuals appear to be particularly sensitive to the adverse acute effects of cannabis, notably young individuals with pre-existing mental health problems and individuals exposed to high levels of THC. Future studies should therefore more closely monitor adverse cannabis-related outcomes in vulnerable individuals, as these individuals may benefit most from harm-reduction efforts.

Systematic search

A systematic literature search was performed in three databases (MEDLINE, EMBASE, and PsycInfo) following the PRISMA guidelines. 59 The final search was conducted on 6 December 2023 using 26 search terms indexing cannabis/THC and 20 terms indexing psychosis-like outcomes or cannabis-intoxication experiences (see Supplementary Information for a complete list of search terms). Search terms were chosen on the basis of terminology used in studies assessing CAPS, including observational studies (self-reported cannabis-induced psychosis-like experiences), THC-challenge studies (testing change in psychosis-like symptoms following THC administration), and medicinal studies testing the efficacy and safety of medicinal cannabis products (adverse events related to medicinal cannabis). Before screening the identified studies for inclusion, we removed non-relevant article types (reviews, case reports, comments, guidelines, editorials, letters, newspaper articles, book chapters, dissertations, conference abstracts) and duplicates using the R package revtools 60 . A senior researcher experienced in meta-analyses on cannabis use (T.S.) then reviewed all titles and abstracts for their relevance before conducting full-text screening. To reduce the risk of wrongful inclusion at the full-text screening stage, 10% of the articles selected for full-text screening were cross-checked for eligibility by a second researcher (E.M.).
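To make the de-duplication step concrete, here is a minimal R sketch using the revtools package cited above. The file names are placeholders, and the specific arguments shown (title matching, lower-casing) are assumptions for illustration rather than the authors' actual script.

```r
library(revtools)

# Placeholder database exports (hypothetical file names)
refs <- read_bibliography(c("medline.ris", "embase.ris", "psycinfo.ris"))

# Flag likely duplicate records by matching lower-cased titles
matches <- find_duplicates(refs, match_variable = "title", to_lower = TRUE)

# Keep one record per matched group before title/abstract screening
unique_refs <- extract_unique_references(refs, matches)
```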

Data extraction

We included all study estimates that could be used to derive rates of CAPS (the proportion of cannabis-exposed individuals reporting CAPS) or effect sizes (Cohen’s d ) for factors predicting CAPS. CAPS was defined as the occurrence of hallucinations, paranoia, and/or delusions during cannabis intoxication. These symptom-level items have been identified as the most reliable self-report measures screening for psychosis when validated against clinical interview measures. 61 Table 1 provides examples of CAPS as measured across the three different study designs. In brief, from observational studies, we extracted data if CAPS was assessed in cannabis-exposed individuals on the basis of self-report measures screening for subjective experiences while under the influence of cannabis. From experimental studies administering THC, CAPS was measured as the degree of psychotic symptom change in response to THC, either estimated from a between-subject (placebo groups versus THC group) or within-subject (pre-THC versus post-THC assessment) comparison. We also included data from natural experiments (referred to as quasi-experimental studies hereafter), where psychosis-like experiences were monitored in recreational cannabis users before and after they consumed their own cannabis products. 23 , 62 Finally, with respect to trials testing the efficacy and/or safety of medicinal cannabis products containing THC, we extracted data on adverse events, including the occurrence of psychosis, hallucinations, delusions, and/or paranoia during treatment with medicinal cannabis products. Medicinal studies that tested the effects of cannabis products not containing THC (for example, CBD only, olorinab, lenabasum) were not included.

For 10% of the included studies, data on rates and predictors of CAPS were extracted by a second researcher (W.B.), and agreement between the two extracted datasets was assessed by comparing the pooled estimates on rates and predictors of CAPS. In addition, following recommendations for improved reproducibility and transparency in meta-analytical works, 63 we provide all extracted data, the corresponding analytical scripts, and transformation information in the study repository.

Statistical analysis

Rates of CAPS

We extracted the raw estimates of rates of CAPS as reported by observational, experimental, and medicinal cannabis studies. Classification of CAPS differs across the three study designs. In observational studies, occurrence of CAPS is typically defined as the experience of psychotic-like symptoms while under the influence of cannabis. In experimental studies administering THC, CAPS is commonly defined as a clinically significant change in psychotic symptom severity (for example, an increase of ≥3 points in Positive and Negative Syndrome Scale positive scores following THC 33 ). Finally, in medicinal cannabis samples, a binary measure of CAPS indicates whether psychotic symptoms occurred as an adverse event throughout the treatment with medicinal cannabis products. We derived rates of CAPS (R_CAPS = X_count of CAPS / N_sample size) and the corresponding confidence intervals using the function BinomCI with the Clopper–Pearson method, as implemented in the R package DescTools. 64 To estimate the pooled proportions, we fitted random-effects models or multilevel random-effects models as implemented in the R package metafor. 65 Multilevel random-effects models were used whenever accounting for non-independent sampling errors was necessary (described further below). Risk of publication bias was assessed using Peters' test 66 and funnel plots and, if indicated (P_Peters < 0.05), corrected using the trim-and-fill method (Supplementary Methods).
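As an illustration of this workflow, the sketch below uses the packages named above (DescTools and metafor) with invented study counts; pooling on the raw proportion scale and running the Peters-style test via regtest() are assumptions for illustration, not a reproduction of the authors' analysis script.

```r
library(DescTools)  # BinomCI()
library(metafor)    # escalc(), rma(), regtest(), trimfill()

# Hypothetical per-study counts: x = participants reporting CAPS, n = cannabis-exposed participants
dat <- data.frame(study = c("A", "B", "C"),
                  x     = c(12, 30, 2),
                  n     = c(80, 150, 200))

# Study-level rates R_CAPS = x/n with Clopper-Pearson 95% confidence intervals
BinomCI(dat$x, dat$n, method = "clopper-pearson")

# Random-effects pooling of the proportions (here on the raw proportion scale)
es  <- escalc(measure = "PR", xi = x, ni = n, data = dat)
res <- rma(yi, vi, data = es)
summary(res)

# Peters-style small-study test (effect size regressed on 1/n) and trim-and-fill adjustment
regtest(res, predictor = "ninv")
trimfill(res)
```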

To derive the pooled effects of factors predicting CAPS, we converted study estimates to the standardized effect size Cohen's d as a common metric. For studies reporting mean differences, two formulas were used for the conversion. First, for studies reporting mean differences from between-subject comparisons (independent samples), we used the following formula:

$$d = \frac{M_E - M_C}{SD_P}$$

where M_E and M_C are the mean scores on a continuous scale (severity of CAPS), reported for individuals exposed (M_E) and unexposed (M_C) to a certain risk factor (for example, cannabis users with pre-existing mental health problems versus cannabis users without pre-existing mental health problems). The formulas used to derive the pooled standard deviation, SD_P, and the variance of Cohen's d are listed in the Supplementary Methods. Second, an extension of the preceding formula was used to derive Cohen's d from within-subject comparisons, comparing time-point one (M_T1) with time-point two (M_T2). The formula takes into account the dependency between the two groups: 67

$$d = \frac{M_{T2} - M_{T1}}{SD_P}, \qquad SD_P = \frac{SD_{\mathrm{diff}}}{\sqrt{2(1 - r)}}$$

where r indexes the correlation between the pairs of observations, such as the correlation between the pre- and post-THC condition in the same set of individuals for a particular outcome measure. The correlation coefficient was set to r = 0.5 for all studies included in the meta-analysis, on the basis of previous research. 13 We also assessed whether varying within-person correlation coefficients altered the interpretation of the results by re-estimating the pooled Cohen's d for predictors of CAPS for two additional coefficients (r = 0.3 and r = 0.7). The results were then compared with the findings obtained from the main analysis (r = 0.5).
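The R helpers below sketch these two conversions under the assumptions stated above (a pooled standard deviation and a default within-subject correlation of r = 0.5); they implement the standard textbook formulas and are illustrative rather than the authors' extraction code.

```r
# Cohen's d (and its sampling variance) from a between-subject comparison
cohens_d_between <- function(m_e, m_c, sd_e, sd_c, n_e, n_c) {
  sd_p <- sqrt(((n_e - 1) * sd_e^2 + (n_c - 1) * sd_c^2) / (n_e + n_c - 2))  # pooled SD
  d <- (m_e - m_c) / sd_p
  v <- (n_e + n_c) / (n_e * n_c) + d^2 / (2 * (n_e + n_c))
  c(d = d, v = v)
}

# Cohen's d from a within-subject (pre/post THC) comparison, given the SD of the
# difference scores and an assumed pre-post correlation r
cohens_d_within <- function(m_t2, m_t1, sd_diff, n, r = 0.5) {
  sd_p <- sd_diff / sqrt(2 * (1 - r))          # back out the within-person SD
  d <- (m_t2 - m_t1) / sd_p
  v <- (1 / n + d^2 / (2 * n)) * 2 * (1 - r)   # variance reflects the dependency
  c(d = d, v = v)
}

# Sensitivity check: re-estimate d under alternative correlation assumptions
sapply(c(0.3, 0.5, 0.7), function(r) cohens_d_within(14, 10, 6, 25, r)["d"])
```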

From experimental studies reporting multiple time points of psychosis-like experiences following THC administration (for example, refs. 68 , 69 , 70 , 71 , 72 ), we selected the most immediate time point following THC administration. Of note, whenever studies reported test statistics instead of means (for example, t-test or F-test statistics), the preceding formula was amended to accommodate these statistics. In addition, to allow for the inclusion of studies reporting metrics other than mean comparisons (for example, regression coefficients, correlation coefficients), we converted the results to Cohen's d using existing formulas. All formulas used in this study are provided in the Supplementary Information. Whenever studies reported non-significant results without providing sufficient data to estimate Cohen's d (for example, results reported only as P > 0.05), we used a conservative estimate of P = 1 and the corresponding sample size as the input to derive Cohen's d. Finally, if studies reported estimates in figures only, we used WebPlotDigitizer ( https://automeris.io/WebPlotDigitizer ) to extract the data. Since the conversion of estimates from one metric to another may result in a loss of precision, we also extracted the original P-value estimates (whenever reported as numerical values) and assessed the level of concordance with the P values corresponding to the estimated Cohen's d.
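A short sketch of such conversions is shown below; the helper names and inputs are illustrative, and the formulas are the standard ones for two independent groups rather than a reproduction of the study's own conversion scripts.

```r
# Cohen's d from a t statistic comparing two independent groups
d_from_t <- function(t, n1, n2) t * sqrt(1 / n1 + 1 / n2)

# Cohen's d recovered from a reported two-sided P value (sign supplied if known);
# the conservative fallback P = 1 yields t = 0 and hence d = 0
d_from_p <- function(p, n1, n2, sign = 1) {
  t <- qt(1 - p / 2, df = n1 + n2 - 2)
  sign * d_from_t(t, n1, n2)
}

# Cohen's d from a correlation coefficient
d_from_r <- function(r) 2 * r / sqrt(1 - r^2)

d_from_p(1, n1 = 20, n2 = 20)  # returns 0, the conservative estimate
```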

Next, a series of meta-analytical models were fitted, each pooling estimates of Cohen’s d that belonged to the same class of predictors (for example, estimates indexing the effect of dopaminergic function on CAPS; estimates indexing the effect of age on CAPS). A detailed description of the classification of the included predictors is provided in the Supplementary Methods . Cohen’s d estimates were pooled if at least two estimates were available for one predictor class, using one of the following models:

Aggregation models (pooling effect sizes coming from the same underlying sample)

Random-effects models (pooling effect sizes coming from independent samples)

Multilevel random-effects models (pooling effect sizes coming from both independent and non-independent samples)
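A minimal metafor-based sketch of these three pooling strategies follows, using made-up effect sizes; the within-sample correlation of 0.5 and the nesting structure are assumptions for illustration, not the study's actual specification.

```r
library(metafor)

# Hypothetical extracted effects: several Cohen's d, some sharing an underlying sample
dat <- data.frame(sample_id = c(1, 1, 2, 3, 4),
                  es_id     = 1:5,
                  d         = c(0.6, 0.8, 0.3, 0.5, 0.1),
                  v         = c(0.04, 0.05, 0.02, 0.03, 0.02))

# (1) Aggregation: average effect sizes from the same sample, assuming a correlation
#     of 0.5 between their sampling errors
es  <- escalc(measure = "GEN", yi = d, vi = v, data = dat)
agg <- aggregate(es, cluster = sample_id, rho = 0.5)

# (2) Random-effects model on the (now independent) sample-level effects
rma(yi, vi, data = agg)

# (3) Multilevel random-effects model: effect sizes nested within samples
rma.mv(d, v, random = ~ 1 | sample_id / es_id, data = dat)
```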

Predictors that could not meaningfully be grouped were not included in meta-analytical models but are, for completeness, reported as individual study estimates in the Supplementary Information. Levels of heterogeneity for each meta-analytical model were explored using the I² statistic, 73 indexing the contribution of study heterogeneity to the total variance. Here, I² > 30% represents moderate heterogeneity and I² > 50% represents substantial heterogeneity. Risk of publication bias was assessed visually using funnel plots, alongside the application of Egger's test for funnel-plot asymmetry. This test was performed for meta-analytical models containing at least six effect estimates. 74 The trim-and-fill 75 method was used whenever risk of publication bias was indicated (P_Egger < 0.05). To assess whether outliers distorted the conclusions of the meta-analytical models, we applied leave-one-out and outlier analyses 76 as implemented in the R package dmetar, 77 whereby the pooled estimate was re-calculated after omitting studies that deviated from it. Further details on all applied sensitivity analyses are provided in the Supplementary Methods.
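For completeness, the diagnostics described above can be reproduced in outline with metafor; the fitted model and its inputs here are invented for illustration, and the mention of dmetar's find.outliers() is an assumption about which routine corresponds to the outlier analysis cited above.

```r
library(metafor)

# Illustrative model: five made-up Cohen's d values with sampling variances
res <- rma(yi = c(0.6, 0.8, 0.3, 0.5, 0.1),
           vi = c(0.04, 0.05, 0.02, 0.03, 0.02))

summary(res)    # reports I^2 (>30% moderate, >50% substantial heterogeneity)
funnel(res)     # visual inspection of funnel-plot asymmetry
regtest(res)    # Egger-type regression test (the paper applies it only with >= 6 estimates)
trimfill(res)   # trim-and-fill adjustment when asymmetry is indicated
leave1out(res)  # pooled effect re-estimated with each study omitted in turn
# dmetar::find.outliers(res) offers a comparable outlier screen (assumed equivalent)
```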

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

The data are publicly available via GitHub at github.com/TabeaSchoeler/TS2023_MetaCAPS .

Code availability

All analytical code used to analyze, summarize, and present the data is accessible via GitHub at github.com/TabeaSchoeler/TS2023_MetaCAPS .

World Drug Report 2022 (UNODC, 2022); https://www.unodc.org/unodc/en/data-and-analysis/wdr-2022_booklet-3.html

Turna, J. et al. Overlapping patterns of recreational and medical cannabis use in a large community sample of cannabis users. Compr. Psychiatry 102 , 152188 (2020).


Rhee, T. G. & Rosenheck, R. A. Increasing use of cannabis for medical purposes among US residents, 2013–2020. Am. J. Prev. Med. 65 , 528–533 (2023).

Green, B., Kavanagh, D. & Young, R. Being stoned: a review of self-reported cannabis effects. Drug Alcohol Rev. 22 , 453–460 (2003).

Whiting, P. F. et al. Cannabinoids for medical use. JAMA. 313 , 2456 (2015).

Callaghan, R. C. et al. Associations between Canada’s cannabis legalization and emergency department presentations for transient cannabis-induced psychosis and schizophrenia conditions: Ontario and Alberta, 2015–2019. Can. J. Psychiatry 67 , 616–625 (2022).


Manthey, J., Freeman, T. P., Kilian, C., López-Pelayo, H. & Rehm, J. Public health monitoring of cannabis use in Europe: prevalence of use, cannabis potency, and treatment rates. Lancet Reg. Health Eur. 10 , 100227 (2021).

Pratt, M. et al. Benefits and harms of medical cannabis: a scoping review of systematic reviews. Syst. Rev. 8 , 320 (2019).

McGee, R., Williams, S., Poulton, R. & Moffitt, T. A longitudinal study of cannabis use and mental health from adolescence to early adulthood. Addiction 95 , 491–503 (2000).

Large, M., Sharma, S., Compton, M. T., Slade, T. & Nielssen, O. Cannabis use and earlier onset of psychosis. Arch. Gen. Psychiatry 68 , 555 (2011).

Marconi, A., Di Forti, M., Lewis, C. M., Murray, R. M. & Vassos, E. Meta-analysis of the association between the level of cannabis use and risk of psychosis. Schizophr. Bull. 42 , 1262–1269 (2016).

Hasan, A. et al. Cannabis use and psychosis: a review of reviews. Eur. Arch. Psychiatry Clin. Neurosci. 270 , 403–412 (2020).

Hindley, G. et al. Psychiatric symptoms caused by cannabis constituents: a systematic review and meta-analysis. Lancet Psychiatry 7 , 344–353 (2020).

Sexton, M., Cuttler, C. & Mischley, L. K. A survey of cannabis acute effects and withdrawal symptoms: differential responses across user types and age. J. Altern. Complement. Med. 25 , 326–335 (2019).

Schoeler, T., Ferris, J. & Winstock, A. R. Rates and correlates of cannabis-associated psychotic symptoms in over 230,000 people who use cannabis. Transl. Psychiatry 12 , 369 (2022).

Winstock, A., Lynskey, M., Borschmann, R. & Waldron, J. Risk of emergency medical treatment following consumption of cannabis or synthetic cannabinoids in a large global sample. J. Psychopharmacol. 29 , 698–703 (2015).

Kaufmann, R. M. et al. Acute psychotropic effects of oral cannabis extract with a defined content of Δ9-tetrahydrocannabinol (THC) in healthy volunteers. Pharmacopsychiatry 43 , 24–32 (2010).

Cameron, C., Watson, D. & Robinson, J. Use of a synthetic cannabinoid in a correctional population for posttraumatic stress disorder-related insomnia and nightmares, chronic pain, harm reduction, and other indications. J. Clin. Psychopharmacol. 34 , 559–564 (2014).

Aviram, J. et al. Medical cannabis treatment for chronic pain: outcomes and prediction of response. Eur. J. Pain 25 , 359–374 (2021).

Serpell, M. G., Notcutt, W. & Collin, C. Sativex long-term use: an open-label trial in patients with spasticity due to multiple sclerosis. J. Neurol. 260 , 285–295 (2013).

Colizzi, M. et al. Delta-9-tetrahydrocannabinol increases striatal glutamate levels in healthy individuals: implications for psychosis. Mol. Psychiatry. 25 , 3231–3240 (2020).

Bianconi, F. et al. Differences in cannabis-related experiences between patients with a first episode of psychosis and controls. Psychol. Med. 46 , 995–1003 (2016).

Valerie Curran, H. et al. Which biological and self-report measures of cannabis use predict cannabis dependency and acute psychotic-like effects? Psychol. Med. 49 , 1574–1580 (2019).

Kleinloog, D., Roozen, F., De Winter, W., Freijer, J. & Van Gerven, J. Profiling the subjective effects of Δ9-tetrahydrocannabinol using visual analogue scales. Int. J. Methods Psychiatr. Res. 23 , 245–256 (2014).

Ganesh, S. et al. Psychosis-relevant effects of intravenous delta-9-tetrahydrocannabinol: a mega analysis of individual participant-data from human laboratory studies. Int. J. Neuropsychopharmacol. 23 , 559–570 (2020).

Kendler, K. S., Ohlsson, H., Sundquist, J. & Sundquist, K. Prediction of onset of substance-induced psychotic disorder and its progression to schizophrenia in a Swedish national sample. Am. J. Psychiatry 176 , 711–719 (2019).

Arendt, M., Rosenberg, R., Foldager, L., Perto, G. & Munk-Jørgensen, P. Cannabis-induced psychosis and subsequent schizophrenia-spectrum disorders: follow-up study of 535 incident cases. Br. J. Psychiatry 187 , 510–515 (2005).

Kleinloog, D. et al. Does olanzapine inhibit the psychomimetic effects of Δ9-tetrahydrocannabinol? J. Psychopharmacol. 26 , 1307–1316 (2012).

Liem-Moolenaar, M. et al. Central nervous system effects of haloperidol on THC in healthy male volunteers. J. Psychopharmacol. 24 , 1697–1708 (2010).

Patti, F. et al. Efficacy and safety of cannabinoid oromucosal spray for multiple sclerosis spasticity. J. Neurol. Neurosurg. Psychiatry 87 , 944–951 (2016).

Thaler, A. et al. Single center experience with medical cannabis in Gilles de la Tourette syndrome. Parkinsonism Relat. Disord . 61 , 211–213 (2019).

Chandra, S. et al. New trends in cannabis potency in USA and Europe during the last decade (2008–2017). Eur. Arch. Psychiatry Clin. Neurosci. 269 , 5–15 (2019).

Englund, A. et al. Cannabidiol inhibits THC-elicited paranoid symptoms and hippocampal-dependent memory impairment. J. Psychopharmacol. 27 , 19–27 (2013).

Gibson, L. P. et al. Effects of cannabidiol in cannabis flower: implications for harm reduction. Addict. Biol. 27 , e13092 (2022).

Sainz-Cort, A. et al. The effects of cannabidiol and delta-9-tetrahydrocannabinol in social cognition: a naturalistic controlled study. Cannabis Cannabinoid Res . https://doi.org/10.1089/can.2022.0037 (2022).

Lawn, W. et al. The acute effects of cannabis with and without cannabidiol in adults and adolescents: a randomised, double‐blind, placebo‐controlled, crossover experiment. Addiction 118 , 1282–1294 (2023).

Englund, A. et al. Does cannabidiol make cannabis safer? A randomised, double-blind, cross-over trial of cannabis with four different CBD:THC ratios. Neuropsychopharmacology 48 , 869–876 (2023).

Arseneault, L., Cannon, M., Witton, J. & Murray, R. M. Causal association between cannabis and psychosis: examination of the evidence. Br. J. Psychiatry 184 , 110–117 (2004).

Di Forti, M. et al. The contribution of cannabis use to variation in the incidence of psychotic disorder across Europe (EU-GEI): a multicentre case-control study. Lancet Psychiatry 6 , 427–436 (2019).

McCutcheon, R. A., Abi-Dargham, A. & Howes, O. D. Schizophrenia, dopamine and the striatum: from biology to symptoms. Trends Neurosci. 42 , 205–220 (2019).

Trubetskoy, V. et al. Mapping genomic loci implicates genes and synaptic biology in schizophrenia. Nature 604 , 502–508 (2022).

Zwicker, A. et al. Genetic counselling for the prevention of mental health consequences of cannabis use: a randomized controlled trial‐within‐cohort. Early Interv. Psychiatry 15 , 1306–1314 (2021).

Hindocha, C., Norberg, M. M. & Tomko, R. L. Solving the problem of cannabis quantification. Lancet Psychiatry 5 , e8 (2018).

Englund, A. et al. The effect of five day dosing with THCV on THC-induced cognitive, psychological and physiological effects in healthy male human volunteers: a placebo-controlled, double-blind, crossover pilot trial. J. Psychopharmacol. 30 , 140–151 (2016).

Wall, M. B. et al. Individual and combined effects of cannabidiol and Δ9-tetrahydrocannabinol on striato-cortical connectivity in the human brain. J. Psychopharmacol. 36 , 732–744 (2022).

Hammerton, G. & Munafò, M. R. Causal inference with observational data: the need for triangulation of evidence. Psychol. Med. 51 , 563–578 (2021).

Sami, M., Notley, C., Kouimtsidis, C., Lynskey, M. & Bhattacharyya, S. Psychotic-like experiences with cannabis use predict cannabis cessation and desire to quit: a cannabis discontinuation hypothesis. Psychol. Med. 49 , 103–112 (2019).

Morgan, C. J. A., Schafer, G., Freeman, T. P. & Curran, H. V. Impact of cannabidiol on the acute memory and psychotomimetic effects of smoked cannabis: naturalistic study. Br. J. Psychiatry 197 , 285–290 (2010).

Schoeler, T. et al. Association between continued cannabis use and risk of relapse in first-episode psychosis: a quasi-experimental investigation within an observational study. JAMA Psychiatry 73 , 1173–1179 (2016).

Sznitman, S., Baruch, Y. Ben, Greene, T. & Gelkopf, M. The association between physical pain and cannabis use in daily life: an experience sampling method. Drug Alcohol Depend. 191 , 294–299 (2018).

Henquet, C. et al. Psychosis reactivity to cannabis use in daily life: an experience sampling study. Br. J. Psychiatry 196 , 447–453 (2010).

Pingault, J.-B. et al. Using genetic data to strengthen causal inference in observational research. Nat. Rev. Genet. 19 , 566–580 (2018).

Hill, K. P. Medical cannabis. JAMA 323 , 580 (2020).

Esterberg, M. L., Trotman, H. D., Holtzman, C., Compton, M. T. & Walker, E. F. The impact of a family history of psychosis on age-at-onset and positive and negative symptoms of schizophrenia: a meta-analysis. Schizophr. Res. 120 , 121–130 (2010).

Di Forti, M. et al. Proportion of patients in south London with first-episode psychosis attributable to use of high potency cannabis: a case-control study. Lancet Psychiatry 2 , 233–238 (2015).

Peters, B. D. et al. Subjective effects of cannabis before the first psychotic episode. Aust. N. Z. J. Psychiatry 43 , 1155–1162 (2009).

Karcher, N. R. et al. Persistent and distressing psychotic-like experiences using adolescent brain cognitive development study data. Mol. Psychiatry 27 , 1490–1501 (2022).

LaFrance, E. M., Stueber, A., Glodosky, N. C., Mauzay, D. & Cuttler, C. Overbaked: assessing and predicting acute adverse reactions to cannabis. J. Cannabis Res. 2 , 3 (2020).

Moher, D., Liberati, A., Tetzlaff, J. & Altman, D. G. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Brit. Med. J. 339 , b2535 (2009).

Westgate, M. J. revtools: an R package to support article screening for evidence synthesis. Res. Synth. Methods. 10 , 606–614 (2019).





Planning and Conducting Clinical Research: The Whole Process

Boon-How Chew

1 Family Medicine, Universiti Putra Malaysia, Serdang, MYS

The goal of this review was to present the essential steps in the entire process of clinical research. Research should begin with an educated idea arising from a clinical practice issue. A research topic rooted in a clinical problem provides the motivation to complete the research and the relevance for changing and improving medical practice. The research idea is further informed through a systematic literature review, clarified into a conceptual framework, and defined into an answerable research question. Engagement with clinical experts, experienced researchers, relevant stakeholders of the research topic, and even patients can enhance the research question’s relevance, feasibility, and efficiency. Clinical research can be completed in two major steps: study design and study reporting. Three study designs should be planned in sequence and iterated until properly refined: the theoretical design, the data collection design, and the statistical analysis design. The design of data collection can be further categorized into three facets: experimental or non-experimental, sampling or census, and the time features of the variables to be studied. The ultimate aim of research reporting is to present findings succinctly and in a timely manner. Concise, explicit, and complete reporting is the guiding principle in reporting clinical studies.

Introduction and background

Medical and clinical research can be classified in many different ways. Most people are probably familiar with basic (laboratory) research, clinical research, healthcare (services) research, health systems (policy) research, and educational research. Clinical research in this review refers to scientific research related to clinical practice. There are many ways a clinical study’s findings can become invalid or less impactful, including ignorance of previous similar studies, a paucity of similar studies, poor study design and implementation, low test agent efficacy, no predetermined statistical analysis, insufficient reporting, bias, and conflicts of interest [1-4]. Scientific, ethical, and moral decadence among researchers can stem from misaligned criteria for academic promotion and remuneration, and from studies forced on amateurs and students for the sake of research output without adequate training or guidance [2, 5-6]. This article reviews the proper methods for conducting medical research from the planning stage to submission for publication (Table 1).

Table 1. The whole process of clinical research, from concept to practice.

  • Research Idea: relevant clinical problem or issue; literature review; conceptual framework; collaboration with experts; seeking the target population’s opinions on the research topic
  • Research Question: primary or secondary; quantitative or qualitative; causal or non-causal; feasibility (a); efficiency (a). Theoretical design: domain (external validity); validity (confounding minimized); precision (good sample size); pilot study
  • Acquiring Data: measuring; measuring tool; measurement; feasibility (a); efficiency (a). Data collection design: experimental or non-experimental; sampling or census; time features
  • Analysis: prespecified; predetermined; exploratory allowed; strength and direction of the effect estimate. Statistical design: data cleaning; outliers; missing data; descriptive; inferential; statistical assumptions; collaboration with a statistician
  • Publication: writing skills; guidelines; journal selection; response to reviewers’ comments
  • Practice: guidelines; protocol; policy; change

(a) Feasibility and efficiency are considered during the refinement of the research question and adhered to during data collection.

Epidemiologic studies in clinical and medical fields focus on the effect of a determinant on an outcome [7]. Measurement errors that occur systematically give rise to biases and lead to invalid study results, whereas random measurement errors cause imprecise reporting of effects. Precision can usually be increased with a larger sample size, provided biases are avoided or trivialized; otherwise, the increased precision only makes the biased estimate more convincing. Because epidemiologic clinical research centers on measurement, measurement errors must be addressed throughout the research process. Obtaining the most accurate estimate of a treatment effect constitutes the whole business of epidemiologic research in clinical practice. This is greatly facilitated by clinical expertise and current scientific knowledge of the research topic. Current scientific knowledge is acquired through literature reviews or in collaboration with an expert clinician. Collaboration and consultation with an expert clinician should also include input from the target population to confirm the relevance of the research question. The novelty of a research topic is less important than the clinical applicability of the topic. Researchers need to acquire appropriate writing and reporting skills from the beginning of their careers, and these skills should improve with persistent use and regular review of published journal articles. A published clinical research study stands on solid scientific ground to inform clinical practice, provided the article has passed through proper peer review, revision, and content improvement.

Systematic literature reviews

Systematic literature reviews of published papers inform authors of the existing clinical evidence on a research topic. This is an important step to reduce wasted effort and to evaluate the planned study [8]. Conducting a systematic literature review is a well-known, important step before embarking on a new study [9]. A rigorously performed and cautiously interpreted systematic review that includes in-process trials can inform researchers of several factors [10]. Reviewing the literature will inform the choice of recruitment methods, outcome measures, questionnaires, intervention details, and statistical strategies – useful information for increasing the study’s relevance, value, and power. A good review of previous studies will also provide evidence of the effects of an intervention that may or may not be worthwhile; this would suggest either that no further studies are warranted or that further study of the intervention is needed. A review can also indicate whether a larger and better study is preferable to an additional small study. Reviews of previously published work may yield few studies or low-quality evidence from small or poorly designed studies on a certain intervention or observation; this may encourage or discourage further research, or prompt consideration of a first clinical trial.

Conceptual framework

The result of a literature review should include identifying a working conceptual framework to clarify the nature of the research problem, questions, and designs, and even to guide the later discussion of the findings and the development of possible solutions. Conceptual frameworks represent ways of thinking about a problem or about how complex things work the way they do [11]. Different frameworks emphasize different variables and outcomes, and their inter-relatedness. Each framework highlights or emphasizes different aspects of a problem or research question. Often, any single conceptual framework presents only a partial view of reality [11]. Furthermore, each framework magnifies certain elements of the problem. A thorough literature search is therefore warranted so that authors avoid repeating the same research endeavors or mistakes. It may also help them find relevant conceptual frameworks, including those outside one’s specialty or system.

Conceptual frameworks can come from theories with well-organized principles and propositions that have been confirmed by observations or experiments. They can also come from models derived from theories or observations, from sets of concepts, or from evidence-based best practices derived from past studies [11].

Researchers convey their assumptions about the associations of the variables explicitly in the conceptual framework to connect the research to the literature. After selecting a single conceptual framework or a combination of a few frameworks, a clinical study can be completed in two fundamental steps: the study design and the study report. Three study designs should be planned in sequence and iterated until properly refined: the theoretical design, the data collection design, and the statistical analysis design [7].

Study designs

Theoretical Design

Theoretical design is the next important step in the research process after the literature review and identification of a conceptual framework. Although the theoretical design is a crucial step in research planning, it is often dealt with lightly because of the more alluring second step (data collection design). In the theoretical design phase, a research question is designed to address a clinical problem; this requires an informed understanding based on the literature review and effective collaboration with the right experts and clinicians. A well-developed research question will have an initial hypothesis of the possible relationship between the explanatory variable/exposure and the outcome. This will inform the nature of the study design, be it qualitative or quantitative, primary or secondary, and non-causal or causal (Figure 1).


A study is qualitative if the research question aims to explore, understand, describe, discover, or generate reasons underlying certain phenomena. Qualitative studies usually focus on a process to determine how and why things happen [12]. Quantitative studies use deductive reasoning and numerical, statistical quantification of associations between groups, often using data gathered during experiments [13]. A primary clinical study is an original study gathering a new set of patient-level data. Secondary research draws on existing available data, pooling them into a larger database to generate a wider perspective or a more powerful conclusion. Non-causal or descriptive research aims to identify the determinants or factors associated with the outcome or health condition, without regard for causal relationships. Causal research explores the determinants of an outcome while mitigating confounding variables. Table 2 shows examples of non-causal (e.g., diagnostic and prognostic) and causal (e.g., intervention and etiologic) clinical studies. Concordance between the research question, its aim, and the choice of theoretical design provides a strong foundation and the right direction for the research process.

Table 2. Examples of non-causal and causal clinical studies.

  • Diagnostic: Plasma Concentration of B-type Natriuretic Peptide (BNP) in the Diagnosis of Left Ventricular Dysfunction; The Centor and McIsaac Scores and Group A Streptococcal Pharyngitis
  • Prognostic: The Apgar Score and Infant Mortality; SCORE (Systematic COronary Risk Evaluation) for the Estimation of Ten-Year Risk of Fatal Cardiovascular Disease
  • Intervention: Dexamethasone in Very Low Birth Weight Infants; Bariatric Surgery for Obesity in Type 2 Diabetes and Metabolic Syndrome
  • Etiologic: Thalidomide and Reduction Deformities of the Limbs; Work Stress and Risk of Cardiovascular Mortality

A problem in clinical epidemiology can be phrased as the mathematical relationship below, where the outcome is a function of the determinant (D) conditional on the extraneous determinants (ED), more commonly known as the confounding factors [7]:

For non-causal research: Outcome = f(D1, D2, …, Dn)
For causal research: Outcome = f(D | ED)

A well-formulated research question is composed of at least three components: 1) an outcome or health condition, 2) the determinant(s) or factors associated with the outcome, and 3) the domain. The outcome and the determinants have to be clearly conceptualized and operationalized as measurable variables (Table 3; PICOT [14] and FINER [15]). The study domain is the theoretical source population from which the study population will be sampled, similar to the wording on a drug package insert that reads, “use this medication (study results) in people with this disease” [7].

Table 3. The PICOT and FINER criteria for a research question.
P = Patient (or the domain)
I = Intervention or treatment (or the determinants in non-experimental)
C = Comparison (only in experimental)
O = Outcome
T = Time describes the duration of data collection
F = Feasible with the current and/or potential available resources
I = Important and interesting to current clinical practice and to you, respectively
N = Novel and adding to the existing corpus of scientific knowledge
E = Ethical research conducted without harm to participants and institutions
R = Relevant to as many parties as possible, not only to your own practice

The interpretation of study results as they apply to wider populations is known as generalization, and generalization can be either statistical or made using scientific inferences [16]. Generalization supported by statistical inference is seen in studies on disease prevalence, where the sample is representative of the source population. By contrast, generalizations made using scientific inferences are not bound by the representativeness of the sample in the study; rather, the generalization should be plausible from the underlying scientific mechanisms, as long as the study design is valid and unbiased. Scientific inferences and generalizations are usually the aims of causal studies.

Confounding: Confounding is a situation where true effects are obscured or confused [7, 16]. Confounding variables, or confounders, affect the validity of a study’s outcomes and should be prevented or mitigated at the planning stage and further managed at the analysis stage. Confounders are also known as extraneous determinants in epidemiology because of their inherent and simultaneous relationships with both the determinant and the outcome (Figure 2), which in causal clinical studies are usually examined as one determinant and one outcome at a time. Known confounders are also called observed confounders; these can be minimized using randomization, restriction, or a matching strategy. Residual confounding occurs in a causal relationship when identified confounders are not measured accurately. Unobserved confounding occurs when the confounding effect is present as a variable or factor that has not been observed or defined and, thus, has not been measured in the study. Age and gender are almost universal confounders, followed by ethnicity and socio-economic status.


Confounders have three main characteristics. They are a potential risk factor for the disease, associated with the determinant of interest, and should not be an intermediate variable between the determinant and the outcome or a precursor to the determinant. For example, a sedentary lifestyle is a cause for acute coronary syndrome (ACS), and smoking could be a confounder but not cardiorespiratory unfitness (which is an intermediate factor between a sedentary lifestyle and ACS). For patients with ACS, not having a pair of sports shoes is not a confounder – it is a correlate for the sedentary lifestyle. Similarly, depression would be a precursor, not a confounder.
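To make statistical adjustment for an observed confounder concrete, here is a minimal sketch (assuming `numpy`, `pandas`, and `statsmodels` are available) that simulates the sedentary-lifestyle/smoking/ACS example above and compares a crude logistic regression with one adjusted for smoking. The variable names and effect sizes are invented purely for illustration.

```python
# Minimal sketch: crude vs confounder-adjusted estimates in simulated data.
# "sedentary", "smoking", and "acs" are hypothetical variables mirroring the example in the text.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000

# Smoking is a common cause of both the exposure (sedentary lifestyle) and the outcome (ACS).
smoking = rng.binomial(1, 0.3, n)
sedentary = rng.binomial(1, 0.3 + 0.3 * smoking)      # smokers are more often sedentary
logit_p = -3 + 0.7 * sedentary + 1.0 * smoking        # both raise the risk of ACS
acs = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"acs": acs, "sedentary": sedentary, "smoking": smoking})

crude = smf.logit("acs ~ sedentary", data=df).fit(disp=False)
adjusted = smf.logit("acs ~ sedentary + smoking", data=df).fit(disp=False)

# The crude odds ratio is inflated by confounding; the adjusted one is closer to the simulated effect.
print(f"Crude OR for sedentary:    {np.exp(crude.params['sedentary']):.2f}")
print(f"Adjusted OR for sedentary: {np.exp(adjusted.params['sedentary']):.2f}")
```

Randomization, restriction, or matching, as noted above, would prevent this confounding at the design stage; regression adjustment, as sketched here, is the corresponding analysis-stage remedy.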

Sample size consideration: Sample size calculation provides the number of participants required in a new study to detect true differences in the target population if they exist. Sample size calculation is based on three elements: the estimated difference between groups (the expected effect size), the chosen probabilities of α (Type I) and β (Type II) errors, which depend on the nature of the treatment or intervention, and the estimated variability (for interval data) or proportion of the outcome (for nominal data) [17-18]. Clinically important effect sizes are determined by expert consensus or by patients’ perception of benefit. Value and economic considerations are increasingly included in sample size estimations. The sample size and the degree to which the sample represents the target population affect the accuracy and generalizability of a study’s reported effects.
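As a rough illustration of how these elements combine, the sketch below uses the power-analysis utilities in `statsmodels` to estimate the per-group sample size for a two-arm comparison of means; the standardized effect size of 0.5, α of 0.05, and power of 0.80 are placeholders, not recommendations.

```python
# Sketch of a sample size calculation for a two-group comparison of means.
# effect_size is Cohen's d (difference in means / SD); the value 0.5 is only a placeholder.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,        # expected standardized difference between groups
    alpha=0.05,             # probability of a Type I error
    power=0.80,             # 1 - beta, probability of detecting a true effect
    ratio=1.0,              # equal allocation to the two groups
    alternative="two-sided",
)
print(f"Approximately {n_per_group:.0f} participants are needed per group.")
```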

Pilot study: Pilot studies assess the feasibility of the proposed research procedures on a small sample. They test the efficiency of participant recruitment with minimal disruption to practice or services. Pilot studies should not be used to obtain a projected effect size for a larger study, because the sample size of a typical pilot study is small, leading to a large standard error of that effect size and a biased projection for a large population. Underestimation could lead to inappropriately terminating the full-scale study, while a small pilot study is equally prone to overestimating the effect size, which would lead to an underpowered and failed full-scale study [19].

The Design of Data Collection

The “perfect” study design in the theoretical phase now faces the practical and realistic challenges of feasibility. This is the step where different methods for data collection are considered, with one selected as the most appropriate based on the theoretical design along with feasibility and efficiency. The goal of this stage is to achieve the highest possible validity with the lowest risk of biases given available resources and existing constraints. 

In causal research, data on the outcome and determinants are collected with the utmost accuracy via a strict protocol to maximize validity and precision. The validity of an instrument is the degree to which it measures what it is intended to measure, that is, the extent to which the measurement results correlate with the true state of an occurrence. Another widely used word for validity is accuracy. Internal validity refers to the degree of accuracy of a study’s results for its own study sample and is influenced by the study design, whereas external validity refers to the applicability of a study’s results to other populations. External validity is also known as generalizability and expresses the validity of assuming similarity and comparability between the study population and other populations. The reliability of an instrument denotes the extent of agreement among the results of repeated measurements of an occurrence by that instrument at different times, by different investigators, or in different settings. Other terms used for reliability include reproducibility and precision. Preventing confounding by identifying confounders and including them in data collection allows statistical adjustment in the later analyses. In descriptive research, outcomes must be confirmed against a reference standard, and the determinants should be as valid as those found in real clinical practice.

Common designs for data collection include cross-sectional studies, case-control studies, cohort studies, and randomized controlled trials (RCTs). Many modern epidemiologic study designs are based on these classical designs, such as the nested case-control, case-crossover, case-control without control, and stepped-wedge cluster RCT designs. A cross-sectional study is typically a snapshot of the study population, and an RCT is almost always a prospective study. Case-control and cohort studies can be retrospective or prospective in data collection. The nested case-control design differs from the traditional case-control design in that it is “nested” within a well-defined cohort from which information on the cohort members can be obtained. This design also satisfies the assumption that cases and controls represent random samples of the same study base. Table 4 provides examples of these data collection designs; a small worked example of the effect measures these designs yield (relative risk versus odds ratio) follows the table.

Table 4. Examples of data collection designs.

  • Cross-sectional: The National Health and Morbidity Survey (NHMS); The National Health and Nutrition Examination Survey (NHANES)
  • Cohort: Framingham Heart Study; The Malaysian Cohort (TMC) project
  • Case-control: A Case-Control Study of the Effectiveness of Bicycle Safety Helmets; Open-Angle Glaucoma and Ocular Hypertension: the Long Island Glaucoma Case-Control Study
  • Nested case-control: Nurses’ Health Study on Plasma Adipokines and Endometriosis Risk; Physicians’ Health Study on Plasma Homocysteine and Risk of Myocardial Infarction
  • Randomized controlled trial: The Women’s Health Initiative; U.K. Prospective Diabetes Study
  • Cross-over: Intranasal agonist in Allergic Rhinitis (published in Allergy in 2000); Effect of Palm-based Tocotrienols and Tocopherol Mixture Supplementation on Platelet Aggregation in Subjects with Metabolic Syndrome
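As promised above, a small worked example of the effect measures associated with these designs: a cohort design permits a relative risk because incidence can be computed in the exposed and unexposed groups, whereas a case-control design yields an odds ratio. The 2x2 counts below are invented purely for illustration.

```python
# Worked example: relative risk (cohort) and odds ratio (case-control) from a 2x2 table.
#                 outcome present   outcome absent
# exposed               a=30              b=70
# unexposed             c=10              d=90
a, b, c, d = 30, 70, 10, 90

risk_exposed = a / (a + b)          # incidence among the exposed (requires a cohort design)
risk_unexposed = c / (c + d)        # incidence among the unexposed
relative_risk = risk_exposed / risk_unexposed

odds_exposed = a / b                # odds of the outcome among the exposed
odds_unexposed = c / d
odds_ratio = odds_exposed / odds_unexposed   # the measure a case-control study can estimate

print(f"Relative risk: {relative_risk:.2f}")   # 0.30 / 0.10 = 3.00
print(f"Odds ratio:    {odds_ratio:.2f}")      # (30/70) / (10/90) = 3.86
```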

Additional aspects of data collection: No single design of data collection for a given research question, as stated in the theoretical design, will be perfect in actual conduct. This is because of the myriad issues facing investigators, such as dynamic clinical practices, constraints of time and budget, the urgency of an answer to the research question, and the ethical integrity of the proposed experiment. Feasibility and efficiency, without sacrificing validity and precision, are therefore important considerations, and data collection design requires additional consideration of the following three aspects: experimental or non-experimental design, sampling, and timing [7]:

Experimental or non-experimental: Non-experimental (i.e., “observational”) research, in contrast to experimental research, involves collecting data from study participants in their natural or real-world environments. Non-experimental studies are usually diagnostic and prognostic studies with cross-sectional data collection. The pinnacle of non-experimental research is the comparative effectiveness study, which is grouped with other non-experimental study designs such as cross-sectional, case-control, and cohort studies [20]. It is also known as a benchmarking-controlled trial because of the element of peer comparison (using comparable groups) in interpreting the outcome effects [20]. Experimental study designs are characterized by an intervention on a selected group of the study population in a controlled environment, often in the presence of a similar group of the study population that receives no intervention and serves as a comparison (i.e., the control group). Thus, the widely known RCT is classified as an experimental design in data collection. An experimental study design without randomization is referred to as a quasi-experimental study. Experimental studies try to determine the efficacy of a new intervention in a specified population. Table 5 presents the advantages and disadvantages of experimental and non-experimental studies [21].

Table 5. Advantages and disadvantages of non-experimental and experimental studies.

  • Non-experimental, advantages: quick results are possible; relatively less costly; no recall bias (a); no time effects; real-life data
  • Non-experimental, disadvantages: observed, unobserved, and residual confounding
  • Experimental, advantages: comparable groups; Hawthorne and placebo effects mitigated; straightforward, robust statistical analysis; convincing results as evidence
  • Experimental, disadvantages: expensive; time-consuming; overly controlled environment; loss to follow-up; random allocation of a potentially harmful treatment may not be ethically permissible

(a) May be an issue in cross-sectional studies that require long recall of the past, such as dietary patterns, antenatal events, and life experiences during childhood.

Once an intervention yields a proven effect in an experimental study, non-experimental and quasi-experimental studies can be used to determine the intervention’s effect in a wider population and within real-world settings and clinical practices. Pragmatic trials or comparative effectiveness studies are the usual designs used for data collection in these situations [22].

Sampling or census: A census collects data on the whole source population (i.e., the study population is the source population). This is possible when the defined population is restricted to a given geographical area. A cohort study uses the census method of data collection. An ecologic study is a cohort study that collects summary measures of the study population instead of individual patient data. However, for feasibility and efficiency, many studies sample from the source population and infer their results to that population, because adequate sampling provides results similar to a census of the whole population. Important aspects of sampling in research planning are the sample size and the representativeness of the population. Sample size calculation estimates the number of participants needed in the study to discover the actual association between the determinant and the outcome. It relies on the primary objective or outcome of interest and is informed by the estimated differences or effect sizes from previous similar studies. The sample size is therefore a scientific estimation made for the design of the planned study.

A sample of participants or cases in a study can represent the study population and the larger population of patients in that disease space, but only in prevalence, diagnostic, and prognostic studies; etiologic and interventional studies do not share this same level of representation. A cross-sectional study design is common for determining disease prevalence in the population. Cross-sectional studies can also determine the reference ranges of variables in the population and measure change over time (e.g., repeated cross-sectional studies). Besides being cost- and time-efficient, cross-sectional studies have no loss to follow-up, no recall bias, no learning effect on the participants, and no variability over time in equipment, measurement, or technician. A cross-sectional design for an etiologic study is possible when the determinants do not change with time (e.g., gender, ethnicity, genetic traits, and blood groups).
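Because cross-sectional studies are the usual vehicle for prevalence estimates, the short sketch below shows one way a prevalence and its 95% confidence interval might be computed from a hypothetical sample; the counts are invented, and the Wilson interval from `statsmodels` is just one reasonable choice among several.

```python
# Sketch: prevalence and 95% confidence interval from a cross-sectional sample.
# 120 cases among 1,500 sampled participants are invented numbers for illustration.
from statsmodels.stats.proportion import proportion_confint

cases, sample_size = 120, 1500
prevalence = cases / sample_size
low, high = proportion_confint(cases, sample_size, alpha=0.05, method="wilson")

print(f"Prevalence: {prevalence:.1%} (95% CI {low:.1%} to {high:.1%})")
```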

In etiologic research, comparability between the exposed and the non-exposed groups is more important than sample representativeness. Comparability between these two groups provides an accurate estimate of the effect of the exposure (risk factor) on the outcome (disease) and enables valid inference of the causal relation to the domain (the theoretical population). In a case-control study, the control group should be sampled from the same study population (study base) and have profiles similar to the cases (matching) but without the outcome seen in the cases. Matching on important factors minimizes their confounding effect and increases statistical efficiency by ensuring similar numbers of cases and controls within confounder strata [23-24]. Nonetheless, perfect matching is neither necessary nor achievable in a case-control study, because partial matching achieves most of the benefits of perfect matching, namely a more precise estimate of the odds ratio than statistical control of confounding in unmatched designs [25-26]. Moreover, perfect or full matching can lead to an underestimation of the point estimates [27-28].

Time features: The timing of data collection for the determinant and the outcome characterizes the type of study. A cross-sectional study sets time at zero (T = 0) for both the determinant and the outcome, which separates it from all other types of research, where the time for the outcome is T > 0. Retrospective or prospective refers to the direction of data collection: in retrospective studies, information on the determinant and outcome has already been collected or recorded; in prospective studies, this information will be collected in the future. These terms should not be used to describe the relationship between the determinant and the outcome in etiologic studies. The time of exposure to the determinant, the time of induction, and the time at risk for the outcome are important aspects to understand. Time at risk is the period of exposure to the determinant risk factors. Time of induction is the time from sufficient exposure to the risk or causal factors to the occurrence of the disease. The latent period is the interval during which a disease is present without manifesting itself, as in “silent” diseases (for example, cancers, hypertension, and type 2 diabetes mellitus) that are detected through screening. Figure 3 illustrates the time features of a variable. Variable timing is important for accurate data capture.


The Design of Statistical Analysis

Statistical analysis of epidemiologic data provides the estimate of effects after correcting for biases (e.g., confounding factors) and measures the variability in the data arising from random error or chance [7, 16, 29]. An effect estimate gives the size of an association between the studied variables or the level of effectiveness of an intervention. This quantitative result allows comparison and assessment of the usefulness and significance of the association or the intervention across studies. This significance must be interpreted with a statistical model and an appropriate study design. Random error can arise in a study from, for example, unexplained personal choices by the participants. Random error is therefore present when values or units of measurement between variables change in a non-concerted or non-directional manner. Conversely, when these values or units of measurement change in a concerted or directional manner, we note a significant relationship, as shown by statistical significance.

Variability: Researchers almost always collect the needed data by sampling subjects/participants from a population rather than by census. The process of sampling, or of multiple sampling across different geographical regions or periods, contributes varied information because of the random inclusion of different participants and chance occurrence. This sampling variation becomes the focus of statistics when communicating the degree and intensity of variation in the sampled data and the level of inference to the population. Sampling variation is profoundly influenced by the total number of participants and by the spread of the measured variable (the standard deviation). Hence, the characteristics of the participants, the measurements, and the sample size are all important factors in planning a study.
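A small simulation can make the link between sample size, spread, and sampling variation tangible: the sketch below repeatedly samples from a hypothetical population and shows that the variability of the sample means (the standard error) shrinks as the sample size grows. All parameters are arbitrary placeholders.

```python
# Sketch: sampling variation of the mean shrinks with larger samples.
# The hypothetical "population" has mean 120 and SD 15 (a blood-pressure-like variable).
import numpy as np

rng = np.random.default_rng(7)
population_mean, population_sd = 120.0, 15.0

for n in (25, 100, 400):
    # Draw 2,000 independent samples of size n and record each sample mean.
    sample_means = rng.normal(population_mean, population_sd, size=(2000, n)).mean(axis=1)
    print(f"n = {n:>3}: empirical SE of the mean = {sample_means.std(ddof=1):.2f} "
          f"(theory: {population_sd / np.sqrt(n):.2f})")
```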

Statistical strategy: The statistical strategy is usually determined by the theoretical and data collection designs. A prespecified statistical strategy (including any decision to dichotomize continuous data at certain cut-points, subgroup analyses, or sensitivity analyses) is recommended in the study proposal (i.e., the protocol) to prevent data dredging and data-driven reports that predispose to bias. The nature of the study hypothesis also dictates whether directional (one-tailed) or non-directional (two-tailed) significance tests are conducted. In most studies, two-sided tests are used, except in specific instances when unidirectional hypotheses may be appropriate (e.g., in superiority or non-inferiority trials). While data exploration is discouraged, epidemiological research is, by the nature of its objectives, statistical research; hence, it is acceptable to report persistent associations between variables with plausible underlying mechanisms found during data exploration. The statistical methods used to produce the results should be explicitly explained. Many different statistical tests are used to appropriately handle various kinds of data (e.g., interval vs discrete) and various distributions of the data (e.g., normally distributed or skewed). For additional details on statistical concepts and the underlying logic of statistical tests, readers are referred to the cited references [30-31].
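To show how the directional choice enters the analysis, the sketch below runs the same simulated two-group comparison with a two-sided and a one-sided alternative in `scipy`; as stressed above, the choice of direction should be prespecified in the protocol, not made after seeing the data.

```python
# Sketch: two-sided versus one-sided (directional) hypothesis tests on the same data.
# The two groups are simulated; a real analysis would prespecify the alternative hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=50.0, scale=10.0, size=60)
treated = rng.normal(loc=54.0, scale=10.0, size=60)   # simulated benefit of 4 units

two_sided = stats.ttest_ind(treated, control, alternative="two-sided")
one_sided = stats.ttest_ind(treated, control, alternative="greater")  # directional: treated > control

print(f"Two-sided p value: {two_sided.pvalue:.4f}")
print(f"One-sided p value: {one_sided.pvalue:.4f}")   # roughly half the two-sided value here
```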

Steps in statistical analyses: Statistical analysis begins with checking for data entry errors. Duplicates are eliminated, and proper units should be confirmed. Extremely low, high, or suspicious values are verified against the source data; if this is not possible, they are better classified as missing values. However, if unverified suspicious data are not obviously wrong, they should be examined further as outliers in the analysis. Data checking and cleaning enable the analyst to establish a connection with the raw data and to anticipate possible results from further analyses. This initial step involves descriptive statistics that describe the central tendency (i.e., mode, median, and mean) and dispersion (i.e., minimum, maximum, range, quartiles, absolute deviation, variance, and standard deviation) of the data. Graphical plots such as a scatter plot, a box-and-whiskers plot, a histogram, or a normal Q-Q plot are helpful at this stage to check the distribution and normality of the data. See Figure 4 for the statistical tests available for analyses of different types of data.
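The initial steps described here (removing duplicates, flagging implausible values, and producing descriptive statistics) translate naturally into a few lines of `pandas`; the toy dataset and column names below are invented for illustration.

```python
# Sketch: initial data checking, cleaning, and descriptive statistics with pandas.
# The toy dataset and its column names are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "id":  [1, 2, 2, 3, 4, 5],                 # note the duplicated record (id 2)
    "age": [54, 61, 61, 47, 39, 460],          # 460 is an implausible value to flag
    "sbp": [132, 145, 145, 128, None, 150],    # systolic blood pressure with a missing value
})

df = df.drop_duplicates()                       # remove exact duplicate records

# Flag implausible values for verification against the source data;
# if they cannot be verified, treat them as missing rather than silently correcting them.
suspicious = df[(df["age"] < 0) | (df["age"] > 110)]
print("Records to verify:\n", suspicious)

# Descriptive statistics: central tendency and dispersion, plus missing-data counts.
print(df[["age", "sbp"]].describe())
print("Missing values per column:\n", df.isna().sum())
```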


Once the data characteristics are ascertained, further statistical tests are selected. The analytical strategy sometimes involves transforming the data distribution for the selected tests (e.g., log, natural log, exponential, quadratic) or checking the robustness of the association between the determinants and their outcomes. This step is also referred to as inferential statistics, whereby the results are used for hypothesis testing and generalization to the wider population that the study’s sampled participants represent. The last statistical step is checking whether the analyses fulfill the assumptions of the particular statistical test and model, to avoid violations and misleading results. These assumptions include normality, homogeneity of variance, and the behavior of the residuals in the final statistical model. Other statistical values, such as the Akaike information criterion, the variance inflation factor/tolerance, and R-squared, are also considered when choosing the best-fitting model. Raw data can be transformed, or a higher level of statistical analysis can be used (e.g., generalized linear models and mixed-effects modeling). Successful statistical analysis allows the conclusions of the study to fit the data.
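As a hedged illustration of assumption checking, the sketch below fits a simple linear model to simulated data, tests the residuals for normality, and computes variance inflation factors for the predictors; the data, variable names, and any implied thresholds are placeholders rather than firm rules.

```python
# Sketch: checking model assumptions (residual normality, multicollinearity) on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
n = 300
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)        # deliberately correlated with x1
y = 2.0 + 1.5 * x1 - 0.5 * x2 + rng.normal(size=n)

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))
model = sm.OLS(y, X).fit()

# Residual normality (Shapiro-Wilk); a very small p value suggests a transformation
# or a different model (e.g., a generalized linear model) may be needed.
w_stat, p_value = stats.shapiro(model.resid)
print(f"Shapiro-Wilk on residuals: W = {w_stat:.3f}, p = {p_value:.3f}")

# Variance inflation factors for the predictors (the constant column is skipped).
for i, name in enumerate(X.columns):
    if name != "const":
        print(f"VIF for {name}: {variance_inflation_factor(X.values, i):.2f}")
```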

Bayesian and frequentist statistical frameworks: Most current clinical research reporting is based on the frequentist approach, with hypothesis testing, p values, and confidence intervals. The frequentist approach assumes that the acquired data are random, obtained by random sampling or randomized experiments, and subject to random error; the distribution of the data (its point estimate and confidence interval) is used to infer a true, fixed parameter in the real population. The major conceptual difference in Bayesian statistics is that the parameter (i.e., the studied variable in the population) is treated as random, while the acquired data are treated as real (true or fixed); the Bayesian approach therefore provides a probability interval for the parameter. The parameter is random in the sense that it can vary and be affected by prior beliefs, experience, or evidence of plausibility. In the Bayesian approach, this prior belief or available knowledge is quantified into a probability distribution and combined with the acquired data to obtain the result (i.e., the posterior distribution). This uses Bayes’ theorem to “turn around” conditional probabilities.
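The contrast can be made concrete with a response-rate example: the sketch below computes a frequentist confidence interval and a Bayesian credible interval for the same simulated proportion, using a conjugate Beta prior so that the posterior has a closed form. The counts and the Beta(2, 2) prior are invented for illustration.

```python
# Sketch: frequentist confidence interval vs Bayesian credible interval for a response rate.
# Counts and the prior are invented; the Beta-Binomial model is used because the posterior is
# available in closed form (prior Beta(a, b) + data -> posterior Beta(a + successes, b + failures)).
from scipy import stats
from statsmodels.stats.proportion import proportion_confint

successes, n = 18, 60                        # hypothetical responders out of 60 patients

# Frequentist: the parameter is fixed; the interval varies over repeated samples.
ci_low, ci_high = proportion_confint(successes, n, alpha=0.05, method="wilson")

# Bayesian: the parameter gets a probability distribution. A weakly informative Beta(2, 2)
# prior is combined with the data to give the posterior.
prior_a, prior_b = 2, 2
posterior = stats.beta(prior_a + successes, prior_b + n - successes)
cred_low, cred_high = posterior.ppf([0.025, 0.975])

print(f"Frequentist 95% CI:             {ci_low:.3f} to {ci_high:.3f}")
print(f"Bayesian 95% credible interval: {cred_low:.3f} to {cred_high:.3f} "
      f"(posterior mean {posterior.mean():.3f})")
```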

The goal of research reporting is to present findings succinctly and in a timely manner via conference proceedings or journal publication. Concise and explicit language, with all the details necessary to enable replication and judgment of the study’s applicability, is the guiding principle in reporting clinical studies.

Writing for Reporting

Medical writing is very much a technical chore that accommodates little artistic expression. Research reporting in medicine and the health sciences emphasizes clear and standardized reporting, eschewing the adjectives and adverbs used extensively in popular literature. Regularly reviewing published journal articles can familiarize authors with proper reporting styles and help enhance their writing skills. Authors should use standard, concise, and appropriate rhetoric for the intended audience, which includes journal reviewers, editors, and referees; however, judgments about proper language can be somewhat subjective. While each publication may have its own submission requirements, the technical requirements for formatting an article are usually available in the author or submission guidelines provided by the target journal.

Research reports for publication often contain a title, abstract, introduction, methods, results, discussion, and conclusions, and authors may want to write each section in sequence. However, best practice is to write the abstract and title last. When writing one section of the report, ideas that pertain to other sections often come to mind, so careful note-taking is encouraged. One effective approach is to organize and write the results section first, followed by the discussion and conclusions; once these are drafted, write the introduction, the abstract, and the title of the report. Regardless of the sequence of writing, the author should begin with a clear and relevant research question to guide the statistical analyses, the interpretation of the results, and the discussion. The study findings can motivate the author through the writing process, and the conclusions can help the author draft a focused introduction.

Writing for Publication

Specific recommendations on effective medical writing and table generation are available [32]. One such resource is Effective Medical Writing: The Write Way to Get Published, an updated collection of medical writing articles previously published in the Singapore Medical Journal [33]. The British Medical Journal’s Statistics Notes series also elucidates common and important statistical concepts and usages in clinical studies. Writing guides are also available from individual professional societies, journals, and publishers, such as the Chest (American College of Chest Physicians) medical writing tips, the PLoS reporting guidelines collection, Springer’s Journal Author Academy, and SAGE’s Research Methods [34-37]. Standardized research reporting guidelines often come in the form of checklists and flow diagrams; Table 6 presents a list of reporting guidelines. A full compilation of these guidelines is available at the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network website [38], which aims to improve the reliability and value of the medical literature by promoting transparent and accurate reporting of research studies. Publication of the trial protocol in a publicly available database is almost compulsory for publication of the full report in many journals.

Table 6. Standardized reporting guidelines and checklists.

  • CONSORT (CONsolidated Standards Of Reporting Trials): a 25-item checklist for reporting randomized controlled trials. Extensions to the CONSORT statement cover variations in standard trial methodology, such as different design aspects (e.g., cluster, pragmatic, non-inferiority, and equivalence trials), interventions (e.g., herbals), and data (e.g., harms), as well as an extension for writing abstracts.
  • SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials): a 33-item checklist for reporting protocols of randomized controlled trials.
  • COREQ (COnsolidated criteria for REporting Qualitative research): a 32-item checklist for reporting qualitative research based on interviews and focus groups.
  • STARD (STAndards for the Reporting of Diagnostic accuracy studies): a 25-item checklist for reporting diagnostic accuracy studies.
  • PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses): a 27-item checklist for reporting systematic reviews.
  • PRISMA-P (Preferred Reporting Items for Systematic reviews and Meta-Analyses Protocols): a 17-item checklist for reporting systematic review and meta-analysis protocols.
  • MOOSE (Meta-analysis Of Observational Studies in Epidemiology): a 35-item checklist for reporting meta-analyses of observational studies.
  • STROBE (STrengthening the Reporting of OBservational studies in Epidemiology): for reporting observational studies in epidemiology, with checklists for cohort, case-control, and cross-sectional studies (combined and individual versions). Extensions of the STROBE statement include STROME-ID (STrengthening the Reporting Of Molecular Epidemiology for Infectious Diseases), a 42-item checklist, and STREGA (STrengthening the REporting of Genetic Associations), a 22-item checklist for reporting gene-disease association studies.
  • CHEERS (Consolidated Health Economic Evaluation Reporting Standards): a 24-item checklist for reporting health economic evaluations.

Graphics and Tables

Graphics and tables should emphasize the salient features of the underlying data and should coherently summarize large quantities of information. Although graphics provide a break from dense prose, these illustrations must be scientifically informative, not decorative. Titles for graphics and tables should be clear and informative and should state the sample size; use minimal font weight and formatting, and only to distinguish headings from data entries or to highlight certain results. Provide a consistent number of decimal places for numerical results, and no more than four for P values. Most journals prefer cell-delineated tables created using the table function of word processing or spreadsheet programs. Some journals have specific table formatting requirements, such as the absence or presence of intermediate horizontal lines between cells.
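A small, hedged example of the decimal-place advice: the snippet below formats an invented results table so that estimates carry a consistent number of decimals and P values are shown to no more than four.

```python
# Sketch: consistent decimal places for estimates and at most four for P values.
# The results themselves are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "variable": ["age", "smoking", "bmi"],
    "odds_ratio": [1.0312, 2.456789, 1.2],
    "p_value": [0.0412356, 0.00001234, 0.52],
})

results["odds_ratio"] = results["odds_ratio"].map(lambda x: f"{x:.2f}")
results["p_value"] = results["p_value"].map(lambda p: "<0.0001" if p < 0.0001 else f"{p:.4f}")

print(results.to_string(index=False))
```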

Authorship

Decisions about authorship are both sensitive and important and should be made at an early stage by the study’s stakeholders. Guidelines and journals’ instructions to authors abound with authorship qualifications. The authorship guideline of the International Committee of Medical Journal Editors is widely known and provides a standard used by many medical and clinical journals [39]. Generally, authors are those who have made major contributions to the design, conduct, and analysis of the study and who have provided critical readings of the manuscript (if not involved directly in writing it).

Picking a target journal for submission

Once a report has been written and revised, the authors should select a relevant target journal for submission. Authors should avoid predatory journals, which do not aim to advance science and disseminate quality research but instead focus on commercial gain in medical and clinical publishing. Two useful resources during journal selection are Think-Check-Submit and the now-defunct Beall's List of Predatory Publishers and Journals (archived and maintained by an anonymous third party) [40, 41]. Alternatively, reputable journal indexes such as the Thomson Reuters Journal Citation Reports, SCOPUS, MEDLINE, PubMed, EMBASE, and EBSCO Publishing's electronic databases are good places to start the search for an appropriate target journal. Authors should review each journal's name, aims and scope, and recently published articles to determine the kind of research it accepts for publication. Open-access journals almost always charge article publication fees, while subscription-based journals tend to publish without author fees and instead rely on subscription or access fees to the full text of published articles.

Conclusions

Conducting valid clinical research requires consideration of the theoretical design, the data collection design, and the statistical analysis design. Proper implementation of the study design and quality control during data collection ensure high-quality data analysis and can mitigate bias and confounding during statistical analysis and data interpretation. Clear, effective study reporting facilitates dissemination, appreciation, and adoption, and allows researchers to effect real-world change in clinical practices and care models. Neutral findings or the absence of findings in a clinical study are as important as positive or negative findings. Valid studies, even when they report an absence of the expected results, still inform the scientific community about the nature of a certain treatment or intervention, and this contributes to future research, systematic reviews, and meta-analyses. Reporting a study adequately and comprehensively is important for the accuracy, transparency, and reproducibility of the scientific work, as well as for informing readers.

Acknowledgments

The author would like to thank Universiti Putra Malaysia and the Ministry of Higher Education, Malaysia for their support in sponsoring the Ph.D. study and living allowances for Boon-How Chew.


The materials presented in this paper are being organized by the author into a book.

IMAGES

  1. 15 Experimental Design Examples (2024)

    experimental design in medical research

  2. Study designs in biomedical research: an introduction to the different

    experimental design in medical research

  3. Experimental Design Rubric

    experimental design in medical research

  4. Experimental Study Design: Types, Methods, Advantages

    experimental design in medical research

  5. What Is An Experimental Experiment

    experimental design in medical research

  6. Study Design

    experimental design in medical research

VIDEO

  1. pre -experimental research design( Experemental Research design)

  2. Needs of Experimental Design

  3. Day 1: Design of Experiments in Pharmaceutical Research & Development A Primer for Academia

  4. Motivation for MAMS trials by Mahesh Parmar

  5. Types of Experimental Research Design (MPC-005)

  6. Medicinal Chemistry 1

COMMENTS

  1. Clinical research study designs: The essentials

    Experimental study designs can be divided into 3 broad categories: clinical trial, community trial, field trial. The specifics of each study design are explained below ... The ethics of placebo‐controlled studies is complex and remains a debate in the medical research community. According to the Declaration of Helsinki on the use of placebo ...

  2. Study designs in biomedical research: an introduction to the different

    We may approach this study by 2 longitudinal designs: Prospective: we follow the individuals in the future to know who will develop the disease. Retrospective: we look to the past to know who developed the disease (e.g. using medical records) This design is the strongest among the observational studies. For example - to find out the relative ...

  3. Study Designs in Medicine

    This study can help authors understand study designs in medicine. Scientific studies can be classified as "Basic Studies", "Observational Studies", "Experimental (Interventional) Studies", "Economic Evaluations" and "Meta-Analysis - Systematic Review", as shown in Figure 1. FIG.

  4. Guide to Experimental Design

    In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment. In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same ...

  5. Study designs: Part 1

    The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on "study designs," we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

  6. Experimental Design

    According to Campbell and Stanley , there are three basic types of true experimental designs: (1) pretest-posttest control group design, (2) Solomon four-group design, and (3) posttest-only control group design. The pretest-posttest control group design is the most widely used design in medical, social, educational, and psychological research ...

  7. PDF Study designs in medical research

    Study design is the procedure under which a study is carried out Study design is the procedure under which a study is. Two main categories. •Observation: •Identify subjects, then. •Observe and record characteristics. •Experiment. •Identify subjects, •Place in common context, •Intervene, then.

  8. Design of Experimental Studies in Biomedical Sciences

    Experimental study is a multidisciplinary type of research that helps to determine different causation. An appropriate study design based on the purpose of experiment causes more reliable results. Pilot studies can provide lots of information about the feasibility of the study.

  9. Experimental Study Design

    This chapter discusses experimental study design. The research subjects in experimental studies can be randomly allocated to different treatment groups, and ideally, the conditions other than the treatment of interest between groups are controlled to examine a causal effect of the treatment. Applied to different biomedical research fields ...

  10. Chapter 4. Experimental Study Designs

    Experimental study designs are the primary method for testing the effectiveness of new therapies and other interventions, including innovative drugs. By the 1930s, the pharmaceutical industry had adopted experimental methods and other research designs to develop and screen new compounds, improve production outputs, and test drugs for ...

  11. Experimental Design

    Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design: Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new ...

  12. Types of studies and research design

    Types of study design. Medical research is classified into primary and secondary research. Clinical/experimental studies are performed in primary research, whereas secondary research consolidates available studies as reviews, systematic reviews and meta-analyses. Three main areas in primary research are basic medical research, clinical research ...

  13. Research study designs: Experimental and quasi-experimental

    Research study designs: Experimental and quasi-experimental. The first article in this series discussed developing an area of general interest and generating a proposed research question or hypothesis. The second article discussed reviewing the relevant body of literature on the subject and confirming that the research question is an ...

  14. Research Study Design

    Medical research studies have a number of possible designs. A strong research project closely ties the research questions/hypotheses to the methodology to be used, the variables to be measured or manipulated, and the planned analysis of collected data. ... An example of an experimental design would be randomly assigning patients with congestive ...

  15. What is a Research Design? Definition, Types, Methods and Examples

    A research design is defined as the overall plan or structure that guides the process of conducting research. ... Experimental Research Design: Mastering Controlled Trials. Delve into the heart of experimentation with Randomized Controlled Trials (RCTs). By randomizing participants ...

  16. Types of Experimental Research Designs in Biomedical Research

    Experimental designs are fundamental to biomedical research. They are used to investigate the effectiveness of interventions, the effects of treatments, and the relationships between variables. Appropriate choice of experimental design is critical for a study to produce valid and reliable results. This article gives an overview of the different types of experimental designs used in biomedical ...

  17. Study design in medical research: part 2 of a series on the ...

    Background: The scientific value and informativeness of a medical study are determined to a major extent by the study design. Errors in study design cannot be corrected afterwards. Various aspects of study design are discussed in this article. Methods: Six essential considerations in the planning and evaluation of medical research studies are presented and discussed in the light of selected ...

  18. Experimental Research Designs: Types, Examples & Advantages

    There are three types of experimental research designs: pre-experimental, true experimental, and quasi-experimental. In quasi-experimental research, assignment to the control group is non-random, unlike in a true experimental design, where assignment is randomized.

  19. How Common Are Experimental Designs in Medical Education

    Experts in education research note that experimental designs are largely incompatible with educational studies due to various contextual, legal, and ethical issues. Purpose: We sought to investigate the frequency with which experimental designs have been utilized in recent medical education dissertations and theses. Methods: A bibliometric analysis of dissertations and theses completed in the ...

  20. Study/Experimental/Research Design: Much More Than Statistics

    A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results. Study, experimental, or research design is the backbone of good research.

  21. Reflections on experimental research in medical education

    As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders.

  22. WMA Declaration of Helsinki

    The design and performance of each research study involving human subjects must be clearly described and justified in a research protocol. The protocol should contain a statement of the ethical considerations involved and should indicate how the principles in this Declaration have been addressed. ... In medical research involving human subjects ...

  23. Clinical research study designs: The essentials

    Experimental study designs can be divided into three broad categories: clinical trials, community trials, and field trials. The specifics of each study design are explained below ... The ethics of placebo-controlled studies is complex and remains a subject of debate in the medical research community. According to the Declaration of Helsinki on the use of placebo ...

  24. Study Design in Medical Research

    Medical research studies can be split into five phases: planning, performance, documentation, analysis, and publication (1, 2). Aside from financial, organizational, logistical and personnel questions, scientific study design is the most important aspect of study planning. The significance of study design for subsequent quality, the ...

  25. Science and the scientific method: Definitions and examples

    Science is a systematic and logical approach to discovering how things in the universe work. Scientists use the scientific method to make observations, form hypotheses and gather evidence in an ...

  26. Assessing rates and predictors of cannabis-associated ...

    The authors synthesize data from previous literature on observational, experimental and medicinal cannabis research to assess rates and predictors of cannabis-associated psychotic symptoms.

  27. Planning and Conducting Clinical Research: The Whole Process

    This article will review the proper methods to conduct medical research from the planning stage to submission for publication ... (i.e., the control group). Thus, the widely known RCT is classified as an experimental design in data collection. An experimental study design without randomization is referred to as a quasi-experimental study ...